Started by user OpenShift CI Robot
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace
[WS-CLEANUP] Deleting project workspace...
[workspace] $ /bin/bash /tmp/jenkins6793972318574644754.sh
########## STARTING STAGE: INSTALL THE ORIGIN-CI-TOOL ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]]
++ readlink /var/lib/jenkins/origin-ci-tool/latest
+ latest=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
+ touch /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
+ cp /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin/activate /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate
+ cat
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool
+ oct configure ansible-client verbosity 2
Option verbosity updated to be 2.
+ oct configure aws-client keypair_name libra
Option keypair_name updated to be libra.
+ oct configure aws-client private_key_path /var/lib/jenkins/.ssh/devenv.pem
Option private_key_path updated to be /var/lib/jenkins/.ssh/devenv.pem.
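The trace above is the job's bootstrap pattern: pick up the pinned origin-ci-tool virtualenv, copy its activate script into the workspace, source it, and configure the oct client that the later provisioning stage uses. A minimal bash sketch of that pattern, assuming a comparable pinned-virtualenv layout; the WORKSPACE variable, the explicit OCT_CONFIG_HOME export, and the error handling are illustrative additions, not part of this job:

  #!/bin/bash
  # Sketch of the bootstrap shown in the trace above; everything except the
  # oct configure values is a placeholder for illustration.
  set -o errexit -o nounset -o pipefail

  WORKSPACE="${WORKSPACE:-$PWD}"                               # hypothetical workspace root
  latest="$(readlink /var/lib/jenkins/origin-ci-tool/latest)"  # pinned virtualenv, as in the log
  cp "${latest}/bin/activate" "${WORKSPACE}/activate"
  # The job extends the copied activate (the bare `cat` in the trace) so that sourcing it
  # also exports OCT_CONFIG_HOME; this sketch just sets it explicitly instead.
  source "${WORKSPACE}/activate"
  export OCT_CONFIG_HOME="${WORKSPACE}/.config"

  mkdir -p "${OCT_CONFIG_HOME}"
  rm -rf "${OCT_CONFIG_HOME}/origin-ci-tool"

  # Values copied from the log; the provisioning stage reads them back later.
  oct configure ansible-client verbosity 2
  oct configure aws-client keypair_name libra
  oct configure aws-client private_key_path /var/lib/jenkins/.ssh/devenv.pem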
+ set +o xtrace ########## FINISHED STAGE: SUCCESS: INSTALL THE ORIGIN-CI-TOOL [00h 00m 02s] ########## [workspace] $ /bin/bash /tmp/jenkins842940205603491374.sh ########## STARTING STAGE: PROVISION CLOUD RESOURCES ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config + oct provision remote all-in-one --os rhel --stage base --provider aws --discrete-ssh-config --name test_pull_request_origin_extended_networking_24 PLAYBOOK: aws-up.yml *********************************************************** 2 plays in /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/aws-up.yml PLAY [ensure we have the parameters necessary to bring up the AWS EC2 instance] *** TASK [ensure all required variables are set] *********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/aws-up.yml:9 skipping: [localhost] => (item=origin_ci_inventory_dir) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.869042", "item": "origin_ci_inventory_dir", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_keypair_name) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.872833", "item": "origin_ci_aws_keypair_name", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_private_key_path) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.876024", "item": "origin_ci_aws_private_key_path", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_region) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.880757", "item": "origin_ci_aws_region", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_ami_os) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.883978", "item": "origin_ci_aws_ami_os", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_ami_stage) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.888600", "item": "origin_ci_aws_ami_stage", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_instance_name) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.891934", "item": "origin_ci_aws_instance_name", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_master_instance_type) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.895403", "item": 
"origin_ci_aws_master_instance_type", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_identifying_tag_key) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.899956", "item": "origin_ci_aws_identifying_tag_key", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_hostname) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.904602", "item": "origin_ci_aws_hostname", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_ssh_config_strategy) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.907846", "item": "origin_ci_ssh_config_strategy", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=openshift_schedulable) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.911003", "item": "openshift_schedulable", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=openshift_node_labels) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.914579", "item": "openshift_node_labels", "skip_reason": "Conditional check failed", "skipped": true } TASK [ensure all required variables are set] *********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/aws-up.yml:28 skipping: [localhost] => (item=origin_ci_aws_master_subnet) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.962272", "item": "origin_ci_aws_master_subnet", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_etcd_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.967526", "item": "origin_ci_aws_etcd_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_node_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.973304", "item": "origin_ci_aws_node_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_master_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.980074", "item": "origin_ci_aws_master_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_master_external_elb_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.986779", "item": "origin_ci_aws_master_external_elb_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_master_internal_elb_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.992324", "item": "origin_ci_aws_master_internal_elb_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_router_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:01.996991", "item": "origin_ci_aws_router_security_group", "skip_reason": "Conditional check failed", "skipped": true } skipping: [localhost] => (item=origin_ci_aws_router_elb_security_group) => { "changed": false, "generated_timestamp": "2018-04-05 15:23:02.004185", "item": "origin_ci_aws_router_elb_security_group", "skip_reason": "Conditional check failed", "skipped": true 
} PLAY [provision an AWS EC2 instance] ******************************************* TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [inventory : initialize the inventory directory] ************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/inventory/tasks/main.yml:2 ok: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:02.799250", "gid": 995, "group": "jenkins", "mode": "0755", "owner": "jenkins", "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 997 } TASK [inventory : add the nested group mapping] ******************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/inventory/tasks/main.yml:7 changed: [localhost] => { "changed": true, "checksum": "18aaee00994df38cc3a63b635893175235331a9c", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/nested_group_mappings", "generated_timestamp": "2018-04-05 15:23:03.309400", "gid": 995, "group": "jenkins", "md5sum": "b30c3226ea63efa3ff9c5e346c14a16e", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 93, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956183.06-160154153448980/source", "state": "file", "uid": 997 } TASK [inventory : initialize the OSEv3 group variables directory] ************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/inventory/tasks/main.yml:12 changed: [localhost] => { "changed": true, "generated_timestamp": "2018-04-05 15:23:03.506294", "gid": 995, "group": "jenkins", "mode": "0755", "owner": "jenkins", "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/group_vars/OSEv3", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 997 } TASK [inventory : initialize the host variables directory] ********************* task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/inventory/tasks/main.yml:17 changed: [localhost] => { "changed": true, "generated_timestamp": "2018-04-05 15:23:03.700004", "gid": 995, "group": "jenkins", "mode": "0755", "owner": "jenkins", "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 997 } TASK [inventory : add the default Origin installation configuration] *********** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/inventory/tasks/main.yml:22 changed: [localhost] => { "changed": true, "checksum": "4c06ba508f055c20f13426e8587342e8765a7b66", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/group_vars/OSEv3/general.yml", "generated_timestamp": "2018-04-05 15:23:04.055291", "gid": 995, 
"group": "jenkins", "md5sum": "8aec71c75f7d512b278ae7c6f2959b12", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 331, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956183.91-35366194075433/source", "state": "file", "uid": 997 } TASK [aws-up : determine if we are inside AWS EC2] ***************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:2 changed: [localhost] => { "changed": true, "cmd": [ "curl", "-s", "http://instance-data.ec2.internal" ], "delta": "0:00:00.011763", "end": "2018-04-05 15:23:04.294900", "failed": false, "failed_when_result": false, "generated_timestamp": "2018-04-05 15:23:04.311618", "rc": 0, "start": "2018-04-05 15:23:04.283137", "stderr": [], "stdout": [ "1.0", "2007-01-19", "2007-03-01", "2007-08-29", "2007-10-10", "2007-12-15", "2008-02-01", "2008-09-01", "2009-04-04", "2011-01-01", "2011-05-01", "2012-01-12", "2014-02-25", "2014-11-05", "2015-10-20", "2016-04-19", "2016-06-30", "2016-09-02", "latest" ], "warnings": [ "Consider using get_url or uri module rather than running curl" ] } [WARNING]: Consider using get_url or uri module rather than running curl TASK [aws-up : configure EC2 parameters for inventory when controlling from inside EC2] *** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:7 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_destination_variable": "private_dns_name", "origin_ci_aws_host_address_variable": "private_ip", "origin_ci_aws_vpc_destination_variable": "private_ip_address" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:04.357331" } TASK [aws-up : determine where to put the AWS API cache] *********************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:14 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_cache_dir": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ec2_cache" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:04.417751" } TASK [aws-up : ensure we have a place to put the AWS API cache] **************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:18 changed: [localhost] => { "changed": true, "generated_timestamp": "2018-04-05 15:23:04.591833", "gid": 995, "group": "jenkins", "mode": "0755", "owner": "jenkins", "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ec2_cache", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 997 } TASK [aws-up : place the EC2 dynamic inventory script] ************************* task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:23 changed: [localhost] => { "changed": true, "checksum": "625b8af723189db3b96ba0026d0f997a0025bc47", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/ec2.py", "generated_timestamp": "2018-04-05 
15:23:04.947515", "gid": 995, "group": "jenkins", "md5sum": "cac06c14065dac74904232b89d4ba24c", "mode": "0755", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 63725, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956184.79-73721075283520/source", "state": "file", "uid": 997 } TASK [aws-up : place the EC2 dynamic inventory configuration] ****************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:29 changed: [localhost] => { "changed": true, "checksum": "1a6960808fe9e09695e4b0fa9f137a7180db732e", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/ec2.ini", "generated_timestamp": "2018-04-05 15:23:05.304457", "gid": 995, "group": "jenkins", "md5sum": "e95202d70d653def990a7df7065f4b88", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 410, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956185.0-193127629695863/source", "state": "file", "uid": 997 } TASK [aws-up : place the EC2 tag to group mappings] **************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:34 changed: [localhost] => { "changed": true, "checksum": "b4205a33dc73f62bd4f77f35d045cf8e09ae62b0", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/tag_to_group_mappings", "generated_timestamp": "2018-04-05 15:23:05.626221", "gid": 995, "group": "jenkins", "md5sum": "bc3a567a1b6f342e1005182efc1b66be", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 287, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956185.48-182769546833685/source", "state": "file", "uid": 997 } TASK [aws-up : list available AMIs] ******************************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:40 ok: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:12.915889", "results": [ { "ami_id": "ami-091038c2724a8834e", "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 75, "snapshot_id": "snap-09042843a5714b710", "volume_type": "gp2" }, "/dev/sdb": { "delete_on_termination": true, "encrypted": false, "size": 50, "snapshot_id": "snap-03b526946b77517ea", "volume_type": "gp2" } }, "creationDate": "2018-03-05T18:40:09.000Z", "description": "OpenShift Origin development AMI on rhel at the base stage.", "hypervisor": "xen", "is_public": false, "location": "531415883065/ami_build_origin_int_rhel_base_611", "name": "ami_build_origin_int_rhel_base_611", "owner_id": "531415883065", "platform": null, "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": { "Name": "ami_build_origin_int_rhel_base_611", "image_stage": "base", "operating_system": "rhel", "ready": "yes" }, "virtualization_type": "hvm" }, { "ami_id": "ami-069c0ca6cc091e8fa", "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 75, "snapshot_id": "snap-0d20c69b20a8b3f3d", "volume_type": "gp2" }, "/dev/sdb": { 
"delete_on_termination": true, "encrypted": false, "size": 50, "snapshot_id": "snap-012e3422f546895da", "volume_type": "gp2" } }, "creationDate": "2018-03-08T22:39:48.000Z", "description": "OpenShift Origin development AMI on rhel at the base stage.", "hypervisor": "xen", "is_public": false, "location": "531415883065/ami_build_origin_int_rhel_base_618", "name": "ami_build_origin_int_rhel_base_618", "owner_id": "531415883065", "platform": null, "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": { "Name": "ami_build_origin_int_rhel_base_618", "image_stage": "base", "operating_system": "rhel", "ready": "yes" }, "virtualization_type": "hvm" } ] } TASK [aws-up : choose appropriate AMIs for use] ******************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:52 ok: [localhost] => (item={u'ami_id': u'ami-091038c2724a8834e', u'root_device_type': u'ebs', u'description': u'OpenShift Origin development AMI on rhel at the base stage.', u'tags': {u'ready': u'yes', u'image_stage': u'base', u'Name': u'ami_build_origin_int_rhel_base_611', u'operating_system': u'rhel'}, u'hypervisor': u'xen', u'block_device_mapping': {u'/dev/sdb': {u'encrypted': False, u'snapshot_id': u'snap-03b526946b77517ea', u'delete_on_termination': True, u'volume_type': u'gp2', u'size': 50}, u'/dev/sda1': {u'encrypted': False, u'snapshot_id': u'snap-09042843a5714b710', u'delete_on_termination': True, u'volume_type': u'gp2', u'size': 75}}, u'architecture': u'x86_64', u'owner_id': u'531415883065', u'platform': None, u'state': u'available', u'location': u'531415883065/ami_build_origin_int_rhel_base_611', u'is_public': False, u'creationDate': u'2018-03-05T18:40:09.000Z', u'root_device_name': u'/dev/sda1', u'virtualization_type': u'hvm', u'name': u'ami_build_origin_int_rhel_base_611'}) => { "ansible_facts": { "origin_ci_aws_ami_id_candidate": "ami-091038c2724a8834e" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:12.979817", "item": { "ami_id": "ami-091038c2724a8834e", "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 75, "snapshot_id": "snap-09042843a5714b710", "volume_type": "gp2" }, "/dev/sdb": { "delete_on_termination": true, "encrypted": false, "size": 50, "snapshot_id": "snap-03b526946b77517ea", "volume_type": "gp2" } }, "creationDate": "2018-03-05T18:40:09.000Z", "description": "OpenShift Origin development AMI on rhel at the base stage.", "hypervisor": "xen", "is_public": false, "location": "531415883065/ami_build_origin_int_rhel_base_611", "name": "ami_build_origin_int_rhel_base_611", "owner_id": "531415883065", "platform": null, "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": { "Name": "ami_build_origin_int_rhel_base_611", "image_stage": "base", "operating_system": "rhel", "ready": "yes" }, "virtualization_type": "hvm" } } ok: [localhost] => (item={u'ami_id': u'ami-069c0ca6cc091e8fa', u'root_device_type': u'ebs', u'description': u'OpenShift Origin development AMI on rhel at the base stage.', u'tags': {u'ready': u'yes', u'image_stage': u'base', u'Name': u'ami_build_origin_int_rhel_base_618', u'operating_system': u'rhel'}, u'hypervisor': u'xen', u'block_device_mapping': {u'/dev/sdb': {u'encrypted': False, u'snapshot_id': u'snap-012e3422f546895da', u'delete_on_termination': True, u'volume_type': u'gp2', u'size': 50}, 
u'/dev/sda1': {u'encrypted': False, u'snapshot_id': u'snap-0d20c69b20a8b3f3d', u'delete_on_termination': True, u'volume_type': u'gp2', u'size': 75}}, u'architecture': u'x86_64', u'owner_id': u'531415883065', u'platform': None, u'state': u'available', u'location': u'531415883065/ami_build_origin_int_rhel_base_618', u'is_public': False, u'creationDate': u'2018-03-08T22:39:48.000Z', u'root_device_name': u'/dev/sda1', u'virtualization_type': u'hvm', u'name': u'ami_build_origin_int_rhel_base_618'}) => { "ansible_facts": { "origin_ci_aws_ami_id_candidate": "ami-069c0ca6cc091e8fa" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:12.989279", "item": { "ami_id": "ami-069c0ca6cc091e8fa", "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 75, "snapshot_id": "snap-0d20c69b20a8b3f3d", "volume_type": "gp2" }, "/dev/sdb": { "delete_on_termination": true, "encrypted": false, "size": 50, "snapshot_id": "snap-012e3422f546895da", "volume_type": "gp2" } }, "creationDate": "2018-03-08T22:39:48.000Z", "description": "OpenShift Origin development AMI on rhel at the base stage.", "hypervisor": "xen", "is_public": false, "location": "531415883065/ami_build_origin_int_rhel_base_618", "name": "ami_build_origin_int_rhel_base_618", "owner_id": "531415883065", "platform": null, "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": { "Name": "ami_build_origin_int_rhel_base_618", "image_stage": "base", "operating_system": "rhel", "ready": "yes" }, "virtualization_type": "hvm" } } TASK [aws-up : determine which AMI to use] ************************************* task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:58 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_ami_id": "ami-069c0ca6cc091e8fa" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:13.038101" } TASK [aws-up : determine which subnets are available] ************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:63 ok: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:13.581588", "subnets": [ { "availability_zone": "us-east-1d", "available_ip_address_count": 3809, "cidr_block": "172.18.0.0/20", "default_for_az": "false", "id": "subnet-cf57c596", "map_public_ip_on_launch": "true", "state": "available", "tags": { "Name": "devenv-subnet-1", "origin_ci_aws_cluster_component": "master_subnet" }, "vpc_id": "vpc-69705d0c" }, { "availability_zone": "us-east-1c", "available_ip_address_count": 4081, "cidr_block": "172.18.16.0/20", "default_for_az": "false", "id": "subnet-8bdb5ac2", "map_public_ip_on_launch": "true", "state": "available", "tags": { "Name": "devenv-subnet-2", "origin_ci_aws_cluster_component": "master_subnet" }, "vpc_id": "vpc-69705d0c" } ] } TASK [aws-up : determine which subnets to use for the master] ****************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:70 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_master_subnet_ids": [ "subnet-cf57c596", "subnet-8bdb5ac2" ] }, "changed": false, "generated_timestamp": "2018-04-05 15:23:13.636281" } TASK [aws-up : determine which security groups are 
available] ****************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:75 ok: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:14.382768", "security_groups": [ { "description": "default VPC security group", "group_id": "sg-7e73221a", "group_name": "default", "ip_permissions": [ { "ip_protocol": "-1", "ip_ranges": [], "ipv6_ranges": [], "prefix_list_ids": [], "user_id_group_pairs": [ { "group_id": "sg-7e73221a", "user_id": "531415883065" } ] }, { "from_port": 80, "ip_protocol": "tcp", "ip_ranges": [ { "cidr_ip": "54.241.19.245/32" }, { "cidr_ip": "97.65.119.184/29" }, { "cidr_ip": "107.20.219.35/32" }, { "cidr_ip": "108.166.48.153/32" }, { "cidr_ip": "212.199.177.64/27" }, { "cidr_ip": "212.72.208.162/32" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": 443, "user_id_group_pairs": [] }, { "from_port": 53, "ip_protocol": "tcp", "ip_ranges": [ { "cidr_ip": "119.254.120.64/26" }, { "cidr_ip": "209.132.176.0/20" }, { "cidr_ip": "209.132.186.34/32" }, { "cidr_ip": "213.175.37.10/32" }, { "cidr_ip": "62.40.79.66/32" }, { "cidr_ip": "66.187.224.0/20" }, { "cidr_ip": "66.187.239.0/24" }, { "cidr_ip": "38.140.108.0/24" }, { "cidr_ip": "213.175.37.9/32" }, { "cidr_ip": "38.99.12.232/29" }, { "cidr_ip": "4.14.33.72/30" }, { "cidr_ip": "4.14.35.88/29" }, { "cidr_ip": "50.227.40.96/29" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": 8444, "user_id_group_pairs": [] }, { "from_port": 22, "ip_protocol": "tcp", "ip_ranges": [ { "cidr_ip": "0.0.0.0/0" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": 22, "user_id_group_pairs": [] }, { "from_port": 53, "ip_protocol": "udp", "ip_ranges": [ { "cidr_ip": "209.132.176.0/20" }, { "cidr_ip": "66.187.224.0/20" }, { "cidr_ip": "66.187.239.0/24" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": 53, "user_id_group_pairs": [] }, { "from_port": 3389, "ip_protocol": "tcp", "ip_ranges": [ { "cidr_ip": "0.0.0.0/0" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": 3389, "user_id_group_pairs": [] }, { "from_port": -1, "ip_protocol": "icmp", "ip_ranges": [ { "cidr_ip": "0.0.0.0/0" } ], "ipv6_ranges": [], "prefix_list_ids": [], "to_port": -1, "user_id_group_pairs": [] } ], "ip_permissions_egress": [ { "ip_protocol": "-1", "ip_ranges": [ { "cidr_ip": "0.0.0.0/0" } ], "ipv6_ranges": [], "prefix_list_ids": [], "user_id_group_pairs": [] } ], "owner_id": "531415883065", "tags": { "Name": "devenv-vpc", "origin_ci_aws_cluster_component": "master_security_group" }, "vpc_id": "vpc-69705d0c" } ] } TASK [aws-up : determine which security group to use] ************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:82 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_master_security_group_ids": [ "sg-7e73221a" ] }, "changed": false, "generated_timestamp": "2018-04-05 15:23:14.431287" } TASK [aws-up : provision an AWS EC2 instance] ********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:87 changed: [localhost] => { "changed": true, "generated_timestamp": "2018-04-05 15:23:31.731330", "instance_ids": [ "i-04061ee3036408c7a" ], "instances": [ { "ami_launch_index": "0", "architecture": "x86_64", 
"block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "status": "attached", "volume_id": "vol-02b413e0f8db8a839" }, "/dev/sdb": { "delete_on_termination": true, "status": "attached", "volume_id": "vol-062890c1a389ff452" } }, "dns_name": "ec2-54-242-107-75.compute-1.amazonaws.com", "ebs_optimized": false, "groups": { "sg-7e73221a": "default" }, "hypervisor": "xen", "id": "i-04061ee3036408c7a", "image_id": "ami-069c0ca6cc091e8fa", "instance_type": "m4.xlarge", "kernel": null, "key_name": "libra", "launch_time": "2018-04-05T19:23:15.000Z", "placement": "us-east-1d", "private_dns_name": "ip-172-18-1-48.ec2.internal", "private_ip": "172.18.1.48", "public_dns_name": "ec2-54-242-107-75.compute-1.amazonaws.com", "public_ip": "54.242.107.75", "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": { "Name": "test_pull_request_origin_extended_networking_24", "openshift_etcd": "", "openshift_master": "", "openshift_node": "" }, "tenancy": "default", "virtualization_type": "hvm" } ], "tagged_instances": [] } TASK [aws-up : determine the host address] ************************************* task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:113 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_host": "172.18.1.48" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:31.779688" } TASK [aws-up : determine the default user to use for SSH] ********************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:117 skipping: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:31.821538", "skip_reason": "Conditional check failed", "skipped": true } TASK [aws-up : determine the default user to use for SSH] ********************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:122 ok: [localhost] => { "ansible_facts": { "origin_ci_aws_ssh_user": "origin" }, "changed": false, "generated_timestamp": "2018-04-05 15:23:31.880103" } TASK [aws-up : update variables for the host] ********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:127 changed: [localhost] => { "changed": true, "checksum": "473a0bde278b8621ab329157899f1012eec26369", "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.1.48.yml", "generated_timestamp": "2018-04-05 15:23:32.251713", "gid": 995, "group": "jenkins", "md5sum": "c1333f56bddd84589b906550d529533e", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 684, "src": "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1522956212.11-268378027192619/source", "state": "file", "uid": 997 } TASK [aws-up : determine where updated SSH configuration should go] ************ task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:144 ok: [localhost] => { "ansible_facts": { "origin_ci_ssh_config_files": [ 
"/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config" ] }, "changed": false, "generated_timestamp": "2018-04-05 15:23:32.302055" } TASK [aws-up : determine where updated SSH configuration should go] ************ task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:149 skipping: [localhost] => { "changed": false, "generated_timestamp": "2018-04-05 15:23:32.344951", "skip_reason": "Conditional check failed", "skipped": true } TASK [aws-up : ensure the targeted SSH configuration file exists] ************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:154 changed: [localhost] => (item=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config) => { "changed": true, "dest": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config", "generated_timestamp": "2018-04-05 15:23:32.525641", "gid": 995, "group": "jenkins", "item": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config", "mode": "0644", "owner": "jenkins", "secontext": "system_u:object_r:var_lib_t:s0", "size": 0, "state": "file", "uid": 997 } TASK [aws-up : update the SSH configuration] *********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:160 changed: [localhost] => (item=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config) => { "changed": true, "generated_timestamp": "2018-04-05 15:23:32.830729", "item": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config/origin-ci-tool/inventory/.ssh_config", "msg": "Block inserted" } TASK [aws-up : wait for SSH to be available] *********************************** task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/provision/roles/aws-up/tasks/main.yml:178 ok: [localhost] => { "changed": false, "elapsed": 35, "generated_timestamp": "2018-04-05 15:24:08.160730", "path": null, "port": 22, "search_regex": null, "state": "started" } PLAY RECAP ********************************************************************* localhost : ok=28 changed=13 unreachable=0 failed=0 + set +o xtrace ########## FINISHED STAGE: SUCCESS: PROVISION CLOUD RESOURCES [00h 01m 07s] ########## [workspace] $ /bin/bash /tmp/jenkins514167514780897200.sh ########## STARTING STAGE: FORWARD GCS CREDENTIALS TO REMOTE HOST ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ 
PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: FORWARD GCS CREDENTIALS TO REMOTE HOST [00h 00m 01s] ##########
[workspace] $ /bin/bash /tmp/jenkins4504524759820285936.sh
########## STARTING STAGE: FORWARD PARAMETERS TO THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod o+rw /etc/environment
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_extended_networking","buildid":"bbc76ae2-3906-11e8-a837-0a58ac100475","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]}}'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''buildId=bbc76ae2-3906-11e8-a837-0a58ac100475'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''BUILD_ID=bbc76ae2-3906-11e8-a837-0a58ac100475'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''REPO_OWNER=openshift'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''REPO_NAME=origin'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''PULL_BASE_REF=master'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''PULL_BASE_SHA=6512a2b31cc35ee6b5429d69894efecf162af0dd'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''PULL_REFS=master:6512a2b31cc35ee6b5429d69894efecf162af0dd,19233:73ce10f9005bc045f51b6adcdc5ad8622f060eeb'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''PULL_NUMBER=19233'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''PULL_PULL_SHA=73ce10f9005bc045f51b6adcdc5ad8622f060eeb'\'' >> /etc/environment'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config
openshiftdevel 'echo '\''JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_extended_networking","buildid":"bbc76ae2-3906-11e8-a837-0a58ac100475","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]}}'\'' >> /etc/environment' + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''BUILD_NUMBER=24'\'' >> /etc/environment' + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''CLONEREFS_ARGS='\'' >> /etc/environment' + set +o xtrace ########## FINISHED STAGE: SUCCESS: FORWARD PARAMETERS TO THE REMOTE HOST [00h 00m 05s] ########## [workspace] $ /bin/bash /tmp/jenkins2618767434532009676.sh ########## STARTING STAGE: SYNC REPOSITORIES ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.H0bZttBWCH + cat + chmod +x /tmp/tmp.H0bZttBWCH + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.H0bZttBWCH openshiftdevel:/tmp/tmp.H0bZttBWCH + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 14400 /tmp/tmp.H0bZttBWCH"' + cd /home/origin ++ jq --compact-output .buildid Using BUILD_NUMBER + [[ "bbc76ae2-3906-11e8-a837-0a58ac100475" =~ ^\[0-9]\+$ ]] + echo 'Using BUILD_NUMBER' ++ jq --compact-output '.buildid |= "24"' + JOB_SPEC='{"type":"presubmit","job":"test_pull_request_origin_extended_networking","buildid":"24","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]}}' + for image in ''\''registry.svc.ci.openshift.org/ci/clonerefs:latest'\''' ''\''registry.svc.ci.openshift.org/ci/initupload:latest'\''' + (( i = 0 )) + (( i < 5 )) + docker pull registry.svc.ci.openshift.org/ci/clonerefs:latest Trying to pull repository registry.svc.ci.openshift.org/ci/clonerefs ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/clonerefs 6d987f6f4279: Pulling fs layer 4cccebe844ee: Pulling fs layer 91f69e3a333d: Pulling fs layer 310bf9de3328: Pulling fs layer 310bf9de3328: Waiting 4cccebe844ee: Verifying Checksum 4cccebe844ee: Download complete 91f69e3a333d: Verifying Checksum 91f69e3a333d: Download complete 310bf9de3328: Verifying Checksum 310bf9de3328: Download complete 6d987f6f4279: Verifying Checksum 6d987f6f4279: Download complete 6d987f6f4279: Pull complete 4cccebe844ee: Pull complete 91f69e3a333d: Pull complete 310bf9de3328: Pull complete Digest: sha256:4cbcb14dd1a77b8d4f810b84479a0b27781c7c0bdfd20c025efe7ead577f4775 Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/clonerefs:latest + break + for image in ''\''registry.svc.ci.openshift.org/ci/clonerefs:latest'\''' ''\''registry.svc.ci.openshift.org/ci/initupload:latest'\''' + (( i = 0 )) + (( i < 5 )) + docker pull registry.svc.ci.openshift.org/ci/initupload:latest Trying to pull repository registry.svc.ci.openshift.org/ci/initupload ... latest: Pulling from registry.svc.ci.openshift.org/ci/initupload 6d987f6f4279: Already exists 4cccebe844ee: Already exists 23e4017c0ba8: Pulling fs layer 23e4017c0ba8: Download complete 23e4017c0ba8: Pull complete Digest: sha256:d94fd5317f379ab83aa010ba35fe5f24b60814bd296b2a2e72bb825f8b9951b0 Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/initupload:latest + break + clonerefs_args= + docker run -e 'JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_extended_networking","buildid":"24","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/clonerefs:latest --src-root=/data --log=/data/clone.json {"component":"clonerefs","level":"info","msg":"Cloning refs","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]},"time":"2018-04-05T19:25:18Z"} {"command":"os.MkdirAll(/data/src/github.com/openshift/origin, 0755)","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"","time":"2018-04-05T19:25:18Z"} {"command":"git init","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"Reinitialized existing shared Git repository in /data/src/github.com/openshift/origin/.git/\n","time":"2018-04-05T19:25:18Z"} {"command":"git config user.name ci-robot","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"","time":"2018-04-05T19:25:18Z"} {"command":"git config user.email ci-robot@k8s.io","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"","time":"2018-04-05T19:25:18Z"} {"command":"git fetch https://github.com/openshift/origin.git --tags --prune","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"From https://github.com/openshift/origin\n * branch HEAD -\u003e FETCH_HEAD\n * [new tag] v3.7.2 -\u003e v3.7.2\n * [new tag] v3.8.0 -\u003e v3.8.0\n * [new tag] v3.9.0 -\u003e v3.9.0\n","time":"2018-04-05T19:25:25Z"} {"command":"git fetch https://github.com/openshift/origin.git master","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"From 
https://github.com/openshift/origin\n * branch master -\u003e FETCH_HEAD\n","time":"2018-04-05T19:25:26Z"} {"command":"git checkout 6512a2b31cc35ee6b5429d69894efecf162af0dd","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"Note: checking out '6512a2b31cc35ee6b5429d69894efecf162af0dd'.\n\nYou are in 'detached HEAD' state. You can look around, make experimental\nchanges and commit them, and you can discard any commits you make in this\nstate without impacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may\ndo so (now or later) by using -b with the checkout command again. Example:\n\n git checkout -b \u003cnew-branch-name\u003e\n\nHEAD is now at 6512a2b31c... Merge pull request #19082 from rajatchopra/net_reqs_doc\n","time":"2018-04-05T19:26:44Z"} {"command":"git branch --force master 6512a2b31cc35ee6b5429d69894efecf162af0dd","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"","time":"2018-04-05T19:26:44Z"} {"command":"git checkout master","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"Switched to branch 'master'\nYour branch is ahead of 'origin/master' by 366 commits.\n (use \"git push\" to publish your local commits)\n","time":"2018-04-05T19:26:45Z"} {"command":"git fetch https://github.com/openshift/origin.git pull/19233/head","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"From https://github.com/openshift/origin\n * branch refs/pull/19233/head -\u003e FETCH_HEAD\n","time":"2018-04-05T19:26:46Z"} {"command":"git merge 73ce10f9005bc045f51b6adcdc5ad8622f060eeb","component":"clonerefs","error":null,"level":"info","msg":"Ran clone command","output":"Merge made by the 'recursive' strategy.\n images/dind/master/openshift-generate-master-config.sh | 10 +++++-----\n 1 file changed, 5 insertions(+), 5 deletions(-)\n","time":"2018-04-05T19:26:47Z"} {"component":"clonerefs","level":"info","msg":"Finished cloning refs","time":"2018-04-05T19:26:47Z"} + docker run -e 'JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_extended_networking","buildid":"24","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"6512a2b31cc35ee6b5429d69894efecf162af0dd","pulls":[{"number":19233,"author":"danwinship","sha":"73ce10f9005bc045f51b6adcdc5ad8622f060eeb"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/initupload:latest --clone-log=/data/clone.json --dry-run=false --gcs-bucket=origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin {"component":"clonerefs","dest":"pr-logs/directory/test_pull_request_origin_extended_networking/24.txt","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/directory/test_pull_request_origin_extended_networking/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/job-spec.json","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} 
{"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/started.json","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/clone-log.txt","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/clone-records.json","level":"info","msg":"Queued for upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/clone-log.txt","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/clone-records.json","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/directory/test_pull_request_origin_extended_networking/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/directory/test_pull_request_origin_extended_networking/24.txt","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/job-spec.json","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","dest":"pr-logs/pull/19233/test_pull_request_origin_extended_networking/24/started.json","level":"info","msg":"Finished upload","time":"2018-04-05T19:26:53Z"} {"component":"clonerefs","level":"info","msg":"Finished upload to GCS","time":"2018-04-05T19:26:53Z"} + sudo chmod -R a+rwX /data + sudo chown -R origin:origin-git /data + set +o xtrace ########## FINISHED STAGE: SUCCESS: SYNC REPOSITORIES [00h 02m 44s] ########## [workspace] $ /bin/bash /tmp/jenkins5416387771798170247.sh ########## STARTING STAGE: FORWARD PARAMETERS TO THE REMOTE HOST ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod o+rw /etc/environment + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''JOB_NAME=test_pull_request_origin_extended_networking'\'' >> /etc/environment' + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''BUILD_NUMBER=24'\'' >> /etc/environment' + set +o xtrace ########## FINISHED STAGE: SUCCESS: FORWARD 
PARAMETERS TO THE REMOTE HOST [00h 00m 01s] ########## [workspace] $ /bin/bash /tmp/jenkins1488843573580619860.sh ########## STARTING STAGE: USE A RAMDISK FOR ETCD ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.GAzFAcaZeQ + cat + chmod +x /tmp/tmp.GAzFAcaZeQ + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.GAzFAcaZeQ openshiftdevel:/tmp/tmp.GAzFAcaZeQ + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.GAzFAcaZeQ"' + cd /home/origin + sudo su root + set +o xtrace ########## FINISHED STAGE: SUCCESS: USE A RAMDISK FOR ETCD [00h 00m 01s] ########## [workspace] $ /bin/bash /tmp/jenkins6913161281927365291.sh ########## STARTING STAGE: TURN OFF UNNECESSARY CENTOS PAAS SIG REPOS ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.avJfTZ8WLK + cat + chmod +x /tmp/tmp.avJfTZ8WLK + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.avJfTZ8WLK openshiftdevel:/tmp/tmp.avJfTZ8WLK + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.avJfTZ8WLK"' + cd /home/origin + sudo yum-config-manager --disable 'centos-paas-sig-openshift-origin*-rpms' Loaded plugins: amazon-id, rhui-lb ================ repo: centos-paas-sig-openshift-origin13-rpms ================= [centos-paas-sig-openshift-origin13-rpms] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin13/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = 0 enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = 
/var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms/gpgcadir gpgcakey = gpgcheck = False gpgdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms/gpgdir gpgkey = hdrdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = CentOS PaaS SIG Origin 1.3 Repository old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms pkgdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin13-rpms/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = /var/lib/yum/client-cert.pem sslclientkey = /var/lib/yum/client-key.pem sslverify = False throttle = 0 timeout = 120.0 ui_id = centos-paas-sig-openshift-origin13-rpms ui_repoid_vars = releasever, basearch username = ================ repo: centos-paas-sig-openshift-origin14-rpms ================= [centos-paas-sig-openshift-origin14-rpms] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin14/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = 0 enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms/gpgcadir gpgcakey = gpgcheck = False gpgdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms/gpgdir gpgkey = hdrdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = CentOS PaaS SIG Origin 1.4 Repository old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms pkgdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin14-rpms/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = /var/lib/yum/client-cert.pem sslclientkey = /var/lib/yum/client-key.pem sslverify = False throttle = 0 timeout = 120.0 ui_id = centos-paas-sig-openshift-origin14-rpms ui_repoid_vars = releasever, basearch username = ================ repo: centos-paas-sig-openshift-origin15-rpms ================= [centos-paas-sig-openshift-origin15-rpms] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin15/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = 
enabled = 0 enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms/gpgcadir gpgcakey = gpgcheck = False gpgdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms/gpgdir gpgkey = hdrdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = CentOS PaaS SIG Origin 1.5 Repository old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms pkgdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin15-rpms/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = /var/lib/yum/client-cert.pem sslclientkey = /var/lib/yum/client-key.pem sslverify = False throttle = 0 timeout = 120.0 ui_id = centos-paas-sig-openshift-origin15-rpms ui_repoid_vars = releasever, basearch username = ================ repo: centos-paas-sig-openshift-origin36-rpms ================= [centos-paas-sig-openshift-origin36-rpms] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin36/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = 0 enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms/gpgcadir gpgcakey = gpgcheck = False gpgdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms/gpgdir gpgkey = hdrdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = CentOS PaaS SIG Origin 3.6 Repository old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms pkgdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin36-rpms/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = /var/lib/yum/client-cert.pem sslclientkey = /var/lib/yum/client-key.pem sslverify = False throttle = 0 timeout = 120.0 ui_id = centos-paas-sig-openshift-origin36-rpms ui_repoid_vars = releasever, basearch username = ================ repo: centos-paas-sig-openshift-origin37-rpms ================= [centos-paas-sig-openshift-origin37-rpms] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin37/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms check_config_file_age = True 
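The verbose blocks above (and the 3.7 block continuing below) are yum-config-manager echoing each repository's full configuration as it flips it off; the operative change is simply `enabled = 0` on every centos-paas-sig-openshift-origin*-rpms repo. A minimal sketch of the same disable step plus a quick check of the result, using the repo id glob from the log:

# Disable all CentOS PaaS SIG Origin repos at once; yum-config-manager prints each repo's config as it does so.
sudo yum-config-manager --disable 'centos-paas-sig-openshift-origin*-rpms'

# Verify none of them remain in the enabled set.
yum repolist enabled | grep centos-paas-sig-openshift-origin || echo 'PaaS SIG Origin repos are disabled'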
compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = 0 enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms/gpgcadir gpgcakey = gpgcheck = False gpgdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms/gpgdir gpgkey = hdrdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = CentOS PaaS SIG Origin 3.7 Repository old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms pkgdir = /var/cache/yum/x86_64/7Server/centos-paas-sig-openshift-origin37-rpms/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = /var/lib/yum/client-cert.pem sslclientkey = /var/lib/yum/client-key.pem sslverify = False throttle = 0 timeout = 120.0 ui_id = centos-paas-sig-openshift-origin37-rpms ui_repoid_vars = releasever, basearch username = + [[ test_pull_request_origin_extended_networking == *update* ]] + set +o xtrace ########## FINISHED STAGE: SUCCESS: TURN OFF UNNECESSARY CENTOS PAAS SIG REPOS [00h 00m 02s] ########## [workspace] $ /bin/bash /tmp/jenkins4525515752377960711.sh ########## STARTING STAGE: ENABLE DOCKER TESTED REPO ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.cw2BXiyDhH + cat + chmod +x /tmp/tmp.cw2BXiyDhH + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.cw2BXiyDhH openshiftdevel:/tmp/tmp.cw2BXiyDhH + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.cw2BXiyDhH"' + cd /home/origin + [[ master == \m\a\s\t\e\r ]] + sudo touch /etc/yum.repos.d/dockertested.repo + sudo chmod a+rw /etc/yum.repos.d/dockertested.repo + cat + set +o xtrace ########## FINISHED STAGE: SUCCESS: ENABLE DOCKER TESTED REPO [00h 00m 01s] ########## [workspace] $ /bin/bash /tmp/jenkins2032276451148237877.sh ########## STARTING STAGE: BUILD AN ORIGIN RELEASE ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export 
VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.hoPBUKAbMg + cat + chmod +x /tmp/tmp.hoPBUKAbMg + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.hoPBUKAbMg openshiftdevel:/tmp/tmp.hoPBUKAbMg + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 7200 /tmp/tmp.hoPBUKAbMg"' + cd /data/src/github.com/openshift/origin ++ git rev-parse --abbrev-ref --symbolic-full-name HEAD + ORIGIN_TARGET_BRANCH=master + export OS_BUILD_IMAGE_ARGS= + OS_BUILD_IMAGE_ARGS= + export OS_ONLY_BUILD_PLATFORMS=linux/amd64 + OS_ONLY_BUILD_PLATFORMS=linux/amd64 + export OS_BUILD_ENV_PRESERVE=_output/local + OS_BUILD_ENV_PRESERVE=_output/local + hack/build-base-images.sh [openshift/origin-source] --> Image centos:7 was not found, pulling ... [openshift/origin-source] --> Pulled 1/2 layers, 50% complete [openshift/origin-source] --> FROM centos:7 as 0 [openshift/origin-source] --> COPY *.repo /etc/yum.repos.d/ [openshift/origin-source] --> Committing changes to openshift/origin-source:4253ab3 ... [openshift/origin-source] --> Tagged as openshift/origin-source:latest [openshift/origin-source] --> Done [openshift/origin-base] --> FROM openshift/origin-source as 0 [openshift/origin-base] --> COPY *.repo /etc/yum.repos.d/ [openshift/origin-base] --> RUN INSTALL_PKGS=" which git tar wget hostname sysvinit-tools util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof device-mapper-persistent-data ceph-common " && yum install -y centos-release-ceph-luminous && rpm -V centos-release-ceph-luminous && yum install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all && mkdir -p /var/lib/origin [openshift/origin-base] Loaded plugins: fastestmirror, ovl [openshift/origin-base] Determining fastest mirrors [openshift/origin-base] * base: mirror.math.princeton.edu [openshift/origin-base] * extras: mirror.ash.fastserv.com [openshift/origin-base] * updates: repos-va.psychz.net [openshift/origin-base] Resolving Dependencies [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package centos-release-ceph-luminous.noarch 0:1.0-1.el7.centos will be installed [openshift/origin-base] --> Processing Dependency: centos-release-storage-common for package: centos-release-ceph-luminous-1.0-1.el7.centos.noarch [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package centos-release-storage-common.noarch 0:1-2.el7.centos will be installed [openshift/origin-base] --> Finished Dependency Resolution [openshift/origin-base] Dependencies Resolved [openshift/origin-base] ================================================================================ [openshift/origin-base] Package Arch Version Repository [openshift/origin-base] Size [openshift/origin-base] ================================================================================ [openshift/origin-base] 
Installing: [openshift/origin-base] centos-release-ceph-luminous noarch 1.0-1.el7.centos extras 4.1 k [openshift/origin-base] Installing for dependencies: [openshift/origin-base] centos-release-storage-common noarch 1-2.el7.centos extras 4.5 k [openshift/origin-base] Transaction Summary [openshift/origin-base] ================================================================================ [openshift/origin-base] Install 1 Package (+1 Dependent package) [openshift/origin-base] Total download size: 8.6 k [openshift/origin-base] Installed size: 2.1 k [openshift/origin-base] Downloading packages: [openshift/origin-base] warning: /var/cache/yum/x86_64/7/extras/packages/centos-release-ceph-luminous-1.0-1.el7.centos.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY [openshift/origin-base] Public key for centos-release-ceph-luminous-1.0-1.el7.centos.noarch.rpm is not installed [openshift/origin-base] -------------------------------------------------------------------------------- [openshift/origin-base] Total 67 kB/s | 8.6 kB 00:00 [openshift/origin-base] Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 [openshift/origin-base] Importing GPG key 0xF4A80EB5: [openshift/origin-base] Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>" [openshift/origin-base] Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5 [openshift/origin-base] Package : centos-release-7-4.1708.el7.centos.x86_64 (@CentOS) [openshift/origin-base] From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 [openshift/origin-base] Running transaction check [openshift/origin-base] Running transaction test [openshift/origin-base] Transaction test succeeded [openshift/origin-base] Running transaction [openshift/origin-base] Installing : centos-release-storage-common-1-2.el7.centos.noarch 1/2 [openshift/origin-base] Installing : centos-release-ceph-luminous-1.0-1.el7.centos.noarch 2/2 [openshift/origin-base] Verifying : centos-release-storage-common-1-2.el7.centos.noarch 1/2 [openshift/origin-base] Verifying : centos-release-ceph-luminous-1.0-1.el7.centos.noarch 2/2 [openshift/origin-base] Installed: [openshift/origin-base] centos-release-ceph-luminous.noarch 0:1.0-1.el7.centos [openshift/origin-base] Dependency Installed: [openshift/origin-base] centos-release-storage-common.noarch 0:1-2.el7.centos [openshift/origin-base] Complete! 
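The transaction that just completed is the first half of the origin-base RUN step: installing centos-release-ceph-luminous (and its centos-release-storage-common dependency) drops the CentOS Storage SIG repo definition into the image, and only then can the second transaction below resolve ceph-common and the rest of INSTALL_PKGS from that repo. A shell sketch of that ordering, paraphrased from the RUN command shown earlier in the build output; the rpm -V calls verify each requested package after install, and yum clean all keeps the layer small:

INSTALL_PKGS="which git tar wget hostname sysvinit-tools util-linux bsdtar socat ethtool \
device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof \
device-mapper-persistent-data ceph-common"

yum install -y centos-release-ceph-luminous   # adds the centos-ceph-luminous (Storage SIG) repo
rpm -V centos-release-ceph-luminous           # confirm the release package installed intact
yum install -y ${INSTALL_PKGS}                # ceph-common now resolves from the new repo
rpm -V ${INSTALL_PKGS}                        # verify every explicitly requested package
yum clean all                                 # drop caches so the image layer stays small
mkdir -p /var/lib/origin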
[openshift/origin-base] Loaded plugins: fastestmirror, ovl [openshift/origin-base] Loading mirror speeds from cached hostfile [openshift/origin-base] * base: mirror.math.princeton.edu [openshift/origin-base] * extras: mirror.ash.fastserv.com [openshift/origin-base] * updates: repos-va.psychz.net [openshift/origin-base] Package 2:tar-1.26-32.el7.x86_64 already installed and latest version [openshift/origin-base] Package hostname-3.13-3.el7.x86_64 already installed and latest version [openshift/origin-base] Package util-linux-2.23.2-43.el7_4.2.x86_64 already installed and latest version [openshift/origin-base] Package 7:device-mapper-1.02.140-8.el7.x86_64 already installed and latest version [openshift/origin-base] Package 1:findutils-4.5.11-5.el7.x86_64 already installed and latest version [openshift/origin-base] Resolving Dependencies [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package bsdtar.x86_64 0:3.1.2-10.el7_2 will be installed [openshift/origin-base] --> Processing Dependency: libarchive = 3.1.2-10.el7_2 for package: bsdtar-3.1.2-10.el7_2.x86_64 [openshift/origin-base] --> Processing Dependency: liblzo2.so.2()(64bit) for package: bsdtar-3.1.2-10.el7_2.x86_64 [openshift/origin-base] --> Processing Dependency: libarchive.so.13()(64bit) for package: bsdtar-3.1.2-10.el7_2.x86_64 [openshift/origin-base] ---> Package ceph-common.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] --> Processing Dependency: python-rgw = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: python-rbd = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: python-rados = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: python-cephfs = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: librbd1 = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: librados2 = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libcephfs2 = 2:12.2.2-0.el7 for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: python-requests for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: python-prettytable for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libibverbs.so.1(IBVERBS_1.1)(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libibverbs.so.1(IBVERBS_1.0)(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libtcmalloc.so.4()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libsnappy.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: librbd.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libradosstriper.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: librados.so.2()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libleveldb.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 
[openshift/origin-base] --> Processing Dependency: libibverbs.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libfuse.so.2()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libcephfs.so.2()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libceph-common.so.0()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libbabeltrace.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libbabeltrace-ctf.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libaio.so.1()(64bit) for package: 2:ceph-common-12.2.2-0.el7.x86_64 [openshift/origin-base] ---> Package device-mapper-persistent-data.x86_64 0:0.7.0-0.1.rc6.el7_4.1 will be installed [openshift/origin-base] ---> Package e2fsprogs.x86_64 0:1.42.9-10.el7 will be installed [openshift/origin-base] --> Processing Dependency: libss = 1.42.9-10.el7 for package: e2fsprogs-1.42.9-10.el7.x86_64 [openshift/origin-base] --> Processing Dependency: e2fsprogs-libs(x86-64) = 1.42.9-10.el7 for package: e2fsprogs-1.42.9-10.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libss.so.2()(64bit) for package: e2fsprogs-1.42.9-10.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libext2fs.so.2()(64bit) for package: e2fsprogs-1.42.9-10.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libe2p.so.2()(64bit) for package: e2fsprogs-1.42.9-10.el7.x86_64 [openshift/origin-base] ---> Package ethtool.x86_64 2:4.8-1.el7 will be installed [openshift/origin-base] ---> Package git.x86_64 0:1.8.3.1-12.el7_4 will be installed [openshift/origin-base] --> Processing Dependency: perl-Git = 1.8.3.1-12.el7_4 for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl >= 5.008 for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: rsync for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(warnings) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(vars) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(strict) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(lib) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Git) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Getopt::Long) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::stat) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Temp) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Spec) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Path) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Find) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Copy) for package: 
git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(File::Basename) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Exporter) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Error) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: openssh-clients for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: less for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: /usr/bin/perl for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: libgnome-keyring.so.0()(64bit) for package: git-1.8.3.1-12.el7_4.x86_64 [openshift/origin-base] ---> Package iptables.x86_64 0:1.4.21-18.3.el7_4 will be installed [openshift/origin-base] --> Processing Dependency: libnfnetlink.so.0()(64bit) for package: iptables-1.4.21-18.3.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: libnetfilter_conntrack.so.3()(64bit) for package: iptables-1.4.21-18.3.el7_4.x86_64 [openshift/origin-base] ---> Package lsof.x86_64 0:4.87-4.el7 will be installed [openshift/origin-base] ---> Package nmap-ncat.x86_64 2:6.40-7.el7 will be installed [openshift/origin-base] --> Processing Dependency: libpcap.so.1()(64bit) for package: 2:nmap-ncat-6.40-7.el7.x86_64 [openshift/origin-base] ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed [openshift/origin-base] --> Processing Dependency: libwrap.so.0()(64bit) for package: socat-1.7.3.2-2.el7.x86_64 [openshift/origin-base] ---> Package sysvinit-tools.x86_64 0:2.88-14.dsf.el7 will be installed [openshift/origin-base] ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed [openshift/origin-base] ---> Package wget.x86_64 0:1.14-15.el7_4.1 will be installed [openshift/origin-base] ---> Package which.x86_64 0:2.20-7.el7 will be installed [openshift/origin-base] ---> Package xfsprogs.x86_64 0:4.5.0-12.el7 will be installed [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package e2fsprogs-libs.x86_64 0:1.42.9-10.el7 will be installed [openshift/origin-base] ---> Package fuse-libs.x86_64 0:2.9.2-8.el7 will be installed [openshift/origin-base] ---> Package gperftools-libs.x86_64 0:2.4-8.el7 will be installed [openshift/origin-base] --> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.4-8.el7.x86_64 [openshift/origin-base] ---> Package less.x86_64 0:458-9.el7 will be installed [openshift/origin-base] --> Processing Dependency: groff-base for package: less-458-9.el7.x86_64 [openshift/origin-base] ---> Package leveldb.x86_64 0:1.12.0-5.el7.1 will be installed [openshift/origin-base] ---> Package libaio.x86_64 0:0.3.109-13.el7 will be installed [openshift/origin-base] ---> Package libarchive.x86_64 0:3.1.2-10.el7_2 will be installed [openshift/origin-base] ---> Package libbabeltrace.x86_64 0:1.2.4-3.1.el7 will be installed [openshift/origin-base] ---> Package libcephfs2.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package libgnome-keyring.x86_64 0:3.12.0-1.el7 will be installed [openshift/origin-base] ---> Package libibverbs.x86_64 0:13-7.el7 will be installed [openshift/origin-base] --> Processing Dependency: rdma-core(x86-64) = 13-7.el7 for package: libibverbs-13-7.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libnl-route-3.so.200(libnl_3)(64bit) for package: libibverbs-13-7.el7.x86_64 
[openshift/origin-base] --> Processing Dependency: libnl-3.so.200(libnl_3)(64bit) for package: libibverbs-13-7.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libnl-route-3.so.200()(64bit) for package: libibverbs-13-7.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libnl-3.so.200()(64bit) for package: libibverbs-13-7.el7.x86_64 [openshift/origin-base] ---> Package libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3 will be installed [openshift/origin-base] --> Processing Dependency: libmnl.so.0(LIBMNL_1.1)(64bit) for package: libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 [openshift/origin-base] --> Processing Dependency: libmnl.so.0(LIBMNL_1.0)(64bit) for package: libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 [openshift/origin-base] --> Processing Dependency: libmnl.so.0()(64bit) for package: libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 [openshift/origin-base] ---> Package libnfnetlink.x86_64 0:1.0.1-4.el7 will be installed [openshift/origin-base] ---> Package libpcap.x86_64 14:1.5.3-9.el7 will be installed [openshift/origin-base] ---> Package librados2.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] --> Processing Dependency: liblttng-ust.so.0()(64bit) for package: 2:librados2-12.2.2-0.el7.x86_64 [openshift/origin-base] ---> Package libradosstriper1.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package librbd1.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package libss.x86_64 0:1.42.9-10.el7 will be installed [openshift/origin-base] ---> Package lzo.x86_64 0:2.06-8.el7 will be installed [openshift/origin-base] ---> Package openssh-clients.x86_64 0:7.4p1-13.el7_4 will be installed [openshift/origin-base] --> Processing Dependency: openssh = 7.4p1-13.el7_4 for package: openssh-clients-7.4p1-13.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: fipscheck-lib(x86-64) >= 1.3.0 for package: openssh-clients-7.4p1-13.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: libfipscheck.so.1()(64bit) for package: openssh-clients-7.4p1-13.el7_4.x86_64 [openshift/origin-base] --> Processing Dependency: libedit.so.0()(64bit) for package: openssh-clients-7.4p1-13.el7_4.x86_64 [openshift/origin-base] ---> Package perl.x86_64 4:5.16.3-292.el7 will be installed [openshift/origin-base] --> Processing Dependency: perl-libs = 4:5.16.3-292.el7 for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Socket) >= 1.3 for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Scalar::Util) >= 1.10 for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl-macros for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl-libs for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(threads::shared) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(threads) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(constant) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Time::Local) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Time::HiRes) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Storable) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing 
Dependency: perl(Socket) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Scalar::Util) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Pod::Simple::XHTML) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Pod::Simple::Search) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Filter::Util::Call) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: perl(Carp) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.16.3-292.el7.x86_64 [openshift/origin-base] ---> Package perl-Error.noarch 1:0.17020-2.el7 will be installed [openshift/origin-base] ---> Package perl-Exporter.noarch 0:5.68-3.el7 will be installed [openshift/origin-base] ---> Package perl-File-Path.noarch 0:2.09-2.el7 will be installed [openshift/origin-base] ---> Package perl-File-Temp.noarch 0:0.23.01-3.el7 will be installed [openshift/origin-base] ---> Package perl-Getopt-Long.noarch 0:2.40-2.el7 will be installed [openshift/origin-base] --> Processing Dependency: perl(Pod::Usage) >= 1.14 for package: perl-Getopt-Long-2.40-2.el7.noarch [openshift/origin-base] --> Processing Dependency: perl(Text::ParseWords) for package: perl-Getopt-Long-2.40-2.el7.noarch [openshift/origin-base] ---> Package perl-Git.noarch 0:1.8.3.1-12.el7_4 will be installed [openshift/origin-base] ---> Package perl-PathTools.x86_64 0:3.40-5.el7 will be installed [openshift/origin-base] ---> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed [openshift/origin-base] ---> Package python-cephfs.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package python-prettytable.noarch 0:0.7.2-3.el7 will be installed [openshift/origin-base] ---> Package python-rados.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package python-rbd.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package python-requests.noarch 0:2.6.0-1.el7_1 will be installed [openshift/origin-base] --> Processing Dependency: python-urllib3 >= 1.10.2-1 for package: python-requests-2.6.0-1.el7_1.noarch [openshift/origin-base] ---> Package python-rgw.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] --> Processing Dependency: librgw2 = 2:12.2.2-0.el7 for package: 2:python-rgw-12.2.2-0.el7.x86_64 [openshift/origin-base] --> Processing Dependency: librgw.so.2()(64bit) for package: 2:python-rgw-12.2.2-0.el7.x86_64 [openshift/origin-base] ---> Package rsync.x86_64 0:3.0.9-18.el7 will be installed [openshift/origin-base] ---> Package snappy.x86_64 0:1.1.0-3.el7 will be installed [openshift/origin-base] ---> Package tcp_wrappers-libs.x86_64 0:7.6-77.el7 will be installed [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package fipscheck-lib.x86_64 0:1.4.1-6.el7 will be installed [openshift/origin-base] --> Processing Dependency: /usr/bin/fipscheck for package: fipscheck-lib-1.4.1-6.el7.x86_64 [openshift/origin-base] ---> Package groff-base.x86_64 0:1.22.2-8.el7 will be installed [openshift/origin-base] ---> Package libedit.x86_64 0:3.0-12.20121213cvs.el7 will be installed [openshift/origin-base] ---> Package libmnl.x86_64 0:1.0.3-7.el7 will be installed [openshift/origin-base] ---> Package libnl3.x86_64 0:3.2.28-4.el7 will be installed [openshift/origin-base] ---> Package 
librgw2.x86_64 2:12.2.2-0.el7 will be installed [openshift/origin-base] ---> Package libunwind.x86_64 2:1.2-2.el7 will be installed [openshift/origin-base] ---> Package lttng-ust.x86_64 0:2.10.0-1.el7 will be installed [openshift/origin-base] --> Processing Dependency: liburcu-cds.so.6()(64bit) for package: lttng-ust-2.10.0-1.el7.x86_64 [openshift/origin-base] --> Processing Dependency: liburcu-bp.so.6()(64bit) for package: lttng-ust-2.10.0-1.el7.x86_64 [openshift/origin-base] ---> Package openssh.x86_64 0:7.4p1-13.el7_4 will be installed [openshift/origin-base] ---> Package perl-Carp.noarch 0:1.26-244.el7 will be installed [openshift/origin-base] ---> Package perl-Filter.x86_64 0:1.49-3.el7 will be installed [openshift/origin-base] ---> Package perl-Pod-Simple.noarch 1:3.28-4.el7 will be installed [openshift/origin-base] --> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.28-4.el7.noarch [openshift/origin-base] --> Processing Dependency: perl(Encode) for package: 1:perl-Pod-Simple-3.28-4.el7.noarch [openshift/origin-base] ---> Package perl-Pod-Usage.noarch 0:1.63-3.el7 will be installed [openshift/origin-base] --> Processing Dependency: perl(Pod::Text) >= 3.15 for package: perl-Pod-Usage-1.63-3.el7.noarch [openshift/origin-base] --> Processing Dependency: perl-Pod-Perldoc for package: perl-Pod-Usage-1.63-3.el7.noarch [openshift/origin-base] ---> Package perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 will be installed [openshift/origin-base] ---> Package perl-Socket.x86_64 0:2.010-4.el7 will be installed [openshift/origin-base] ---> Package perl-Storable.x86_64 0:2.45-3.el7 will be installed [openshift/origin-base] ---> Package perl-Text-ParseWords.noarch 0:3.29-4.el7 will be installed [openshift/origin-base] ---> Package perl-Time-HiRes.x86_64 4:1.9725-3.el7 will be installed [openshift/origin-base] ---> Package perl-Time-Local.noarch 0:1.2300-2.el7 will be installed [openshift/origin-base] ---> Package perl-constant.noarch 0:1.27-2.el7 will be installed [openshift/origin-base] ---> Package perl-libs.x86_64 4:5.16.3-292.el7 will be installed [openshift/origin-base] ---> Package perl-macros.x86_64 4:5.16.3-292.el7 will be installed [openshift/origin-base] ---> Package perl-threads.x86_64 0:1.87-4.el7 will be installed [openshift/origin-base] ---> Package perl-threads-shared.x86_64 0:1.43-6.el7 will be installed [openshift/origin-base] ---> Package python-urllib3.noarch 0:1.10.2-3.el7 will be installed [openshift/origin-base] --> Processing Dependency: python-six for package: python-urllib3-1.10.2-3.el7.noarch [openshift/origin-base] --> Processing Dependency: python-backports-ssl_match_hostname for package: python-urllib3-1.10.2-3.el7.noarch [openshift/origin-base] ---> Package rdma-core.x86_64 0:13-7.el7 will be installed [openshift/origin-base] --> Processing Dependency: pciutils for package: rdma-core-13-7.el7.x86_64 [openshift/origin-base] --> Processing Dependency: initscripts for package: rdma-core-13-7.el7.x86_64 [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package fipscheck.x86_64 0:1.4.1-6.el7 will be installed [openshift/origin-base] ---> Package initscripts.x86_64 0:9.49.39-1.el7_4.1 will be installed [openshift/origin-base] --> Processing Dependency: iproute for package: initscripts-9.49.39-1.el7_4.1.x86_64 [openshift/origin-base] ---> Package pciutils.x86_64 0:3.5.1-2.el7 will be installed [openshift/origin-base] --> Processing Dependency: pciutils-libs = 3.5.1-2.el7 for package: pciutils-3.5.1-2.el7.x86_64 
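Most of the resolution output above (and the remainder just below, before the package table) is the closure of ceph-common's requirements: librados2, librbd1, the python-* bindings, lttng-ust, libibverbs/rdma-core, and so on. The same chain can be inspected outside of an image build with yum's query tools; a minimal sketch, assuming repoquery from yum-utils is available:

# Show ceph-common's direct requirements and the packages that provide them.
yum deplist ceph-common

# With yum-utils installed, resolve the requirements to the concrete packages that satisfy them.
repoquery --requires --resolve ceph-common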
[openshift/origin-base] --> Processing Dependency: libpci.so.3(LIBPCI_3.5)(64bit) for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libpci.so.3(LIBPCI_3.3)(64bit) for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libpci.so.3(LIBPCI_3.1)(64bit) for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libpci.so.3(LIBPCI_3.0)(64bit) for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] --> Processing Dependency: hwdata for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] --> Processing Dependency: libpci.so.3()(64bit) for package: pciutils-3.5.1-2.el7.x86_64 [openshift/origin-base] ---> Package perl-Encode.x86_64 0:2.51-7.el7 will be installed [openshift/origin-base] ---> Package perl-Pod-Escapes.noarch 1:1.04-292.el7 will be installed [openshift/origin-base] ---> Package perl-Pod-Perldoc.noarch 0:3.20-4.el7 will be installed [openshift/origin-base] --> Processing Dependency: perl(parent) for package: perl-Pod-Perldoc-3.20-4.el7.noarch [openshift/origin-base] --> Processing Dependency: perl(HTTP::Tiny) for package: perl-Pod-Perldoc-3.20-4.el7.noarch [openshift/origin-base] ---> Package perl-podlators.noarch 0:2.5.1-3.el7 will be installed [openshift/origin-base] ---> Package python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7 will be installed [openshift/origin-base] --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch [openshift/origin-base] ---> Package python-six.noarch 0:1.9.0-2.el7 will be installed [openshift/origin-base] ---> Package userspace-rcu.x86_64 0:0.10.0-3.el7 will be installed [openshift/origin-base] --> Running transaction check [openshift/origin-base] ---> Package hwdata.x86_64 0:0.252-8.6.el7 will be installed [openshift/origin-base] ---> Package iproute.x86_64 0:3.10.0-87.el7 will be installed [openshift/origin-base] ---> Package pciutils-libs.x86_64 0:3.5.1-2.el7 will be installed [openshift/origin-base] ---> Package perl-HTTP-Tiny.noarch 0:0.033-3.el7 will be installed [openshift/origin-base] ---> Package perl-parent.noarch 1:0.225-244.el7 will be installed [openshift/origin-base] ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed [openshift/origin-base] --> Finished Dependency Resolution [openshift/origin-base] Dependencies Resolved [openshift/origin-base] ================================================================================ [openshift/origin-base] Package Arch Version Repository Size [openshift/origin-base] ================================================================================ [openshift/origin-base] Installing: [openshift/origin-base] bsdtar x86_64 3.1.2-10.el7_2 base 56 k [openshift/origin-base] ceph-common x86_64 2:12.2.2-0.el7 centos-ceph-luminous 15 M [openshift/origin-base] device-mapper-persistent-data [openshift/origin-base] x86_64 0.7.0-0.1.rc6.el7_4.1 updates 400 k [openshift/origin-base] e2fsprogs x86_64 1.42.9-10.el7 base 698 k [openshift/origin-base] ethtool x86_64 2:4.8-1.el7 base 123 k [openshift/origin-base] git x86_64 1.8.3.1-12.el7_4 updates 4.4 M [openshift/origin-base] iptables x86_64 1.4.21-18.3.el7_4 updates 428 k [openshift/origin-base] lsof x86_64 4.87-4.el7 base 331 k [openshift/origin-base] nmap-ncat x86_64 2:6.40-7.el7 base 201 k [openshift/origin-base] socat x86_64 1.7.3.2-2.el7 base 290 k [openshift/origin-base] sysvinit-tools x86_64 2.88-14.dsf.el7 base 63 k [openshift/origin-base] tree x86_64 
1.6.0-10.el7 base 46 k [openshift/origin-base] wget x86_64 1.14-15.el7_4.1 updates 547 k [openshift/origin-base] which x86_64 2.20-7.el7 base 41 k [openshift/origin-base] xfsprogs x86_64 4.5.0-12.el7 base 895 k [openshift/origin-base] Installing for dependencies: [openshift/origin-base] e2fsprogs-libs x86_64 1.42.9-10.el7 base 166 k [openshift/origin-base] fipscheck x86_64 1.4.1-6.el7 base 21 k [openshift/origin-base] fipscheck-lib x86_64 1.4.1-6.el7 base 11 k [openshift/origin-base] fuse-libs x86_64 2.9.2-8.el7 base 93 k [openshift/origin-base] gperftools-libs x86_64 2.4-8.el7 base 272 k [openshift/origin-base] groff-base x86_64 1.22.2-8.el7 base 942 k [openshift/origin-base] hwdata x86_64 0.252-8.6.el7 base 2.2 M [openshift/origin-base] initscripts x86_64 9.49.39-1.el7_4.1 updates 435 k [openshift/origin-base] iproute x86_64 3.10.0-87.el7 base 651 k [openshift/origin-base] less x86_64 458-9.el7 base 120 k [openshift/origin-base] leveldb x86_64 1.12.0-5.el7.1 centos-ceph-luminous 160 k [openshift/origin-base] libaio x86_64 0.3.109-13.el7 base 24 k [openshift/origin-base] libarchive x86_64 3.1.2-10.el7_2 base 318 k [openshift/origin-base] libbabeltrace x86_64 1.2.4-3.1.el7 centos-ceph-luminous 146 k [openshift/origin-base] libcephfs2 x86_64 2:12.2.2-0.el7 centos-ceph-luminous 422 k [openshift/origin-base] libedit x86_64 3.0-12.20121213cvs.el7 [openshift/origin-base] base 92 k [openshift/origin-base] libgnome-keyring x86_64 3.12.0-1.el7 base 109 k [openshift/origin-base] libibverbs x86_64 13-7.el7 base 194 k [openshift/origin-base] libmnl x86_64 1.0.3-7.el7 base 23 k [openshift/origin-base] libnetfilter_conntrack x86_64 1.0.6-1.el7_3 base 55 k [openshift/origin-base] libnfnetlink x86_64 1.0.1-4.el7 base 26 k [openshift/origin-base] libnl3 x86_64 3.2.28-4.el7 base 278 k [openshift/origin-base] libpcap x86_64 14:1.5.3-9.el7 base 138 k [openshift/origin-base] librados2 x86_64 2:12.2.2-0.el7 centos-ceph-luminous 2.8 M [openshift/origin-base] libradosstriper1 x86_64 2:12.2.2-0.el7 centos-ceph-luminous 328 k [openshift/origin-base] librbd1 x86_64 2:12.2.2-0.el7 centos-ceph-luminous 1.1 M [openshift/origin-base] librgw2 x86_64 2:12.2.2-0.el7 centos-ceph-luminous 1.7 M [openshift/origin-base] libss x86_64 1.42.9-10.el7 base 45 k [openshift/origin-base] libunwind x86_64 2:1.2-2.el7 base 57 k [openshift/origin-base] lttng-ust x86_64 2.10.0-1.el7 centos-ceph-luminous 245 k [openshift/origin-base] lzo x86_64 2.06-8.el7 base 59 k [openshift/origin-base] openssh x86_64 7.4p1-13.el7_4 updates 509 k [openshift/origin-base] openssh-clients x86_64 7.4p1-13.el7_4 updates 654 k [openshift/origin-base] pciutils x86_64 3.5.1-2.el7 base 93 k [openshift/origin-base] pciutils-libs x86_64 3.5.1-2.el7 base 46 k [openshift/origin-base] perl x86_64 4:5.16.3-292.el7 base 8.0 M [openshift/origin-base] perl-Carp noarch 1.26-244.el7 base 19 k [openshift/origin-base] perl-Encode x86_64 2.51-7.el7 base 1.5 M [openshift/origin-base] perl-Error noarch 1:0.17020-2.el7 base 32 k [openshift/origin-base] perl-Exporter noarch 5.68-3.el7 base 28 k [openshift/origin-base] perl-File-Path noarch 2.09-2.el7 base 26 k [openshift/origin-base] perl-File-Temp noarch 0.23.01-3.el7 base 56 k [openshift/origin-base] perl-Filter x86_64 1.49-3.el7 base 76 k [openshift/origin-base] perl-Getopt-Long noarch 2.40-2.el7 base 56 k [openshift/origin-base] perl-Git noarch 1.8.3.1-12.el7_4 updates 53 k [openshift/origin-base] perl-HTTP-Tiny noarch 0.033-3.el7 base 38 k [openshift/origin-base] perl-PathTools x86_64 3.40-5.el7 base 82 k 
[openshift/origin-base] perl-Pod-Escapes noarch 1:1.04-292.el7 base 51 k [openshift/origin-base] perl-Pod-Perldoc noarch 3.20-4.el7 base 87 k [openshift/origin-base] perl-Pod-Simple noarch 1:3.28-4.el7 base 216 k [openshift/origin-base] perl-Pod-Usage noarch 1.63-3.el7 base 27 k [openshift/origin-base] perl-Scalar-List-Utils x86_64 1.27-248.el7 base 36 k [openshift/origin-base] perl-Socket x86_64 2.010-4.el7 base 49 k [openshift/origin-base] perl-Storable x86_64 2.45-3.el7 base 77 k [openshift/origin-base] perl-TermReadKey x86_64 2.30-20.el7 base 31 k [openshift/origin-base] perl-Text-ParseWords noarch 3.29-4.el7 base 14 k [openshift/origin-base] perl-Time-HiRes x86_64 4:1.9725-3.el7 base 45 k [openshift/origin-base] perl-Time-Local noarch 1.2300-2.el7 base 24 k [openshift/origin-base] perl-constant noarch 1.27-2.el7 base 19 k [openshift/origin-base] perl-libs x86_64 4:5.16.3-292.el7 base 688 k [openshift/origin-base] perl-macros x86_64 4:5.16.3-292.el7 base 43 k [openshift/origin-base] perl-parent noarch 1:0.225-244.el7 base 12 k [openshift/origin-base] perl-podlators noarch 2.5.1-3.el7 base 112 k [openshift/origin-base] perl-threads x86_64 1.87-4.el7 base 49 k [openshift/origin-base] perl-threads-shared x86_64 1.43-6.el7 base 39 k [openshift/origin-base] python-backports x86_64 1.0-8.el7 base 5.8 k [openshift/origin-base] python-backports-ssl_match_hostname [openshift/origin-base] noarch 3.4.0.2-4.el7 base 12 k [openshift/origin-base] python-cephfs x86_64 2:12.2.2-0.el7 centos-ceph-luminous 81 k [openshift/origin-base] python-prettytable noarch 0.7.2-3.el7 base 37 k [openshift/origin-base] python-rados x86_64 2:12.2.2-0.el7 centos-ceph-luminous 171 k [openshift/origin-base] python-rbd x86_64 2:12.2.2-0.el7 centos-ceph-luminous 105 k [openshift/origin-base] python-requests noarch 2.6.0-1.el7_1 base 94 k [openshift/origin-base] python-rgw x86_64 2:12.2.2-0.el7 centos-ceph-luminous 72 k [openshift/origin-base] python-six noarch 1.9.0-2.el7 base 29 k [openshift/origin-base] python-urllib3 noarch 1.10.2-3.el7 base 101 k [openshift/origin-base] rdma-core x86_64 13-7.el7 base 43 k [openshift/origin-base] rsync x86_64 3.0.9-18.el7 base 360 k [openshift/origin-base] snappy x86_64 1.1.0-3.el7 base 40 k [openshift/origin-base] tcp_wrappers-libs x86_64 7.6-77.el7 base 66 k [openshift/origin-base] userspace-rcu x86_64 0.10.0-3.el7 centos-ceph-luminous 93 k [openshift/origin-base] Transaction Summary [openshift/origin-base] ================================================================================ [openshift/origin-base] Install 15 Packages (+80 Dependent packages) [openshift/origin-base] Total download size: 50 M [openshift/origin-base] Installed size: 177 M [openshift/origin-base] Downloading packages: [openshift/origin-base] warning: /var/cache/yum/x86_64/7/centos-ceph-luminous/packages/leveldb-1.12.0-5.el7.1.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY [openshift/origin-base] Public key for leveldb-1.12.0-5.el7.1.x86_64.rpm is not installed [openshift/origin-base] -------------------------------------------------------------------------------- [openshift/origin-base] Total 7.0 MB/s | 50 MB 00:07 [openshift/origin-base] Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage [openshift/origin-base] Importing GPG key 0xE451E5B5: [openshift/origin-base] Userid : "CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>" [openshift/origin-base] Fingerprint: 7412 9c0b 173b 071a 3775 951a d4a2 e50b e451 e5b5 
[openshift/origin-base] Package : centos-release-storage-common-1-2.el7.centos.noarch (@extras) [openshift/origin-base] From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage [openshift/origin-base] Running transaction check [openshift/origin-base] Running transaction test [openshift/origin-base] Transaction test succeeded [openshift/origin-base] Running transaction [openshift/origin-base] Installing : fipscheck-1.4.1-6.el7.x86_64 1/95 [openshift/origin-base] Installing : fipscheck-lib-1.4.1-6.el7.x86_64 2/95 [openshift/origin-base] Installing : libaio-0.3.109-13.el7.x86_64 3/95 [openshift/origin-base] Installing : snappy-1.1.0-3.el7.x86_64 4/95 [openshift/origin-base] Installing : lzo-2.06-8.el7.x86_64 5/95 [openshift/origin-base] Installing : libmnl-1.0.3-7.el7.x86_64 6/95 [openshift/origin-base] Installing : libnfnetlink-1.0.1-4.el7.x86_64 7/95 [openshift/origin-base] Installing : groff-base-1.22.2-8.el7.x86_64 8/95 [openshift/origin-base] Installing : 1:perl-parent-0.225-244.el7.noarch 9/95 [openshift/origin-base] Installing : perl-HTTP-Tiny-0.033-3.el7.noarch 10/95 [openshift/origin-base] Installing : perl-podlators-2.5.1-3.el7.noarch 11/95 [openshift/origin-base] Installing : perl-Pod-Perldoc-3.20-4.el7.noarch 12/95 [openshift/origin-base] Installing : 1:perl-Pod-Escapes-1.04-292.el7.noarch 13/95 [openshift/origin-base] Installing : perl-Text-ParseWords-3.29-4.el7.noarch 14/95 [openshift/origin-base] Installing : perl-Encode-2.51-7.el7.x86_64 15/95 [openshift/origin-base] Installing : perl-Pod-Usage-1.63-3.el7.noarch 16/95 [openshift/origin-base] Installing : 4:perl-libs-5.16.3-292.el7.x86_64 17/95 [openshift/origin-base] Installing : 4:perl-macros-5.16.3-292.el7.x86_64 18/95 [openshift/origin-base] Installing : perl-Socket-2.010-4.el7.x86_64 19/95 [openshift/origin-base] Installing : 4:perl-Time-HiRes-1.9725-3.el7.x86_64 20/95 [openshift/origin-base] Installing : perl-threads-1.87-4.el7.x86_64 21/95 [openshift/origin-base] Installing : perl-Storable-2.45-3.el7.x86_64 22/95 [openshift/origin-base] Installing : perl-Carp-1.26-244.el7.noarch 23/95 [openshift/origin-base] Installing : perl-Filter-1.49-3.el7.x86_64 24/95 [openshift/origin-base] Installing : perl-Exporter-5.68-3.el7.noarch 25/95 [openshift/origin-base] Installing : perl-constant-1.27-2.el7.noarch 26/95 [openshift/origin-base] Installing : perl-Time-Local-1.2300-2.el7.noarch 27/95 [openshift/origin-base] Installing : perl-threads-shared-1.43-6.el7.x86_64 28/95 [openshift/origin-base] Installing : perl-File-Temp-0.23.01-3.el7.noarch 29/95 [openshift/origin-base] Installing : perl-File-Path-2.09-2.el7.noarch 30/95 [openshift/origin-base] Installing : perl-PathTools-3.40-5.el7.x86_64 31/95 [openshift/origin-base] Installing : perl-Scalar-List-Utils-1.27-248.el7.x86_64 32/95 [openshift/origin-base] Installing : 1:perl-Pod-Simple-3.28-4.el7.noarch 33/95 [openshift/origin-base] Installing : perl-Getopt-Long-2.40-2.el7.noarch 34/95 [openshift/origin-base] Installing : 4:perl-5.16.3-292.el7.x86_64 35/95 [openshift/origin-base] Installing : 1:perl-Error-0.17020-2.el7.noarch 36/95 [openshift/origin-base] Installing : perl-TermReadKey-2.30-20.el7.x86_64 37/95 [openshift/origin-base] Installing : less-458-9.el7.x86_64 38/95 [openshift/origin-base] Installing : libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 39/95 [openshift/origin-base] Installing : iptables-1.4.21-18.3.el7_4.x86_64 40/95 [openshift/origin-base] Installing : iproute-3.10.0-87.el7.x86_64 41/95 [openshift/origin-base] Installing : libarchive-3.1.2-10.el7_2.x86_64 42/95 
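The NOKEY warning and key import above repeat what happened in the first transaction: the first package fetched from the newly added centos-ceph-luminous repo (leveldb) is signed with the CentOS Storage SIG key, which the release package shipped under /etc/pki/rpm-gpg, so yum imports it on first use. A minimal sketch of importing and listing that key explicitly, using the key file path shown in the log (the package install sequence continues below):

# Pre-import the Storage SIG signing key instead of relying on yum's first-use import.
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

# Imported keys show up as gpg-pubkey packages; the Storage SIG key id is e451e5b5.
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'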
[openshift/origin-base] Installing : leveldb-1.12.0-5.el7.1.x86_64 43/95 [openshift/origin-base] Installing : openssh-7.4p1-13.el7_4.x86_64 44/95 [openshift/origin-base] Installing : hwdata-0.252-8.6.el7.x86_64 45/95 [openshift/origin-base] Installing : fuse-libs-2.9.2-8.el7.x86_64 46/95 [openshift/origin-base] Installing : userspace-rcu-0.10.0-3.el7.x86_64 47/95 [openshift/origin-base] Installing : lttng-ust-2.10.0-1.el7.x86_64 48/95 [openshift/origin-base] Installing : python-six-1.9.0-2.el7.noarch 49/95 [openshift/origin-base] Installing : libss-1.42.9-10.el7.x86_64 50/95 [openshift/origin-base] Installing : libnl3-3.2.28-4.el7.x86_64 51/95 [openshift/origin-base] Installing : 14:libpcap-1.5.3-9.el7.x86_64 52/95 [openshift/origin-base] Installing : python-prettytable-0.7.2-3.el7.noarch 53/95 [openshift/origin-base] Installing : 2:libunwind-1.2-2.el7.x86_64 54/95 [openshift/origin-base] Installing : gperftools-libs-2.4-8.el7.x86_64 55/95 [openshift/origin-base] Installing : e2fsprogs-libs-1.42.9-10.el7.x86_64 56/95 [openshift/origin-base] Installing : libedit-3.0-12.20121213cvs.el7.x86_64 57/95 [openshift/origin-base] Installing : openssh-clients-7.4p1-13.el7_4.x86_64 58/95 [openshift/origin-base] Installing : tcp_wrappers-libs-7.6-77.el7.x86_64 59/95 [openshift/origin-base] Installing : python-backports-1.0-8.el7.x86_64 60/95 [openshift/origin-base] Installing : python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch 61/95 [openshift/origin-base] Installing : python-urllib3-1.10.2-3.el7.noarch 62/95 [openshift/origin-base] Installing : python-requests-2.6.0-1.el7_1.noarch 63/95 [openshift/origin-base] Installing : rsync-3.0.9-18.el7.x86_64 64/95 [openshift/origin-base] Installing : pciutils-libs-3.5.1-2.el7.x86_64 65/95 [openshift/origin-base] Installing : pciutils-3.5.1-2.el7.x86_64 66/95 [openshift/origin-base] Installing : sysvinit-tools-2.88-14.dsf.el7.x86_64 67/95 [openshift/origin-base] Installing : initscripts-9.49.39-1.el7_4.1.x86_64 68/95 [openshift/origin-base] Installing : rdma-core-13-7.el7.x86_64 69/95 [openshift/origin-base] Installing : libibverbs-13-7.el7.x86_64 70/95 [openshift/origin-base] Installing : 2:librados2-12.2.2-0.el7.x86_64 71/95 [openshift/origin-base] Installing : 2:python-rados-12.2.2-0.el7.x86_64 72/95 [openshift/origin-base] Installing : 2:libcephfs2-12.2.2-0.el7.x86_64 73/95 [openshift/origin-base] Installing : 2:librbd1-12.2.2-0.el7.x86_64 74/95 [openshift/origin-base] Installing : 2:python-rbd-12.2.2-0.el7.x86_64 75/95 [openshift/origin-base] Installing : 2:python-cephfs-12.2.2-0.el7.x86_64 76/95 [openshift/origin-base] Installing : 2:libradosstriper1-12.2.2-0.el7.x86_64 77/95 [openshift/origin-base] Installing : 2:librgw2-12.2.2-0.el7.x86_64 78/95 [openshift/origin-base] Installing : 2:python-rgw-12.2.2-0.el7.x86_64 79/95 [openshift/origin-base] Installing : libbabeltrace-1.2.4-3.1.el7.x86_64 80/95 [openshift/origin-base] Installing : libgnome-keyring-3.12.0-1.el7.x86_64 81/95 [openshift/origin-base] Installing : perl-Git-1.8.3.1-12.el7_4.noarch 82/95 [openshift/origin-base] Installing : git-1.8.3.1-12.el7_4.x86_64 83/95 [openshift/origin-base] Installing : 2:ceph-common-12.2.2-0.el7.x86_64 84/95 [openshift/origin-base] Installing : socat-1.7.3.2-2.el7.x86_64 85/95 [openshift/origin-base] Installing : e2fsprogs-1.42.9-10.el7.x86_64 86/95 [openshift/origin-base] Installing : 2:nmap-ncat-6.40-7.el7.x86_64 87/95 [openshift/origin-base] Installing : bsdtar-3.1.2-10.el7_2.x86_64 88/95 [openshift/origin-base] Installing : 
device-mapper-persistent-data-0.7.0-0.1.rc6.el7_4.1.x86_ 89/95 [openshift/origin-base] Installing : wget-1.14-15.el7_4.1.x86_64 90/95 [openshift/origin-base] install-info: No such file or directory for /usr/share/info/wget.info.gz [openshift/origin-base] Installing : xfsprogs-4.5.0-12.el7.x86_64 91/95 [openshift/origin-base] Installing : tree-1.6.0-10.el7.x86_64 92/95 [openshift/origin-base] Installing : lsof-4.87-4.el7.x86_64 93/95 [openshift/origin-base] Installing : which-2.20-7.el7.x86_64 94/95 [openshift/origin-base] install-info: No such file or directory for /usr/share/info/which.info.gz [openshift/origin-base] Installing : 2:ethtool-4.8-1.el7.x86_64 95/95 [openshift/origin-base] Verifying : fipscheck-lib-1.4.1-6.el7.x86_64 1/95 [openshift/origin-base] Verifying : perl-HTTP-Tiny-0.033-3.el7.noarch 2/95 [openshift/origin-base] Verifying : python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch 3/95 [openshift/origin-base] Verifying : libgnome-keyring-3.12.0-1.el7.x86_64 4/95 [openshift/origin-base] Verifying : leveldb-1.12.0-5.el7.1.x86_64 5/95 [openshift/origin-base] Verifying : libbabeltrace-1.2.4-3.1.el7.x86_64 6/95 [openshift/origin-base] Verifying : sysvinit-tools-2.88-14.dsf.el7.x86_64 7/95 [openshift/origin-base] Verifying : pciutils-libs-3.5.1-2.el7.x86_64 8/95 [openshift/origin-base] Verifying : rsync-3.0.9-18.el7.x86_64 9/95 [openshift/origin-base] Verifying : 2:librados2-12.2.2-0.el7.x86_64 10/95 [openshift/origin-base] Verifying : 4:perl-5.16.3-292.el7.x86_64 11/95 [openshift/origin-base] Verifying : 2:ethtool-4.8-1.el7.x86_64 12/95 [openshift/origin-base] Verifying : perl-TermReadKey-2.30-20.el7.x86_64 13/95 [openshift/origin-base] Verifying : which-2.20-7.el7.x86_64 14/95 [openshift/origin-base] Verifying : groff-base-1.22.2-8.el7.x86_64 15/95 [openshift/origin-base] Verifying : perl-File-Temp-0.23.01-3.el7.noarch 16/95 [openshift/origin-base] Verifying : perl-Socket-2.010-4.el7.x86_64 17/95 [openshift/origin-base] Verifying : gperftools-libs-2.4-8.el7.x86_64 18/95 [openshift/origin-base] Verifying : perl-threads-shared-1.43-6.el7.x86_64 19/95 [openshift/origin-base] Verifying : fipscheck-1.4.1-6.el7.x86_64 20/95 [openshift/origin-base] Verifying : python-backports-1.0-8.el7.x86_64 21/95 [openshift/origin-base] Verifying : 1:perl-Pod-Escapes-1.04-292.el7.noarch 22/95 [openshift/origin-base] Verifying : libibverbs-13-7.el7.x86_64 23/95 [openshift/origin-base] Verifying : libnfnetlink-1.0.1-4.el7.x86_64 24/95 [openshift/origin-base] Verifying : tcp_wrappers-libs-7.6-77.el7.x86_64 25/95 [openshift/origin-base] Verifying : perl-File-Path-2.09-2.el7.noarch 26/95 [openshift/origin-base] Verifying : libedit-3.0-12.20121213cvs.el7.x86_64 27/95 [openshift/origin-base] Verifying : lsof-4.87-4.el7.x86_64 28/95 [openshift/origin-base] Verifying : perl-Text-ParseWords-3.29-4.el7.noarch 29/95 [openshift/origin-base] Verifying : iptables-1.4.21-18.3.el7_4.x86_64 30/95 [openshift/origin-base] Verifying : socat-1.7.3.2-2.el7.x86_64 31/95 [openshift/origin-base] Verifying : 4:perl-Time-HiRes-1.9725-3.el7.x86_64 32/95 [openshift/origin-base] Verifying : 2:python-rados-12.2.2-0.el7.x86_64 33/95 [openshift/origin-base] Verifying : git-1.8.3.1-12.el7_4.x86_64 34/95 [openshift/origin-base] Verifying : 2:nmap-ncat-6.40-7.el7.x86_64 35/95 [openshift/origin-base] Verifying : python-urllib3-1.10.2-3.el7.noarch 36/95 [openshift/origin-base] Verifying : libarchive-3.1.2-10.el7_2.x86_64 37/95 [openshift/origin-base] Verifying : openssh-7.4p1-13.el7_4.x86_64 38/95 [openshift/origin-base] Verifying : 
tree-1.6.0-10.el7.x86_64 39/95 [openshift/origin-base] Verifying : 4:perl-libs-5.16.3-292.el7.x86_64 40/95 [openshift/origin-base] Verifying : 2:python-rbd-12.2.2-0.el7.x86_64 41/95 [openshift/origin-base] Verifying : device-mapper-persistent-data-0.7.0-0.1.rc6.el7_4.1.x86_ 42/95 [openshift/origin-base] Verifying : bsdtar-3.1.2-10.el7_2.x86_64 43/95 [openshift/origin-base] Verifying : e2fsprogs-libs-1.42.9-10.el7.x86_64 44/95 [openshift/origin-base] Verifying : 2:libunwind-1.2-2.el7.x86_64 45/95 [openshift/origin-base] Verifying : perl-Pod-Usage-1.63-3.el7.noarch 46/95 [openshift/origin-base] Verifying : perl-Encode-2.51-7.el7.x86_64 47/95 [openshift/origin-base] Verifying : python-prettytable-0.7.2-3.el7.noarch 48/95 [openshift/origin-base] Verifying : 2:libcephfs2-12.2.2-0.el7.x86_64 49/95 [openshift/origin-base] Verifying : perl-threads-1.87-4.el7.x86_64 50/95 [openshift/origin-base] Verifying : libmnl-1.0.3-7.el7.x86_64 51/95 [openshift/origin-base] Verifying : 2:ceph-common-12.2.2-0.el7.x86_64 52/95 [openshift/origin-base] Verifying : perl-Storable-2.45-3.el7.x86_64 53/95 [openshift/origin-base] Verifying : lttng-ust-2.10.0-1.el7.x86_64 54/95 [openshift/origin-base] Verifying : 14:libpcap-1.5.3-9.el7.x86_64 55/95 [openshift/origin-base] Verifying : 4:perl-macros-5.16.3-292.el7.x86_64 56/95 [openshift/origin-base] Verifying : lzo-2.06-8.el7.x86_64 57/95 [openshift/origin-base] Verifying : 1:perl-parent-0.225-244.el7.noarch 58/95 [openshift/origin-base] Verifying : snappy-1.1.0-3.el7.x86_64 59/95 [openshift/origin-base] Verifying : 2:python-cephfs-12.2.2-0.el7.x86_64 60/95 [openshift/origin-base] Verifying : libaio-0.3.109-13.el7.x86_64 61/95 [openshift/origin-base] Verifying : pciutils-3.5.1-2.el7.x86_64 62/95 [openshift/origin-base] Verifying : perl-Carp-1.26-244.el7.noarch 63/95 [openshift/origin-base] Verifying : perl-Git-1.8.3.1-12.el7_4.noarch 64/95 [openshift/origin-base] Verifying : perl-Pod-Perldoc-3.20-4.el7.noarch 65/95 [openshift/origin-base] Verifying : perl-PathTools-3.40-5.el7.x86_64 66/95 [openshift/origin-base] Verifying : libnl3-3.2.28-4.el7.x86_64 67/95 [openshift/origin-base] Verifying : perl-Filter-1.49-3.el7.x86_64 68/95 [openshift/origin-base] Verifying : xfsprogs-4.5.0-12.el7.x86_64 69/95 [openshift/origin-base] Verifying : less-458-9.el7.x86_64 70/95 [openshift/origin-base] Verifying : libss-1.42.9-10.el7.x86_64 71/95 [openshift/origin-base] Verifying : initscripts-9.49.39-1.el7_4.1.x86_64 72/95 [openshift/origin-base] Verifying : rdma-core-13-7.el7.x86_64 73/95 [openshift/origin-base] Verifying : perl-Exporter-5.68-3.el7.noarch 74/95 [openshift/origin-base] Verifying : perl-constant-1.27-2.el7.noarch 75/95 [openshift/origin-base] Verifying : 2:librbd1-12.2.2-0.el7.x86_64 76/95 [openshift/origin-base] Verifying : libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 77/95 [openshift/origin-base] Verifying : iproute-3.10.0-87.el7.x86_64 78/95 [openshift/origin-base] Verifying : 1:perl-Pod-Simple-3.28-4.el7.noarch 79/95 [openshift/origin-base] Verifying : perl-Time-Local-1.2300-2.el7.noarch 80/95 [openshift/origin-base] Verifying : openssh-clients-7.4p1-13.el7_4.x86_64 81/95 [openshift/origin-base] Verifying : python-six-1.9.0-2.el7.noarch 82/95 [openshift/origin-base] Verifying : wget-1.14-15.el7_4.1.x86_64 83/95 [openshift/origin-base] Verifying : 1:perl-Error-0.17020-2.el7.noarch 84/95 [openshift/origin-base] Verifying : 2:libradosstriper1-12.2.2-0.el7.x86_64 85/95 [openshift/origin-base] Verifying : userspace-rcu-0.10.0-3.el7.x86_64 86/95 [openshift/origin-base] 
Verifying : perl-Scalar-List-Utils-1.27-248.el7.x86_64 87/95 [openshift/origin-base] Verifying : fuse-libs-2.9.2-8.el7.x86_64 88/95 [openshift/origin-base] Verifying : 2:librgw2-12.2.2-0.el7.x86_64 89/95 [openshift/origin-base] Verifying : perl-podlators-2.5.1-3.el7.noarch 90/95 [openshift/origin-base] Verifying : python-requests-2.6.0-1.el7_1.noarch 91/95 [openshift/origin-base] Verifying : 2:python-rgw-12.2.2-0.el7.x86_64 92/95 [openshift/origin-base] Verifying : e2fsprogs-1.42.9-10.el7.x86_64 93/95 [openshift/origin-base] Verifying : hwdata-0.252-8.6.el7.x86_64 94/95 [openshift/origin-base] Verifying : perl-Getopt-Long-2.40-2.el7.noarch 95/95 [openshift/origin-base] Installed: [openshift/origin-base] bsdtar.x86_64 0:3.1.2-10.el7_2 [openshift/origin-base] ceph-common.x86_64 2:12.2.2-0.el7 [openshift/origin-base] device-mapper-persistent-data.x86_64 0:0.7.0-0.1.rc6.el7_4.1 [openshift/origin-base] e2fsprogs.x86_64 0:1.42.9-10.el7 [openshift/origin-base] ethtool.x86_64 2:4.8-1.el7 [openshift/origin-base] git.x86_64 0:1.8.3.1-12.el7_4 [openshift/origin-base] iptables.x86_64 0:1.4.21-18.3.el7_4 [openshift/origin-base] lsof.x86_64 0:4.87-4.el7 [openshift/origin-base] nmap-ncat.x86_64 2:6.40-7.el7 [openshift/origin-base] socat.x86_64 0:1.7.3.2-2.el7 [openshift/origin-base] sysvinit-tools.x86_64 0:2.88-14.dsf.el7 [openshift/origin-base] tree.x86_64 0:1.6.0-10.el7 [openshift/origin-base] wget.x86_64 0:1.14-15.el7_4.1 [openshift/origin-base] which.x86_64 0:2.20-7.el7 [openshift/origin-base] xfsprogs.x86_64 0:4.5.0-12.el7 [openshift/origin-base] Dependency Installed: [openshift/origin-base] e2fsprogs-libs.x86_64 0:1.42.9-10.el7 [openshift/origin-base] fipscheck.x86_64 0:1.4.1-6.el7 [openshift/origin-base] fipscheck-lib.x86_64 0:1.4.1-6.el7 [openshift/origin-base] fuse-libs.x86_64 0:2.9.2-8.el7 [openshift/origin-base] gperftools-libs.x86_64 0:2.4-8.el7 [openshift/origin-base] groff-base.x86_64 0:1.22.2-8.el7 [openshift/origin-base] hwdata.x86_64 0:0.252-8.6.el7 [openshift/origin-base] initscripts.x86_64 0:9.49.39-1.el7_4.1 [openshift/origin-base] iproute.x86_64 0:3.10.0-87.el7 [openshift/origin-base] less.x86_64 0:458-9.el7 [openshift/origin-base] leveldb.x86_64 0:1.12.0-5.el7.1 [openshift/origin-base] libaio.x86_64 0:0.3.109-13.el7 [openshift/origin-base] libarchive.x86_64 0:3.1.2-10.el7_2 [openshift/origin-base] libbabeltrace.x86_64 0:1.2.4-3.1.el7 [openshift/origin-base] libcephfs2.x86_64 2:12.2.2-0.el7 [openshift/origin-base] libedit.x86_64 0:3.0-12.20121213cvs.el7 [openshift/origin-base] libgnome-keyring.x86_64 0:3.12.0-1.el7 [openshift/origin-base] libibverbs.x86_64 0:13-7.el7 [openshift/origin-base] libmnl.x86_64 0:1.0.3-7.el7 [openshift/origin-base] libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3 [openshift/origin-base] libnfnetlink.x86_64 0:1.0.1-4.el7 [openshift/origin-base] libnl3.x86_64 0:3.2.28-4.el7 [openshift/origin-base] libpcap.x86_64 14:1.5.3-9.el7 [openshift/origin-base] librados2.x86_64 2:12.2.2-0.el7 [openshift/origin-base] libradosstriper1.x86_64 2:12.2.2-0.el7 [openshift/origin-base] librbd1.x86_64 2:12.2.2-0.el7 [openshift/origin-base] librgw2.x86_64 2:12.2.2-0.el7 [openshift/origin-base] libss.x86_64 0:1.42.9-10.el7 [openshift/origin-base] libunwind.x86_64 2:1.2-2.el7 [openshift/origin-base] lttng-ust.x86_64 0:2.10.0-1.el7 [openshift/origin-base] lzo.x86_64 0:2.06-8.el7 [openshift/origin-base] openssh.x86_64 0:7.4p1-13.el7_4 [openshift/origin-base] openssh-clients.x86_64 0:7.4p1-13.el7_4 [openshift/origin-base] pciutils.x86_64 0:3.5.1-2.el7 [openshift/origin-base] 
pciutils-libs.x86_64 0:3.5.1-2.el7 [openshift/origin-base] perl.x86_64 4:5.16.3-292.el7 [openshift/origin-base] perl-Carp.noarch 0:1.26-244.el7 [openshift/origin-base] perl-Encode.x86_64 0:2.51-7.el7 [openshift/origin-base] perl-Error.noarch 1:0.17020-2.el7 [openshift/origin-base] perl-Exporter.noarch 0:5.68-3.el7 [openshift/origin-base] perl-File-Path.noarch 0:2.09-2.el7 [openshift/origin-base] perl-File-Temp.noarch 0:0.23.01-3.el7 [openshift/origin-base] perl-Filter.x86_64 0:1.49-3.el7 [openshift/origin-base] perl-Getopt-Long.noarch 0:2.40-2.el7 [openshift/origin-base] perl-Git.noarch 0:1.8.3.1-12.el7_4 [openshift/origin-base] perl-HTTP-Tiny.noarch 0:0.033-3.el7 [openshift/origin-base] perl-PathTools.x86_64 0:3.40-5.el7 [openshift/origin-base] perl-Pod-Escapes.noarch 1:1.04-292.el7 [openshift/origin-base] perl-Pod-Perldoc.noarch 0:3.20-4.el7 [openshift/origin-base] perl-Pod-Simple.noarch 1:3.28-4.el7 [openshift/origin-base] perl-Pod-Usage.noarch 0:1.63-3.el7 [openshift/origin-base] perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 [openshift/origin-base] perl-Socket.x86_64 0:2.010-4.el7 [openshift/origin-base] perl-Storable.x86_64 0:2.45-3.el7 [openshift/origin-base] perl-TermReadKey.x86_64 0:2.30-20.el7 [openshift/origin-base] perl-Text-ParseWords.noarch 0:3.29-4.el7 [openshift/origin-base] perl-Time-HiRes.x86_64 4:1.9725-3.el7 [openshift/origin-base] perl-Time-Local.noarch 0:1.2300-2.el7 [openshift/origin-base] perl-constant.noarch 0:1.27-2.el7 [openshift/origin-base] perl-libs.x86_64 4:5.16.3-292.el7 [openshift/origin-base] perl-macros.x86_64 4:5.16.3-292.el7 [openshift/origin-base] perl-parent.noarch 1:0.225-244.el7 [openshift/origin-base] perl-podlators.noarch 0:2.5.1-3.el7 [openshift/origin-base] perl-threads.x86_64 0:1.87-4.el7 [openshift/origin-base] perl-threads-shared.x86_64 0:1.43-6.el7 [openshift/origin-base] python-backports.x86_64 0:1.0-8.el7 [openshift/origin-base] python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7 [openshift/origin-base] python-cephfs.x86_64 2:12.2.2-0.el7 [openshift/origin-base] python-prettytable.noarch 0:0.7.2-3.el7 [openshift/origin-base] python-rados.x86_64 2:12.2.2-0.el7 [openshift/origin-base] python-rbd.x86_64 2:12.2.2-0.el7 [openshift/origin-base] python-requests.noarch 0:2.6.0-1.el7_1 [openshift/origin-base] python-rgw.x86_64 2:12.2.2-0.el7 [openshift/origin-base] python-six.noarch 0:1.9.0-2.el7 [openshift/origin-base] python-urllib3.noarch 0:1.10.2-3.el7 [openshift/origin-base] rdma-core.x86_64 0:13-7.el7 [openshift/origin-base] rsync.x86_64 0:3.0.9-18.el7 [openshift/origin-base] snappy.x86_64 0:1.1.0-3.el7 [openshift/origin-base] tcp_wrappers-libs.x86_64 0:7.6-77.el7 [openshift/origin-base] userspace-rcu.x86_64 0:0.10.0-3.el7 [openshift/origin-base] Complete! [openshift/origin-base] Loaded plugins: fastestmirror, ovl [openshift/origin-base] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-base] : centos-ceph-luminous extras updates [openshift/origin-base] Cleaning up everything [openshift/origin-base] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-base] Cleaning up list of fastest mirrors [openshift/origin-base] --> LABEL io.k8s.display-name="OpenShift Origin CentOS 7 Base" io.k8s.description="This is the base image from which all OpenShift Origin images inherit." io.openshift.tags="openshift,base" [openshift/origin-base] --> Committing changes to openshift/origin-base:4253ab3 ... 
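For reference, the openshift/origin-base package layer above reduces to a single yum transaction. A minimal bash sketch, assuming the explicitly requested set matches the "Installed:" summary and the same install-and-verify pattern echoed for the later image builds in this log:
INSTALL_PKGS="bsdtar ceph-common device-mapper-persistent-data e2fsprogs ethtool git \
  iptables lsof nmap-ncat socat sysvinit-tools tree wget which xfsprogs"
yum install -y ${INSTALL_PKGS}   # pulls in the dependencies listed under "Dependency Installed:"
rpm -V ${INSTALL_PKGS}           # verify the freshly installed packages
yum clean all                    # keep the resulting image layer small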
[openshift/origin-base] --> Tagged as openshift/origin-base:latest [openshift/origin-base] --> Done hack/build-base-images.sh took 137 seconds + [[ master == release-1.[4-5] ]] + OS_BUILD_ENV_PULL_IMAGE=true + hack/env make release BUILD_TESTS=1 [INFO] [19:29:21+0000] Pulling the openshift/origin-release:golang-1.9 image to update it... Trying to pull repository registry.access.redhat.com/openshift/origin-release ... Trying to pull repository docker.io/openshift/origin-release ... golang-1.9: Pulling from docker.io/openshift/origin-release 85432449fd0f: Already exists 80f62a878d6e: Already exists af5b7adfe8ed: Already exists 05bb4cf7c38d: Pulling fs layer 05bb4cf7c38d: Verifying Checksum 05bb4cf7c38d: Download complete 05bb4cf7c38d: Pull complete Digest: sha256:646266f0500fc7415ee66a42f3037b4219f996a8437c191f0921ff10ed8c7ba8 Status: Downloaded newer image for docker.io/openshift/origin-release:golang-1.9 OS_ONLY_BUILD_PLATFORMS='linux/amd64' hack/build-rpms.sh [INFO] [19:30:37+0000] Building release RPMs for /go/src/github.com/openshift/origin/origin.spec ... Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.0c7LUG + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd /tmp/openshift/build-rpms/rpm/BUILD + rm -rf origin-3.10.0 + /usr/bin/gzip -dc /tmp/openshift/build-rpms/rpm/SOURCES/origin-3.10.0.tar.gz + /usr/bin/tar -xf - + STATUS=0 + '[' 0 -ne 0 ']' + cd origin-3.10.0 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.FRetau + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd origin-3.10.0 + BUILD_PLATFORM=linux/amd64 + OS_ONLY_BUILD_PLATFORMS=linux/amd64 + OS_GIT_COMMIT=4253ab3 + OS_GIT_TREE_STATE=clean + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 + OS_GIT_MAJOR=3 + OS_GIT_MINOR=10+ + OS_GIT_PATCH=0 + KUBE_GIT_COMMIT=a0ce1bc + KUBE_GIT_VERSION=v1.9.1+a0ce1bc657 + ETCD_GIT_VERSION=v3.2.16 + ETCD_GIT_COMMIT=121edf0 + OS_GIT_CATALOG_VERSION= + OS_BUILD_RELEASE_ARCHIVES=n + make build-cross make[1]: Entering directory `/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0' hack/build-cross.sh ++ Building go targets for linux/amd64: images/pod examples/hello-openshift ++ Building go targets for linux/amd64: pkg/network/sdn-cni-plugin vendor/github.com/containernetworking/plugins/plugins/ipam/host-local vendor/github.com/containernetworking/plugins/plugins/main/loopback ++ Building go targets for linux/amd64: cmd/hypershift cmd/openshift cmd/oc cmd/oadm cmd/template-service-broker vendor/k8s.io/kubernetes/cmd/hyperkube ++ Building go targets for linux/amd64: test/extended/extended.test hack/build-cross.sh took 453 seconds make[1]: Leaving directory `/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0' + OS_ONLY_BUILD_PLATFORMS=linux/amd64 + OS_GIT_COMMIT=4253ab3 + OS_GIT_TREE_STATE=clean + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 + OS_GIT_MAJOR=3 + OS_GIT_MINOR=10+ + OS_GIT_PATCH=0 + KUBE_GIT_COMMIT=a0ce1bc + KUBE_GIT_VERSION=v1.9.1+a0ce1bc657 + ETCD_GIT_VERSION=v3.2.16 + ETCD_GIT_COMMIT=121edf0 + OS_GIT_CATALOG_VERSION= + hack/build-go.sh vendor/github.com/onsi/ginkgo/ginkgo ++ Building go targets for linux/amd64: vendor/github.com/onsi/ginkgo/ginkgo [INFO] [19:38:46+0000] /go/src/github.com/openshift/origin/hack/build-go.sh exited with code 0 after 00h 00m 02s + OS_ONLY_BUILD_PLATFORMS=linux/amd64 + OS_GIT_COMMIT=4253ab3 + OS_GIT_TREE_STATE=clean + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 + OS_GIT_MAJOR=3 + OS_GIT_MINOR=10+ + OS_GIT_PATCH=0 + KUBE_GIT_COMMIT=a0ce1bc + KUBE_GIT_VERSION=v1.9.1+a0ce1bc657 + ETCD_GIT_VERSION=v3.2.16 + 
ETCD_GIT_COMMIT=121edf0 + OS_GIT_CATALOG_VERSION= + unset GOPATH + cmd/cluster-capacity/go/src/github.com/kubernetes-incubator/cluster-capacity/hack/build-cross.sh ++ Building go targets for linux/amd64: cmd/hypercc cmd/cluster-capacity/go/src/github.com/kubernetes-incubator/cluster-capacity/hack/build-cross.sh took 81 seconds + OS_GIT_COMMIT=4253ab3 + OS_GIT_TREE_STATE=clean + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 + OS_GIT_MAJOR=3 + OS_GIT_MINOR=10+ + OS_GIT_PATCH=0 + KUBE_GIT_COMMIT=a0ce1bc + KUBE_GIT_VERSION=v1.9.1+a0ce1bc657 + ETCD_GIT_VERSION=v3.2.16 + ETCD_GIT_COMMIT=121edf0 + OS_GIT_CATALOG_VERSION= + hack/generate-docs.sh [INFO] [19:40:07+0000] No compiled `gendocs` binary was found. Attempting to build one using: [INFO] [19:40:07+0000] $ hack/build-go.sh tools/gendocs ++ Building go targets for linux/amd64: tools/gendocs [INFO] [19:40:25+0000] hack/build-go.sh exited with code 0 after 00h 00m 18s [INFO] [19:40:25+0000] No compiled `genman` binary was found. Attempting to build one using: [INFO] [19:40:25+0000] $ hack/build-go.sh tools/genman ++ Building go targets for linux/amd64: tools/genman [INFO] [19:40:48+0000] hack/build-go.sh exited with code 0 after 00h 00m 23s [INFO] [19:40:48+0000] [CLEANUP] Cleaning up temporary directories Assets generated in /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0//docs/generated [INFO] [19:40:49+0000] [CLEANUP] Cleaning up temporary directories Assets generated in /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0//docs/man/man1 [INFO] [19:41:03+0000] [CLEANUP] Cleaning up temporary directories Assets generated in /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0//docs/man/man1 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.i0sTfi + exit 0 + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + '[' /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 '!=' / ']' + rm -rf /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 ++ dirname /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 + mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT + mkdir /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 + cd origin-3.10.0 ++ go env GOHOSTOS ++ go env GOHOSTARCH + PLATFORM=linux/amd64 + install -d /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin +++ INSTALLING oc + for bin in oc oadm openshift hypershift template-service-broker + echo '+++ INSTALLING oc' + install -p -m 755 _output/local/bin/linux/amd64/oc /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/oc + for bin in oc oadm openshift hypershift template-service-broker + echo '+++ INSTALLING oadm' + install -p -m 755 _output/local/bin/linux/amd64/oadm /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/oadm +++ INSTALLING oadm + for bin in oc oadm openshift hypershift template-service-broker + echo '+++ INSTALLING openshift' + install -p -m 755 _output/local/bin/linux/amd64/openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift +++ INSTALLING openshift +++ INSTALLING hypershift + for bin in oc oadm openshift hypershift template-service-broker + echo '+++ INSTALLING hypershift' + install -p -m 755 _output/local/bin/linux/amd64/hypershift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/hypershift +++ INSTALLING template-service-broker + for bin in oc oadm 
openshift hypershift template-service-broker + echo '+++ INSTALLING template-service-broker' + install -p -m 755 _output/local/bin/linux/amd64/template-service-broker /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/template-service-broker + install -d /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/libexec/origin + install -p -m 755 _output/local/bin/linux/amd64/extended.test /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/libexec/origin/ + install -p -m 755 _output/local/bin/linux/amd64/ginkgo /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/libexec/origin/ + install -p -m 755 _output/local/bin/linux/amd64/hyperkube /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/ + install -p -m 755 cmd/cluster-capacity/go/src/github.com/kubernetes-incubator/cluster-capacity/_output/local/bin/linux/amd64/hypercc /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/ + ln -s hypercc /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/cluster-capacity + install -p -m 755 _output/local/bin/linux/amd64/pod /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/ + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/lib/systemd/system + mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/sysconfig + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-deploy + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-docker-build + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-sti-build + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-git-clone + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-manage-dockerfile + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + 
ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-extract-image-content + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-f5-router + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-recycle + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift-router + for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router openshift-recycle openshift-router origin + ln -s openshift /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/origin + ln -s oc /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/kubectl + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/origin/master /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/origin/node + install -m 0644 contrib/systemd/origin-master.service /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/lib/systemd/system/origin-master.service + install -m 0644 contrib/systemd/origin-node.service /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/lib/systemd/system/origin-node.service + install -m 0644 contrib/systemd/origin-master.sysconfig /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/sysconfig/origin-master + install -m 0644 contrib/systemd/origin-node.sysconfig /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/sysconfig/origin-node + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/man/man1 + install -m 0644 docs/man/man1/oc-adm-build-chain.1 docs/man/man1/oc-adm-ca-create-key-pair.1 docs/man/man1/oc-adm-ca-create-master-certs.1 docs/man/man1/oc-adm-ca-create-server-cert.1 docs/man/man1/oc-adm-ca-create-signer-cert.1 docs/man/man1/oc-adm-ca-decrypt.1 docs/man/man1/oc-adm-ca-encrypt.1 docs/man/man1/oc-adm-ca.1 docs/man/man1/oc-adm-certificate-approve.1 docs/man/man1/oc-adm-certificate-deny.1 docs/man/man1/oc-adm-certificate.1 docs/man/man1/oc-adm-completion.1 docs/man/man1/oc-adm-config-current-context.1 docs/man/man1/oc-adm-config-delete-cluster.1 docs/man/man1/oc-adm-config-delete-context.1 docs/man/man1/oc-adm-config-get-clusters.1 docs/man/man1/oc-adm-config-get-contexts.1 docs/man/man1/oc-adm-config-rename-context.1 docs/man/man1/oc-adm-config-set-cluster.1 docs/man/man1/oc-adm-config-set-context.1 docs/man/man1/oc-adm-config-set-credentials.1 
docs/man/man1/oc-adm-config-set.1 docs/man/man1/oc-adm-config-unset.1 docs/man/man1/oc-adm-config-use-context.1 docs/man/man1/oc-adm-config-view.1 docs/man/man1/oc-adm-config.1 docs/man/man1/oc-adm-cordon.1 docs/man/man1/oc-adm-create-api-client-config.1 docs/man/man1/oc-adm-create-bootstrap-policy-file.1 docs/man/man1/oc-adm-create-bootstrap-project-template.1 docs/man/man1/oc-adm-create-error-template.1 docs/man/man1/oc-adm-create-key-pair.1 docs/man/man1/oc-adm-create-kubeconfig.1 docs/man/man1/oc-adm-create-login-template.1 docs/man/man1/oc-adm-create-master-certs.1 docs/man/man1/oc-adm-create-node-config.1 docs/man/man1/oc-adm-create-provider-selection-template.1 docs/man/man1/oc-adm-create-server-cert.1 docs/man/man1/oc-adm-create-signer-cert.1 docs/man/man1/oc-adm-diagnostics-aggregatedlogging.1 docs/man/man1/oc-adm-diagnostics-all.1 docs/man/man1/oc-adm-diagnostics-analyzelogs.1 docs/man/man1/oc-adm-diagnostics-appcreate.1 docs/man/man1/oc-adm-diagnostics-clusterregistry.1 docs/man/man1/oc-adm-diagnostics-clusterrolebindings.1 docs/man/man1/oc-adm-diagnostics-clusterroles.1 docs/man/man1/oc-adm-diagnostics-clusterrouter.1 docs/man/man1/oc-adm-diagnostics-configcontexts.1 docs/man/man1/oc-adm-diagnostics-diagnosticpod.1 docs/man/man1/oc-adm-diagnostics-etcdwritevolume.1 docs/man/man1/oc-adm-diagnostics-inpod-networkcheck.1 docs/man/man1/oc-adm-diagnostics-inpod-poddiagnostic.1 docs/man/man1/oc-adm-diagnostics-masterconfigcheck.1 docs/man/man1/oc-adm-diagnostics-masternode.1 docs/man/man1/oc-adm-diagnostics-metricsapiproxy.1 docs/man/man1/oc-adm-diagnostics-networkcheck.1 docs/man/man1/oc-adm-diagnostics-nodeconfigcheck.1 docs/man/man1/oc-adm-diagnostics-nodedefinitions.1 docs/man/man1/oc-adm-diagnostics-routecertificatevalidation.1 docs/man/man1/oc-adm-diagnostics-serviceexternalips.1 docs/man/man1/oc-adm-diagnostics-unitstatus.1 docs/man/man1/oc-adm-diagnostics.1 docs/man/man1/oc-adm-drain.1 docs/man/man1/oc-adm-groups-add-users.1 docs/man/man1/oc-adm-groups-new.1 docs/man/man1/oc-adm-groups-prune.1 docs/man/man1/oc-adm-groups-remove-users.1 docs/man/man1/oc-adm-groups-sync.1 docs/man/man1/oc-adm-groups.1 docs/man/man1/oc-adm-ipfailover.1 docs/man/man1/oc-adm-manage-node.1 docs/man/man1/oc-adm-migrate-authorization.1 docs/man/man1/oc-adm-migrate-etcd-ttl.1 docs/man/man1/oc-adm-migrate-image-references.1 docs/man/man1/oc-adm-migrate-legacy-hpa.1 docs/man/man1/oc-adm-migrate-storage.1 docs/man/man1/oc-adm-migrate.1 docs/man/man1/oc-adm-new-project.1 docs/man/man1/oc-adm-options.1 docs/man/man1/oc-adm-pod-network-isolate-projects.1 docs/man/man1/oc-adm-pod-network-join-projects.1 docs/man/man1/oc-adm-pod-network-make-projects-global.1 docs/man/man1/oc-adm-pod-network.1 docs/man/man1/oc-adm-policy-add-cluster-role-to-group.1 docs/man/man1/oc-adm-policy-add-cluster-role-to-user.1 docs/man/man1/oc-adm-policy-add-role-to-group.1 docs/man/man1/oc-adm-policy-add-role-to-user.1 docs/man/man1/oc-adm-policy-add-scc-to-group.1 docs/man/man1/oc-adm-policy-add-scc-to-user.1 docs/man/man1/oc-adm-policy-reconcile-cluster-role-bindings.1 docs/man/man1/oc-adm-policy-reconcile-cluster-roles.1 docs/man/man1/oc-adm-policy-reconcile-sccs.1 docs/man/man1/oc-adm-policy-remove-cluster-role-from-group.1 docs/man/man1/oc-adm-policy-remove-cluster-role-from-user.1 docs/man/man1/oc-adm-policy-remove-group.1 docs/man/man1/oc-adm-policy-remove-role-from-group.1 docs/man/man1/oc-adm-policy-remove-role-from-user.1 docs/man/man1/oc-adm-policy-remove-scc-from-group.1 docs/man/man1/oc-adm-policy-remove-scc-from-user.1 
docs/man/man1/oc-adm-policy-remove-user.1 docs/man/man1/oc-adm-policy-scc-review.1 docs/man/man1/oc-adm-policy-scc-subject-review.1 docs/man/man1/oc-adm-policy-who-can.1 docs/man/man1/oc-adm-policy.1 docs/man/man1/oc-adm-prune-builds.1 docs/man/man1/oc-adm-prune-deployments.1 docs/man/man1/oc-adm-prune-groups.1 docs/man/man1/oc-adm-prune-images.1 docs/man/man1/oc-adm-prune.1 docs/man/man1/oc-adm-registry.1 docs/man/man1/oc-adm-router.1 docs/man/man1/oc-adm-taint.1 docs/man/man1/oc-adm-top-images.1 docs/man/man1/oc-adm-top-imagestreams.1 docs/man/man1/oc-adm-top-node.1 docs/man/man1/oc-adm-top-pod.1 docs/man/man1/oc-adm-top.1 docs/man/man1/oc-adm-uncordon.1 docs/man/man1/oc-adm-verify-image-signature.1 docs/man/man1/oc-adm.1 docs/man/man1/oc-annotate.1 docs/man/man1/oc-apply-edit-last-applied.1 docs/man/man1/oc-apply-set-last-applied.1 docs/man/man1/oc-apply-view-last-applied.1 docs/man/man1/oc-apply.1 docs/man/man1/oc-attach.1 docs/man/man1/oc-auth-can-i.1 docs/man/man1/oc-auth-reconcile.1 docs/man/man1/oc-auth.1 docs/man/man1/oc-autoscale.1 docs/man/man1/oc-build-logs.1 docs/man/man1/oc-cancel-build.1 docs/man/man1/oc-cluster-add.1 docs/man/man1/oc-cluster-down.1 docs/man/man1/oc-cluster-status.1 docs/man/man1/oc-cluster-up.1 docs/man/man1/oc-cluster.1 docs/man/man1/oc-completion.1 docs/man/man1/oc-config-current-context.1 docs/man/man1/oc-config-delete-cluster.1 docs/man/man1/oc-config-delete-context.1 docs/man/man1/oc-config-get-clusters.1 docs/man/man1/oc-config-get-contexts.1 docs/man/man1/oc-config-rename-context.1 docs/man/man1/oc-config-set-cluster.1 docs/man/man1/oc-config-set-context.1 docs/man/man1/oc-config-set-credentials.1 docs/man/man1/oc-config-set.1 docs/man/man1/oc-config-unset.1 docs/man/man1/oc-config-use-context.1 docs/man/man1/oc-config-view.1 docs/man/man1/oc-config.1 docs/man/man1/oc-convert.1 docs/man/man1/oc-cp.1 docs/man/man1/oc-create-clusterresourcequota.1 docs/man/man1/oc-create-clusterrole.1 docs/man/man1/oc-create-clusterrolebinding.1 docs/man/man1/oc-create-configmap.1 docs/man/man1/oc-create-deployment.1 docs/man/man1/oc-create-deploymentconfig.1 docs/man/man1/oc-create-identity.1 docs/man/man1/oc-create-imagestream.1 docs/man/man1/oc-create-imagestreamtag.1 docs/man/man1/oc-create-namespace.1 docs/man/man1/oc-create-poddisruptionbudget.1 docs/man/man1/oc-create-policybinding.1 docs/man/man1/oc-create-priorityclass.1 docs/man/man1/oc-create-quota.1 docs/man/man1/oc-create-role.1 docs/man/man1/oc-create-rolebinding.1 docs/man/man1/oc-create-route-edge.1 docs/man/man1/oc-create-route-passthrough.1 docs/man/man1/oc-create-route-reencrypt.1 docs/man/man1/oc-create-route.1 docs/man/man1/oc-create-secret-docker-registry.1 docs/man/man1/oc-create-secret-generic.1 docs/man/man1/oc-create-secret-tls.1 docs/man/man1/oc-create-secret.1 docs/man/man1/oc-create-service-clusterip.1 docs/man/man1/oc-create-service-externalname.1 docs/man/man1/oc-create-service-loadbalancer.1 docs/man/man1/oc-create-service-nodeport.1 docs/man/man1/oc-create-service.1 docs/man/man1/oc-create-serviceaccount.1 docs/man/man1/oc-create-user.1 docs/man/man1/oc-create-useridentitymapping.1 docs/man/man1/oc-create.1 docs/man/man1/oc-debug.1 docs/man/man1/oc-delete.1 docs/man/man1/oc-deploy.1 docs/man/man1/oc-describe.1 docs/man/man1/oc-edit.1 docs/man/man1/oc-env.1 docs/man/man1/oc-ex-build-chain.1 docs/man/man1/oc-ex-config-patch.1 docs/man/man1/oc-ex-config.1 docs/man/man1/oc-ex-diagnostics-aggregatedlogging.1 docs/man/man1/oc-ex-diagnostics-all.1 docs/man/man1/oc-ex-diagnostics-analyzelogs.1 
docs/man/man1/oc-ex-diagnostics-appcreate.1 docs/man/man1/oc-ex-diagnostics-clusterregistry.1 docs/man/man1/oc-ex-diagnostics-clusterrolebindings.1 docs/man/man1/oc-ex-diagnostics-clusterroles.1 docs/man/man1/oc-ex-diagnostics-clusterrouter.1 docs/man/man1/oc-ex-diagnostics-configcontexts.1 docs/man/man1/oc-ex-diagnostics-diagnosticpod.1 docs/man/man1/oc-ex-diagnostics-etcdwritevolume.1 docs/man/man1/oc-ex-diagnostics-inpod-networkcheck.1 docs/man/man1/oc-ex-diagnostics-inpod-poddiagnostic.1 docs/man/man1/oc-ex-diagnostics-masterconfigcheck.1 docs/man/man1/oc-ex-diagnostics-masternode.1 docs/man/man1/oc-ex-diagnostics-metricsapiproxy.1 docs/man/man1/oc-ex-diagnostics-networkcheck.1 docs/man/man1/oc-ex-diagnostics-nodeconfigcheck.1 docs/man/man1/oc-ex-diagnostics-nodedefinitions.1 docs/man/man1/oc-ex-diagnostics-routecertificatevalidation.1 docs/man/man1/oc-ex-diagnostics-serviceexternalips.1 docs/man/man1/oc-ex-diagnostics-unitstatus.1 docs/man/man1/oc-ex-diagnostics.1 docs/man/man1/oc-ex-dockergc.1 docs/man/man1/oc-ex-ipfailover.1 docs/man/man1/oc-ex-options.1 docs/man/man1/oc-ex-prune-groups.1 docs/man/man1/oc-ex-sync-groups.1 docs/man/man1/oc-ex-validate-master-config.1 docs/man/man1/oc-ex-validate-node-config.1 docs/man/man1/oc-ex-validate.1 docs/man/man1/oc-ex.1 docs/man/man1/oc-exec.1 docs/man/man1/oc-explain.1 docs/man/man1/oc-export.1 docs/man/man1/oc-expose.1 docs/man/man1/oc-extract.1 docs/man/man1/oc-get.1 docs/man/man1/oc-idle.1 docs/man/man1/oc-image-mirror.1 docs/man/man1/oc-image.1 docs/man/man1/oc-import-app.json.1 docs/man/man1/oc-import-image.1 docs/man/man1/oc-import.1 docs/man/man1/oc-label.1 docs/man/man1/oc-login.1 docs/man/man1/oc-logout.1 docs/man/man1/oc-logs.1 docs/man/man1/oc-new-app.1 docs/man/man1/oc-new-build.1 docs/man/man1/oc-new-project.1 docs/man/man1/oc-observe.1 docs/man/man1/oc-options.1 docs/man/man1/oc-patch.1 docs/man/man1/oc-plugin.1 docs/man/man1/oc-policy-add-role-to-group.1 docs/man/man1/oc-policy-add-role-to-user.1 docs/man/man1/oc-policy-can-i.1 docs/man/man1/oc-policy-remove-group.1 docs/man/man1/oc-policy-remove-role-from-group.1 docs/man/man1/oc-policy-remove-role-from-user.1 docs/man/man1/oc-policy-remove-user.1 docs/man/man1/oc-policy-scc-review.1 docs/man/man1/oc-policy-scc-subject-review.1 docs/man/man1/oc-policy-who-can.1 docs/man/man1/oc-policy.1 docs/man/man1/oc-port-forward.1 docs/man/man1/oc-process.1 docs/man/man1/oc-project.1 docs/man/man1/oc-projects.1 docs/man/man1/oc-proxy.1 docs/man/man1/oc-registry-info.1 docs/man/man1/oc-registry-login.1 docs/man/man1/oc-registry.1 docs/man/man1/oc-replace.1 docs/man/man1/oc-rollback.1 docs/man/man1/oc-rollout-cancel.1 docs/man/man1/oc-rollout-history.1 docs/man/man1/oc-rollout-latest.1 docs/man/man1/oc-rollout-pause.1 docs/man/man1/oc-rollout-resume.1 docs/man/man1/oc-rollout-retry.1 docs/man/man1/oc-rollout-status.1 docs/man/man1/oc-rollout-undo.1 docs/man/man1/oc-rollout.1 docs/man/man1/oc-rsh.1 docs/man/man1/oc-rsync.1 docs/man/man1/oc-run.1 docs/man/man1/oc-scale.1 docs/man/man1/oc-secrets-add.1 docs/man/man1/oc-secrets-link.1 docs/man/man1/oc-secrets-new-basicauth.1 docs/man/man1/oc-secrets-new-dockercfg.1 docs/man/man1/oc-secrets-new-sshauth.1 docs/man/man1/oc-secrets-new.1 docs/man/man1/oc-secrets-unlink.1 docs/man/man1/oc-secrets.1 docs/man/man1/oc-serviceaccounts-create-kubeconfig.1 docs/man/man1/oc-serviceaccounts-get-token.1 docs/man/man1/oc-serviceaccounts-new-token.1 docs/man/man1/oc-serviceaccounts.1 docs/man/man1/oc-set-build-hook.1 docs/man/man1/oc-set-build-secret.1 
docs/man/man1/oc-set-deployment-hook.1 docs/man/man1/oc-set-env.1 docs/man/man1/oc-set-image-lookup.1 docs/man/man1/oc-set-image.1 docs/man/man1/oc-set-probe.1 docs/man/man1/oc-set-resources.1 docs/man/man1/oc-set-route-backends.1 docs/man/man1/oc-set-triggers.1 docs/man/man1/oc-set-volumes.1 docs/man/man1/oc-set.1 docs/man/man1/oc-start-build.1 docs/man/man1/oc-status.1 docs/man/man1/oc-tag.1 docs/man/man1/oc-types.1 docs/man/man1/oc-version.1 docs/man/man1/oc-volumes.1 docs/man/man1/oc-whoami.1 docs/man/man1/oc.1 docs/man/man1/openshift-completion.1 docs/man/man1/openshift-options.1 docs/man/man1/openshift-start-etcd.1 docs/man/man1/openshift-start-master-api.1 docs/man/man1/openshift-start-master-controllers.1 docs/man/man1/openshift-start-master.1 docs/man/man1/openshift-start-network.1 docs/man/man1/openshift-start-node.1 docs/man/man1/openshift-start-template-service-broker.1 docs/man/man1/openshift-start.1 docs/man/man1/openshift-version.1 docs/man/man1/openshift.1 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/man/man1/ + mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/var/lib/origin + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/cni/net.d + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/opt/cni/bin + install -p -m 0755 _output/local/bin/linux/amd64/sdn-cni-plugin /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/opt/cni/bin/openshift-sdn + install -p -m 0755 _output/local/bin/linux/amd64/host-local /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/opt/cni/bin + install -p -m 0755 _output/local/bin/linux/amd64/loopback /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/opt/cni/bin + install -d -m 0755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/lib/systemd/system/origin-node.service.d + install -p -m 0644 contrib/systemd/openshift-sdn-ovs.conf /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/lib/systemd/system/origin-node.service.d/openshift-sdn-ovs.conf + install -d -m 755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/bash_completion.d/ +++ INSTALLING BASH COMPLETIONS FOR oc + for bin in oc openshift + echo '+++ INSTALLING BASH COMPLETIONS FOR oc ' + /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/oc completion bash + chmod 644 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/bash_completion.d/oc +++ INSTALLING BASH COMPLETIONS FOR openshift + for bin in oc openshift + echo '+++ INSTALLING BASH COMPLETIONS FOR openshift ' + /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/bin/openshift completion bash + chmod 644 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/bash_completion.d/openshift + install -d -m 755 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/systemd/system.conf.d/ + install -p -m 644 contrib/systemd/origin-accounting.conf /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/etc/systemd/system.conf.d/ + mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/sbin + 
OS_CONF_FILE=/etc/yum.conf + sed 's|@@CONF_FILE-VARIABLE@@|/etc/yum.conf|' contrib/excluder/excluder-template + sed -i 's|@@PACKAGE_LIST-VARIABLE@@|origin origin-clients origin-clients-redistributable origin-master origin-node origin-pod origin-recycle origin-sdn-ovs origin-tests|' /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/sbin/origin-excluder + chmod 0744 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/sbin/origin-excluder + sed 's|@@CONF_FILE-VARIABLE@@|/etc/yum.conf|' contrib/excluder/excluder-template + sed -i 's|@@PACKAGE_LIST-VARIABLE@@|docker*1.14* docker*1.15* docker*1.16* docker*1.17* docker*1.18* docker*1.19* docker*1.20*|' /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/sbin/origin-docker-excluder + chmod 0744 /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/sbin/origin-docker-excluder + install -d /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/origin/migration + install -p -m 755 contrib/migration/fix-3.4-paths.sh contrib/migration/migrate-image-manifests.sh contrib/migration/migrate-network-policy.sh contrib/migration/unmigrate-network-policy.sh /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/origin/migration/ + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/brp-compress Processing files: origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.cktWIe + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd origin-3.10.0 + DOCDIR=/tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/doc/origin-3.10.0 + export DOCDIR + /usr/bin/mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/doc/origin-3.10.0 + cp -pr README.md /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/doc/origin-3.10.0 + exit 0 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.N8JLcb + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd origin-3.10.0 + LICENSEDIR=/tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-3.10.0 + export LICENSEDIR + /usr/bin/mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-3.10.0 + cp -pr LICENSE /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-3.10.0 + exit 0 warning: File listed twice: /etc/origin Provides: config(origin) = 3.10.0-0.alpha.0.549.4253ab3 origin = 3.10.0-0.alpha.0.549.4253ab3 origin(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(interp): /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(pre): /bin/sh Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) rtld(GNU_HASH) Obsoletes: openshift < 3.0.2.900 Processing files: origin-master-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: config(origin-master) = 3.10.0-0.alpha.0.549.4253ab3 origin-master = 3.10.0-0.alpha.0.549.4253ab3 origin-master(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 
rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(post): /bin/sh systemd Requires(preun): /bin/sh systemd Requires(postun): /bin/sh systemd Requires: /bin/bash Obsoletes: openshift-master < 3.0.2.900 Processing files: origin-tests-3.10.0-0.alpha.0.549.4253ab3.x86_64 warning: File listed twice: /usr/libexec/origin/extended.test Provides: origin-tests = 3.10.0-0.alpha.0.549.4253ab3 origin-tests(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) rtld(GNU_HASH) Processing files: origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: config(origin-node) = 3.10.0-0.alpha.0.549.4253ab3 origin-node = 3.10.0-0.alpha.0.549.4253ab3 origin-node(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 tuned-profiles-origin-node Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(post): /bin/sh systemd Requires(preun): /bin/sh systemd Requires(postun): /bin/sh systemd Obsoletes: openshift-node < 3.0.2.900 tuned-profiles-origin-node Processing files: origin-clients-3.10.0-0.alpha.0.549.4253ab3.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.02xCHq + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd origin-3.10.0 + LICENSEDIR=/tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-clients-3.10.0 + export LICENSEDIR + /usr/bin/mkdir -p /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-clients-3.10.0 + cp -pr LICENSE /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64/usr/share/licenses/origin-clients-3.10.0 + exit 0 Provides: origin-clients = 3.10.0-0.alpha.0.549.4253ab3 origin-clients(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) rtld(GNU_HASH) Obsoletes: openshift-clients < 3.0.2.900 Processing files: origin-pod-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: origin-pod = 3.10.0-0.alpha.0.549.4253ab3 origin-pod(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Processing files: origin-sdn-ovs-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: origin-sdn-ovs = 3.10.0-0.alpha.0.549.4253ab3 origin-sdn-ovs(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(interp): /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(posttrans): /bin/sh Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) rtld(GNU_HASH) Obsoletes: openshift-sdn-ovs < 3.0.2.900 Processing files: origin-federation-services-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: origin-federation-services = 
3.10.0-0.alpha.0.549.4253ab3 origin-federation-services(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) rtld(GNU_HASH) Processing files: origin-template-service-broker-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: origin-template-service-broker = 3.10.0-0.alpha.0.549.4253ab3 origin-template-service-broker(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) Processing files: origin-cluster-capacity-3.10.0-0.alpha.0.549.4253ab3.x86_64 Provides: origin-cluster-capacity = 3.10.0-0.alpha.0.549.4253ab3 origin-cluster-capacity(x86-64) = 3.10.0-0.alpha.0.549.4253ab3 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) Processing files: origin-excluder-3.10.0-0.alpha.0.549.4253ab3.noarch Provides: origin-excluder = 3.10.0-0.alpha.0.549.4253ab3 Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(preun): /bin/sh Requires(pretrans): /bin/sh Requires(posttrans): /bin/sh Requires: /bin/bash Processing files: origin-docker-excluder-3.10.0-0.alpha.0.549.4253ab3.noarch Provides: origin-docker-excluder = 3.10.0-0.alpha.0.549.4253ab3 Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(preun): /bin/sh Requires(pretrans): /bin/sh Requires(posttrans): /bin/sh Requires: /bin/bash Checking for unpackaged file(s): /usr/lib/rpm/check-files /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-master-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-tests-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-clients-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-pod-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-sdn-ovs-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-federation-services-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-template-service-broker-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/x86_64/origin-cluster-capacity-3.10.0-0.alpha.0.549.4253ab3.x86_64.rpm Wrote: 
/tmp/openshift/build-rpms/rpm/RPMS/noarch/origin-excluder-3.10.0-0.alpha.0.549.4253ab3.noarch.rpm Wrote: /tmp/openshift/build-rpms/rpm/RPMS/noarch/origin-docker-excluder-3.10.0-0.alpha.0.549.4253ab3.noarch.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.YZJolF + umask 022 + cd /tmp/openshift/build-rpms/rpm/BUILD + cd origin-3.10.0 + /usr/bin/rm -rf /tmp/openshift/build-rpms/rpm/BUILDROOT/origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 + exit 0 make[1]: Entering directory `/go/src/github.com/openshift/origin' rm -rf _output make[1]: Leaving directory `/go/src/github.com/openshift/origin' Spawning worker 0 with 3 pkgs Spawning worker 1 with 3 pkgs Spawning worker 2 with 3 pkgs Spawning worker 3 with 3 pkgs Workers Finished Saving Primary metadata Saving file lists metadata Saving other metadata Generating sqlite DBs Sqlite DBs complete [INFO] [19:43:46+0000] Repository file for `yum` or `dnf` placed at /go/src/github.com/openshift/origin/_output/local/releases/rpms/local-release.repo [INFO] [19:43:46+0000] Install it with: [INFO] [19:43:46+0000] $ mv '/go/src/github.com/openshift/origin/_output/local/releases/rpms/local-release.repo' '/etc/yum.repos.d [INFO] [19:43:47+0000] hack/build-rpms.sh exited with code 0 after 00h 13m 17s hack/build-images.sh [openshift/origin-pod] --> FROM openshift/origin-source [openshift/origin-pod] --> RUN INSTALL_PKGS="origin-pod" && yum --enablerepo=origin-local-release install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all [openshift/origin-pod] Loaded plugins: fastestmirror, ovl [openshift/origin-pod] Determining fastest mirrors [openshift/origin-pod] * base: mirror.atlanticmetro.net [openshift/origin-pod] * extras: centos.aol.com [openshift/origin-pod] * updates: mirror.cs.pitt.edu [openshift/origin-pod] Resolving Dependencies [openshift/origin-pod] --> Running transaction check [openshift/origin-pod] ---> Package origin-pod.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin-pod] --> Finished Dependency Resolution [openshift/origin-pod] Dependencies Resolved [openshift/origin-pod] ================================================================================ [openshift/origin-pod] Package Arch Version Repository Size [openshift/origin-pod] ================================================================================ [openshift/origin-pod] Installing: [openshift/origin-pod] origin-pod x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 387 k [openshift/origin-pod] Transaction Summary [openshift/origin-pod] ================================================================================ [openshift/origin-pod] Install 1 Package [openshift/origin-pod] Total download size: 387 k [openshift/origin-pod] Installed size: 1.1 M [openshift/origin-pod] Downloading packages: [openshift/origin-pod] Running transaction check [openshift/origin-pod] Running transaction test [openshift/origin-pod] Transaction test succeeded [openshift/origin-pod] Running transaction [openshift/origin-pod] Installing : origin-pod-3.10.0-0.alpha.0.549.4253ab3.x86_64 1/1 [openshift/origin-pod] Verifying : origin-pod-3.10.0-0.alpha.0.549.4253ab3.x86_64 1/1 [openshift/origin-pod] Installed: [openshift/origin-pod] origin-pod.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin-pod] Complete! 
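The origin-pod layer above is the general pattern for every image in this run: the locally built RPMs are exposed through the origin-local-release repository and installed with yum. A minimal sketch of doing the same by hand, assuming the repo file path printed by hack/build-rpms.sh above (relative to the origin checkout):
mv _output/local/releases/rpms/local-release.repo /etc/yum.repos.d/   # install the generated repo definition
yum --enablerepo=origin-local-release install -y origin-pod           # install any freshly built package from it
rpm -V origin-pod                                                     # same verification the image build runs
yum clean all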
[openshift/origin-pod] Loaded plugins: fastestmirror, ovl [openshift/origin-pod] Cleaning repos: base extras updates [openshift/origin-pod] Cleaning up everything [openshift/origin-pod] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-pod] Cleaning up list of fastest mirrors [openshift/origin-pod] --> LABEL io.k8s.display-name="OpenShift Origin Pod Infrastructure" io.k8s.description="This is a component of OpenShift Origin and holds on to the shared Linux namespaces within a Pod." io.openshift.tags="openshift,pod" [openshift/origin-pod] --> USER 1001 [openshift/origin-pod] --> ENTRYPOINT ["/usr/bin/pod"] [openshift/origin-pod] --> Committing changes to openshift/origin-pod:4253ab3 ... [openshift/origin-pod] --> Tagged as openshift/origin-pod:latest [openshift/origin-pod] --> Done [openshift/origin-cluster-capacity] --> FROM openshift/origin-source [openshift/origin-cluster-capacity] --> RUN INSTALL_PKGS="origin-cluster-capacity" && yum --enablerepo=origin-local-release install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all [openshift/origin-cluster-capacity] Loaded plugins: fastestmirror, ovl [openshift/origin-cluster-capacity] Determining fastest mirrors [openshift/origin-cluster-capacity] * base: mirror.math.princeton.edu [openshift/origin-cluster-capacity] * extras: mirror.teklinks.com [openshift/origin-cluster-capacity] * updates: mirror.datto.com [openshift/origin-cluster-capacity] Resolving Dependencies [openshift/origin-cluster-capacity] --> Running transaction check [openshift/origin-cluster-capacity] ---> Package origin-cluster-capacity.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin-cluster-capacity] --> Finished Dependency Resolution [openshift/origin-cluster-capacity] Dependencies Resolved [openshift/origin-cluster-capacity] ================================================================================ [openshift/origin-cluster-capacity] Package Arch Version Repository Size [openshift/origin-cluster-capacity] ================================================================================ [openshift/origin-cluster-capacity] Installing: [openshift/origin-cluster-capacity] origin-cluster-capacity [openshift/origin-cluster-capacity] x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 11 M [openshift/origin-cluster-capacity] Transaction Summary [openshift/origin-cluster-capacity] ================================================================================ [openshift/origin-cluster-capacity] Install 1 Package [openshift/origin-cluster-capacity] Total download size: 11 M [openshift/origin-cluster-capacity] Installed size: 69 M [openshift/origin-cluster-capacity] Downloading packages: [openshift/origin-cluster-capacity] Running transaction check [openshift/origin-cluster-capacity] Running transaction test [openshift/origin-cluster-capacity] Transaction test succeeded [openshift/origin-cluster-capacity] Running transaction [openshift/origin-cluster-capacity] Installing : origin-cluster-capacity-3.10.0-0.alpha.0.549.4253ab3.x86_6 1/1 [openshift/origin-cluster-capacity] Verifying : origin-cluster-capacity-3.10.0-0.alpha.0.549.4253ab3.x86_6 1/1 [openshift/origin-cluster-capacity] Installed: [openshift/origin-cluster-capacity] origin-cluster-capacity.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin-cluster-capacity] Complete! 
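The cluster-capacity image installed above gets its /usr/bin/cluster-capacity command from one of the symlinks created during %install earlier in this log, where openshift and hypercc are laid out as multi-call binaries. A minimal bash sketch of that layout, with a hypothetical DESTDIR standing in for the RPM buildroot:
DESTDIR=/tmp/buildroot           # hypothetical; the spec file supplies the real buildroot path
install -d "${DESTDIR}/usr/bin"
for cmd in openshift-deploy openshift-docker-build openshift-sti-build openshift-git-clone \
           openshift-manage-dockerfile openshift-extract-image-content openshift-f5-router \
           openshift-recycle openshift-router origin; do
  ln -s openshift "${DESTDIR}/usr/bin/${cmd}"   # each name is dispatched inside the openshift binary
done
ln -s oc "${DESTDIR}/usr/bin/kubectl"           # oc also answers as kubectl
ln -s hypercc "${DESTDIR}/usr/bin/cluster-capacity"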
[openshift/origin-cluster-capacity] Loaded plugins: fastestmirror, ovl [openshift/origin-cluster-capacity] Cleaning repos: base extras updates [openshift/origin-cluster-capacity] Cleaning up everything [openshift/origin-cluster-capacity] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-cluster-capacity] Cleaning up list of fastest mirrors [openshift/origin-cluster-capacity] --> LABEL io.k8s.display-name="OpenShift Origin Cluster Capacity" io.k8s.description="This is a component of OpenShift Origin and runs cluster capacity analysis tool." [openshift/origin-cluster-capacity] --> CMD ["/usr/bin/cluster-capacity --help"] [openshift/origin-cluster-capacity] --> Committing changes to openshift/origin-cluster-capacity:4253ab3 ... [openshift/origin-cluster-capacity] --> Tagged as openshift/origin-cluster-capacity:latest [openshift/origin-cluster-capacity] --> Done [openshift/origin-template-service-broker] --> FROM openshift/origin-source [openshift/origin-template-service-broker] --> RUN INSTALL_PKGS="origin-template-service-broker" && yum --enablerepo=origin-local-release install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all [openshift/origin-template-service-broker] Loaded plugins: fastestmirror, ovl [openshift/origin-template-service-broker] Determining fastest mirrors [openshift/origin-template-service-broker] * base: mirror.atlanticmetro.net [openshift/origin-template-service-broker] * extras: mirror.teklinks.com [openshift/origin-template-service-broker] * updates: mirror.datto.com [openshift/origin-template-service-broker] Resolving Dependencies [openshift/origin-template-service-broker] --> Running transaction check [openshift/origin-template-service-broker] ---> Package origin-template-service-broker.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin-template-service-broker] --> Finished Dependency Resolution [openshift/origin-template-service-broker] Dependencies Resolved [openshift/origin-template-service-broker] ================================================================================ [openshift/origin-template-service-broker] Package Arch Version Repository Size [openshift/origin-template-service-broker] ================================================================================ [openshift/origin-template-service-broker] Installing: [openshift/origin-template-service-broker] origin-template-service-broker [openshift/origin-template-service-broker] x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 14 M [openshift/origin-template-service-broker] Transaction Summary [openshift/origin-template-service-broker] ================================================================================ [openshift/origin-template-service-broker] Install 1 Package [openshift/origin-template-service-broker] Total download size: 14 M [openshift/origin-template-service-broker] Installed size: 82 M [openshift/origin-template-service-broker] Downloading packages: [openshift/origin-template-service-broker] Running transaction check [openshift/origin-template-service-broker] Running transaction test [openshift/origin-template-service-broker] Transaction test succeeded [openshift/origin-template-service-broker] Running transaction [openshift/origin-template-service-broker] Installing : origin-template-service-broker-3.10.0-0.alpha.0.549.4253ab 1/1 [openshift/origin-template-service-broker] Verifying : origin-template-service-broker-3.10.0-0.alpha.0.549.4253ab 1/1 
[openshift/origin-template-service-broker] Installed: [openshift/origin-template-service-broker] origin-template-service-broker.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin-template-service-broker] Complete! [openshift/origin-template-service-broker] Loaded plugins: fastestmirror, ovl [openshift/origin-template-service-broker] Cleaning repos: base extras updates [openshift/origin-template-service-broker] Cleaning up everything [openshift/origin-template-service-broker] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-template-service-broker] Cleaning up list of fastest mirrors [openshift/origin-template-service-broker] --> CMD [ "/usr/bin/template-service-broker" ] [openshift/origin-template-service-broker] --> Committing changes to openshift/origin-template-service-broker:4253ab3 ... [openshift/origin-template-service-broker] --> Tagged as openshift/origin-template-service-broker:latest [openshift/origin-template-service-broker] --> Done [openshift/origin-egress-router] --> FROM openshift/origin-base [openshift/origin-egress-router] --> RUN INSTALL_PKGS="iproute iputils" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all [openshift/origin-egress-router] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-router] Determining fastest mirrors [openshift/origin-egress-router] * base: mirror.atlanticmetro.net [openshift/origin-egress-router] * extras: mirror.teklinks.com [openshift/origin-egress-router] * updates: mirror.cs.pitt.edu [openshift/origin-egress-router] Package iproute-3.10.0-87.el7.x86_64 already installed and latest version [openshift/origin-egress-router] Package iputils-20160308-10.el7.x86_64 already installed and latest version [openshift/origin-egress-router] Nothing to do [openshift/origin-egress-router] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-router] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-egress-router] : centos-ceph-luminous extras updates [openshift/origin-egress-router] Cleaning up everything [openshift/origin-egress-router] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-egress-router] Cleaning up list of fastest mirrors [openshift/origin-egress-router] --> ADD egress-router.sh /bin/egress-router.sh [openshift/origin-egress-router] --> LABEL io.k8s.display-name="OpenShift Origin Egress Router" io.k8s.description="This is a component of OpenShift Origin and contains an egress router." io.openshift.tags="openshift,router,egress" [openshift/origin-egress-router] --> ENTRYPOINT /bin/egress-router.sh [openshift/origin-egress-router] --> Committing changes to openshift/origin-egress-router:4253ab3 ... 
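The openshift/origin-egress-router image above shows the second pattern used for the small network helper images: start from openshift/origin-base, make sure the needed host tools are present (here yum reports iproute and iputils already installed, hence "Nothing to do"), add a single shell script, and make that script the entrypoint. A Dockerfile-style sketch reconstructed from the echoed steps (egress-router.sh is assumed to sit in the Docker build context, as the ADD implies):

    FROM openshift/origin-base
    # iproute and iputils already ship in origin-base, so this install is effectively a verification step
    RUN INSTALL_PKGS="iproute iputils" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all
    ADD egress-router.sh /bin/egress-router.sh
    LABEL io.k8s.display-name="OpenShift Origin Egress Router" \
          io.k8s.description="This is a component of OpenShift Origin and contains an egress router." \
          io.openshift.tags="openshift,router,egress"
    ENTRYPOINT /bin/egress-router.sh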
[openshift/origin-egress-router] --> Tagged as openshift/origin-egress-router:latest [openshift/origin-egress-router] --> Done [openshift/origin-federation] --> FROM openshift/origin-base [openshift/origin-federation] --> RUN INSTALL_PKGS="origin-federation-services" && yum --enablerepo=origin-local-release install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all && ln -s /usr/bin/hyperkube /hyperkube [openshift/origin-federation] Loaded plugins: fastestmirror, ovl [openshift/origin-federation] Determining fastest mirrors [openshift/origin-federation] * base: mirror.math.princeton.edu [openshift/origin-federation] * extras: centos.aol.com [openshift/origin-federation] * updates: mirror.cs.pitt.edu [openshift/origin-federation] Resolving Dependencies [openshift/origin-federation] --> Running transaction check [openshift/origin-federation] ---> Package origin-federation-services.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin-federation] --> Finished Dependency Resolution [openshift/origin-federation] Dependencies Resolved [openshift/origin-federation] ================================================================================ [openshift/origin-federation] Package Arch Version Repository Size [openshift/origin-federation] ================================================================================ [openshift/origin-federation] Installing: [openshift/origin-federation] origin-federation-services [openshift/origin-federation] x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 31 M [openshift/origin-federation] Transaction Summary [openshift/origin-federation] ================================================================================ [openshift/origin-federation] Install 1 Package [openshift/origin-federation] Total download size: 31 M [openshift/origin-federation] Installed size: 259 M [openshift/origin-federation] Downloading packages: [openshift/origin-federation] Running transaction check [openshift/origin-federation] Running transaction test [openshift/origin-federation] Transaction test succeeded [openshift/origin-federation] Running transaction [openshift/origin-federation] Installing : origin-federation-services-3.10.0-0.alpha.0.549.4253ab3.x8 1/1 [openshift/origin-federation] Verifying : origin-federation-services-3.10.0-0.alpha.0.549.4253ab3.x8 1/1 [openshift/origin-federation] Installed: [openshift/origin-federation] origin-federation-services.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin-federation] Complete! [openshift/origin-federation] Loaded plugins: fastestmirror, ovl [openshift/origin-federation] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-federation] : centos-ceph-luminous extras updates [openshift/origin-federation] Cleaning up everything [openshift/origin-federation] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-federation] Cleaning up list of fastest mirrors [openshift/origin-federation] --> LABEL io.k8s.display-name="OpenShift Origin Federation" io.k8s.description="This is a component of OpenShift Origin and contains the software for running federation servers." [openshift/origin-federation] --> Committing changes to openshift/origin-federation:4253ab3 ... 
[openshift/origin-federation] --> Tagged as openshift/origin-federation:latest [openshift/origin-federation] --> Done [openshift/origin-egress-dns-proxy] --> FROM openshift/origin-base [openshift/origin-egress-dns-proxy] --> RUN INSTALL_PKGS="haproxy18 rsyslog" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && mkdir -p /var/lib/haproxy/{run,log} && mkdir -p /etc/haproxy && setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && chown -R :0 /var/lib/haproxy && chmod -R g+w /var/lib/haproxy && touch /etc/haproxy/haproxy.cfg [openshift/origin-egress-dns-proxy] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-dns-proxy] Determining fastest mirrors [openshift/origin-egress-dns-proxy] * base: mirror.atlanticmetro.net [openshift/origin-egress-dns-proxy] * extras: centos.aol.com [openshift/origin-egress-dns-proxy] * updates: mirror.cs.pitt.edu [openshift/origin-egress-dns-proxy] Resolving Dependencies [openshift/origin-egress-dns-proxy] --> Running transaction check [openshift/origin-egress-dns-proxy] ---> Package haproxy18.x86_64 0:1.8.1-5.el7 will be installed [openshift/origin-egress-dns-proxy] ---> Package rsyslog.x86_64 0:8.24.0-12.el7 will be installed [openshift/origin-egress-dns-proxy] --> Processing Dependency: logrotate >= 3.5.2 for package: rsyslog-8.24.0-12.el7.x86_64 [openshift/origin-egress-dns-proxy] --> Processing Dependency: libestr >= 0.1.9 for package: rsyslog-8.24.0-12.el7.x86_64 [openshift/origin-egress-dns-proxy] --> Processing Dependency: libfastjson.so.4()(64bit) for package: rsyslog-8.24.0-12.el7.x86_64 [openshift/origin-egress-dns-proxy] --> Processing Dependency: libestr.so.0()(64bit) for package: rsyslog-8.24.0-12.el7.x86_64 [openshift/origin-egress-dns-proxy] --> Running transaction check [openshift/origin-egress-dns-proxy] ---> Package libestr.x86_64 0:0.1.9-2.el7 will be installed [openshift/origin-egress-dns-proxy] ---> Package libfastjson.x86_64 0:0.99.4-2.el7 will be installed [openshift/origin-egress-dns-proxy] ---> Package logrotate.x86_64 0:3.8.6-14.el7 will be installed [openshift/origin-egress-dns-proxy] --> Finished Dependency Resolution [openshift/origin-egress-dns-proxy] Dependencies Resolved [openshift/origin-egress-dns-proxy] ================================================================================ [openshift/origin-egress-dns-proxy] Package Arch Version Repository Size [openshift/origin-egress-dns-proxy] ================================================================================ [openshift/origin-egress-dns-proxy] Installing: [openshift/origin-egress-dns-proxy] haproxy18 x86_64 1.8.1-5.el7 cbs-paas7-openshift-multiarch-el7-build 1.2 M [openshift/origin-egress-dns-proxy] rsyslog x86_64 8.24.0-12.el7 base 605 k [openshift/origin-egress-dns-proxy] Installing for dependencies: [openshift/origin-egress-dns-proxy] libestr x86_64 0.1.9-2.el7 base 20 k [openshift/origin-egress-dns-proxy] libfastjson x86_64 0.99.4-2.el7 base 27 k [openshift/origin-egress-dns-proxy] logrotate x86_64 3.8.6-14.el7 base 69 k [openshift/origin-egress-dns-proxy] Transaction Summary [openshift/origin-egress-dns-proxy] ================================================================================ [openshift/origin-egress-dns-proxy] Install 2 Packages (+3 Dependent packages) [openshift/origin-egress-dns-proxy] Total download size: 1.9 M [openshift/origin-egress-dns-proxy] Installed size: 5.9 M [openshift/origin-egress-dns-proxy] Downloading packages: [openshift/origin-egress-dns-proxy] 
-------------------------------------------------------------------------------- [openshift/origin-egress-dns-proxy] Total 3.3 MB/s | 1.9 MB 00:00 [openshift/origin-egress-dns-proxy] Running transaction check [openshift/origin-egress-dns-proxy] Running transaction test [openshift/origin-egress-dns-proxy] Transaction test succeeded [openshift/origin-egress-dns-proxy] Running transaction [openshift/origin-egress-dns-proxy] Installing : logrotate-3.8.6-14.el7.x86_64 1/5 [openshift/origin-egress-dns-proxy] Installing : libestr-0.1.9-2.el7.x86_64 2/5 [openshift/origin-egress-dns-proxy] Installing : libfastjson-0.99.4-2.el7.x86_64 3/5 [openshift/origin-egress-dns-proxy] Installing : rsyslog-8.24.0-12.el7.x86_64 4/5 [openshift/origin-egress-dns-proxy] Installing : haproxy18-1.8.1-5.el7.x86_64 5/5 [openshift/origin-egress-dns-proxy] Verifying : libfastjson-0.99.4-2.el7.x86_64 1/5 [openshift/origin-egress-dns-proxy] Verifying : libestr-0.1.9-2.el7.x86_64 2/5 [openshift/origin-egress-dns-proxy] Verifying : rsyslog-8.24.0-12.el7.x86_64 3/5 [openshift/origin-egress-dns-proxy] Verifying : haproxy18-1.8.1-5.el7.x86_64 4/5 [openshift/origin-egress-dns-proxy] Verifying : logrotate-3.8.6-14.el7.x86_64 5/5 [openshift/origin-egress-dns-proxy] Installed: [openshift/origin-egress-dns-proxy] haproxy18.x86_64 0:1.8.1-5.el7 rsyslog.x86_64 0:8.24.0-12.el7 [openshift/origin-egress-dns-proxy] Dependency Installed: [openshift/origin-egress-dns-proxy] libestr.x86_64 0:0.1.9-2.el7 libfastjson.x86_64 0:0.99.4-2.el7 [openshift/origin-egress-dns-proxy] logrotate.x86_64 0:3.8.6-14.el7 [openshift/origin-egress-dns-proxy] Complete! [openshift/origin-egress-dns-proxy] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-dns-proxy] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-egress-dns-proxy] : centos-ceph-luminous extras updates [openshift/origin-egress-dns-proxy] Cleaning up everything [openshift/origin-egress-dns-proxy] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-egress-dns-proxy] Cleaning up list of fastest mirrors [openshift/origin-egress-dns-proxy] --> ADD egress-dns-proxy.sh /bin/egress-dns-proxy.sh [openshift/origin-egress-dns-proxy] --> ENTRYPOINT /bin/egress-dns-proxy.sh [openshift/origin-egress-dns-proxy] --> Committing changes to openshift/origin-egress-dns-proxy:4253ab3 ... 
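The openshift/origin-egress-dns-proxy image just built adds haproxy18 and rsyslog on top of openshift/origin-base and, in the same RUN step, prepares the haproxy directories so the proxy can run as a non-root, arbitrary UID: setcap grants the binary the capability to bind privileged ports, and group-0 ownership with group write follows the usual OpenShift convention for arbitrary-UID containers. A sketch of that step as echoed in the log (reconstructed for readability, not the authoritative Dockerfile):

    FROM openshift/origin-base
    # Install the proxy packages, then prepare directories and permissions for a non-root runtime user
    RUN INSTALL_PKGS="haproxy18 rsyslog" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && \
        mkdir -p /var/lib/haproxy/{run,log} && mkdir -p /etc/haproxy && \
        setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && \
        chown -R :0 /var/lib/haproxy && chmod -R g+w /var/lib/haproxy && \
        touch /etc/haproxy/haproxy.cfg
    ADD egress-dns-proxy.sh /bin/egress-dns-proxy.sh
    ENTRYPOINT /bin/egress-dns-proxy.sh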
[openshift/origin-egress-dns-proxy] --> Tagged as openshift/origin-egress-dns-proxy:latest [openshift/origin-egress-dns-proxy] --> Done [openshift/origin-egress-http-proxy] --> FROM openshift/origin-base [openshift/origin-egress-http-proxy] --> RUN INSTALL_PKGS="squid" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && rmdir /var/log/squid /var/spool/squid && rm -f /etc/squid/squid.conf [openshift/origin-egress-http-proxy] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-http-proxy] Determining fastest mirrors [openshift/origin-egress-http-proxy] * base: mirror.atlanticmetro.net [openshift/origin-egress-http-proxy] * extras: centos.aol.com [openshift/origin-egress-http-proxy] * updates: mirror.cs.pitt.edu [openshift/origin-egress-http-proxy] Resolving Dependencies [openshift/origin-egress-http-proxy] --> Running transaction check [openshift/origin-egress-http-proxy] ---> Package squid.x86_64 7:3.5.20-10.el7 will be installed [openshift/origin-egress-http-proxy] --> Processing Dependency: squid-migration-script for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Digest::MD5) for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Data::Dumper) for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(DBI) for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: libltdl.so.7()(64bit) for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: libecap.so.3()(64bit) for package: 7:squid-3.5.20-10.el7.x86_64 [openshift/origin-egress-http-proxy] --> Running transaction check [openshift/origin-egress-http-proxy] ---> Package libecap.x86_64 0:1.0.0-1.el7 will be installed [openshift/origin-egress-http-proxy] ---> Package libtool-ltdl.x86_64 0:2.4.2-22.el7_3 will be installed [openshift/origin-egress-http-proxy] ---> Package perl-DBI.x86_64 0:1.627-4.el7 will be installed [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(RPC::PlServer) >= 0.2001 for package: perl-DBI-1.627-4.el7.x86_64 [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(RPC::PlClient) >= 0.2000 for package: perl-DBI-1.627-4.el7.x86_64 [openshift/origin-egress-http-proxy] ---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed [openshift/origin-egress-http-proxy] ---> Package perl-Digest-MD5.x86_64 0:2.52-3.el7 will be installed [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Digest::base) >= 1.00 for package: perl-Digest-MD5-2.52-3.el7.x86_64 [openshift/origin-egress-http-proxy] ---> Package squid-migration-script.x86_64 7:3.5.20-10.el7 will be installed [openshift/origin-egress-http-proxy] --> Running transaction check [openshift/origin-egress-http-proxy] ---> Package perl-Digest.noarch 0:1.17-245.el7 will be installed [openshift/origin-egress-http-proxy] ---> Package perl-PlRPC.noarch 0:0.2020-14.el7 will be installed [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Net::Daemon) >= 0.13 for package: perl-PlRPC-0.2020-14.el7.noarch [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Net::Daemon::Test) for package: perl-PlRPC-0.2020-14.el7.noarch [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Net::Daemon::Log) for package: perl-PlRPC-0.2020-14.el7.noarch [openshift/origin-egress-http-proxy] --> Processing 
Dependency: perl(Compress::Zlib) for package: perl-PlRPC-0.2020-14.el7.noarch [openshift/origin-egress-http-proxy] --> Running transaction check [openshift/origin-egress-http-proxy] ---> Package perl-IO-Compress.noarch 0:2.061-2.el7 will be installed [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Compress::Raw::Zlib) >= 2.061 for package: perl-IO-Compress-2.061-2.el7.noarch [openshift/origin-egress-http-proxy] --> Processing Dependency: perl(Compress::Raw::Bzip2) >= 2.061 for package: perl-IO-Compress-2.061-2.el7.noarch [openshift/origin-egress-http-proxy] ---> Package perl-Net-Daemon.noarch 0:0.48-5.el7 will be installed [openshift/origin-egress-http-proxy] --> Running transaction check [openshift/origin-egress-http-proxy] ---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7 will be installed [openshift/origin-egress-http-proxy] ---> Package perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7 will be installed [openshift/origin-egress-http-proxy] --> Finished Dependency Resolution [openshift/origin-egress-http-proxy] Dependencies Resolved [openshift/origin-egress-http-proxy] ================================================================================ [openshift/origin-egress-http-proxy] Package Arch Version Repository [openshift/origin-egress-http-proxy] Size [openshift/origin-egress-http-proxy] ================================================================================ [openshift/origin-egress-http-proxy] Installing: [openshift/origin-egress-http-proxy] squid x86_64 7:3.5.20-10.el7 base 3.1 M [openshift/origin-egress-http-proxy] Installing for dependencies: [openshift/origin-egress-http-proxy] libecap x86_64 1.0.0-1.el7 base 21 k [openshift/origin-egress-http-proxy] libtool-ltdl x86_64 2.4.2-22.el7_3 base 49 k [openshift/origin-egress-http-proxy] perl-Compress-Raw-Bzip2 x86_64 2.061-3.el7 base 32 k [openshift/origin-egress-http-proxy] perl-Compress-Raw-Zlib x86_64 1:2.061-4.el7 base 57 k [openshift/origin-egress-http-proxy] perl-DBI x86_64 1.627-4.el7 base 802 k [openshift/origin-egress-http-proxy] perl-Data-Dumper x86_64 2.145-3.el7 base 47 k [openshift/origin-egress-http-proxy] perl-Digest noarch 1.17-245.el7 base 23 k [openshift/origin-egress-http-proxy] perl-Digest-MD5 x86_64 2.52-3.el7 base 30 k [openshift/origin-egress-http-proxy] perl-IO-Compress noarch 2.061-2.el7 base 260 k [openshift/origin-egress-http-proxy] perl-Net-Daemon noarch 0.48-5.el7 base 51 k [openshift/origin-egress-http-proxy] perl-PlRPC noarch 0.2020-14.el7 base 36 k [openshift/origin-egress-http-proxy] squid-migration-script x86_64 7:3.5.20-10.el7 base 48 k [openshift/origin-egress-http-proxy] Transaction Summary [openshift/origin-egress-http-proxy] ================================================================================ [openshift/origin-egress-http-proxy] Install 1 Package (+12 Dependent packages) [openshift/origin-egress-http-proxy] Total download size: 4.5 M [openshift/origin-egress-http-proxy] Installed size: 14 M [openshift/origin-egress-http-proxy] Downloading packages: [openshift/origin-egress-http-proxy] -------------------------------------------------------------------------------- [openshift/origin-egress-http-proxy] Total 5.7 MB/s | 4.5 MB 00:00 [openshift/origin-egress-http-proxy] Running transaction check [openshift/origin-egress-http-proxy] Running transaction test [openshift/origin-egress-http-proxy] Transaction test succeeded [openshift/origin-egress-http-proxy] Running transaction [openshift/origin-egress-http-proxy] Installing : 
perl-Data-Dumper-2.145-3.el7.x86_64 1/13 [openshift/origin-egress-http-proxy] Installing : perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64 2/13 [openshift/origin-egress-http-proxy] Installing : perl-Digest-1.17-245.el7.noarch 3/13 [openshift/origin-egress-http-proxy] Installing : perl-Digest-MD5-2.52-3.el7.x86_64 4/13 [openshift/origin-egress-http-proxy] Installing : 1:perl-Compress-Raw-Zlib-2.061-4.el7.x86_64 5/13 [openshift/origin-egress-http-proxy] Installing : perl-IO-Compress-2.061-2.el7.noarch 6/13 [openshift/origin-egress-http-proxy] Installing : libtool-ltdl-2.4.2-22.el7_3.x86_64 7/13 [openshift/origin-egress-http-proxy] Installing : 7:squid-migration-script-3.5.20-10.el7.x86_64 8/13 [openshift/origin-egress-http-proxy] Installing : libecap-1.0.0-1.el7.x86_64 9/13 [openshift/origin-egress-http-proxy] Installing : perl-Net-Daemon-0.48-5.el7.noarch 10/13 [openshift/origin-egress-http-proxy] Installing : perl-PlRPC-0.2020-14.el7.noarch 11/13 [openshift/origin-egress-http-proxy] Installing : perl-DBI-1.627-4.el7.x86_64 12/13 [openshift/origin-egress-http-proxy] Installing : 7:squid-3.5.20-10.el7.x86_64 13/13 [openshift/origin-egress-http-proxy] Verifying : perl-Net-Daemon-0.48-5.el7.noarch 1/13 [openshift/origin-egress-http-proxy] Verifying : perl-Data-Dumper-2.145-3.el7.x86_64 2/13 [openshift/origin-egress-http-proxy] Verifying : libecap-1.0.0-1.el7.x86_64 3/13 [openshift/origin-egress-http-proxy] Verifying : perl-Digest-MD5-2.52-3.el7.x86_64 4/13 [openshift/origin-egress-http-proxy] Verifying : 7:squid-migration-script-3.5.20-10.el7.x86_64 5/13 [openshift/origin-egress-http-proxy] Verifying : perl-IO-Compress-2.061-2.el7.noarch 6/13 [openshift/origin-egress-http-proxy] Verifying : libtool-ltdl-2.4.2-22.el7_3.x86_64 7/13 [openshift/origin-egress-http-proxy] Verifying : 1:perl-Compress-Raw-Zlib-2.061-4.el7.x86_64 8/13 [openshift/origin-egress-http-proxy] Verifying : perl-Digest-1.17-245.el7.noarch 9/13 [openshift/origin-egress-http-proxy] Verifying : perl-DBI-1.627-4.el7.x86_64 10/13 [openshift/origin-egress-http-proxy] Verifying : perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64 11/13 [openshift/origin-egress-http-proxy] Verifying : perl-PlRPC-0.2020-14.el7.noarch 12/13 [openshift/origin-egress-http-proxy] Verifying : 7:squid-3.5.20-10.el7.x86_64 13/13 [openshift/origin-egress-http-proxy] Installed: [openshift/origin-egress-http-proxy] squid.x86_64 7:3.5.20-10.el7 [openshift/origin-egress-http-proxy] Dependency Installed: [openshift/origin-egress-http-proxy] libecap.x86_64 0:1.0.0-1.el7 [openshift/origin-egress-http-proxy] libtool-ltdl.x86_64 0:2.4.2-22.el7_3 [openshift/origin-egress-http-proxy] perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7 [openshift/origin-egress-http-proxy] perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7 [openshift/origin-egress-http-proxy] perl-DBI.x86_64 0:1.627-4.el7 [openshift/origin-egress-http-proxy] perl-Data-Dumper.x86_64 0:2.145-3.el7 [openshift/origin-egress-http-proxy] perl-Digest.noarch 0:1.17-245.el7 [openshift/origin-egress-http-proxy] perl-Digest-MD5.x86_64 0:2.52-3.el7 [openshift/origin-egress-http-proxy] perl-IO-Compress.noarch 0:2.061-2.el7 [openshift/origin-egress-http-proxy] perl-Net-Daemon.noarch 0:0.48-5.el7 [openshift/origin-egress-http-proxy] perl-PlRPC.noarch 0:0.2020-14.el7 [openshift/origin-egress-http-proxy] squid-migration-script.x86_64 7:3.5.20-10.el7 [openshift/origin-egress-http-proxy] Complete! 
[openshift/origin-egress-http-proxy] Loaded plugins: fastestmirror, ovl [openshift/origin-egress-http-proxy] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-egress-http-proxy] : centos-ceph-luminous extras updates [openshift/origin-egress-http-proxy] Cleaning up everything [openshift/origin-egress-http-proxy] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-egress-http-proxy] Cleaning up list of fastest mirrors [openshift/origin-egress-http-proxy] --> ADD egress-http-proxy.sh /bin/egress-http-proxy.sh [openshift/origin-egress-http-proxy] --> ENTRYPOINT /bin/egress-http-proxy.sh [openshift/origin-egress-http-proxy] --> Committing changes to openshift/origin-egress-http-proxy:4253ab3 ... [openshift/origin-egress-http-proxy] --> Tagged as openshift/origin-egress-http-proxy:latest [openshift/origin-egress-http-proxy] --> Done [openshift/origin] --> FROM openshift/origin-base [openshift/origin] --> COPY system-container/system-container-wrapper.sh /usr/local/bin/ [openshift/origin] --> COPY system-container/config.json.template system-container/manifest.json system-container/service.template system-container/tmpfiles.template /exports/ [openshift/origin] --> RUN INSTALL_PKGS="origin" && yum --enablerepo=origin-local-release install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all && setcap 'cap_net_bind_service=ep' /usr/bin/openshift [openshift/origin] Loaded plugins: fastestmirror, ovl [openshift/origin] Determining fastest mirrors [openshift/origin] * base: mirror.atlanticmetro.net [openshift/origin] * extras: centos.aol.com [openshift/origin] * updates: mirror.cs.pitt.edu [openshift/origin] Resolving Dependencies [openshift/origin] --> Running transaction check [openshift/origin] ---> Package origin.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin] --> Processing Dependency: origin-clients = 3.10.0-0.alpha.0.549.4253ab3 for package: origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/origin] --> Running transaction check [openshift/origin] ---> Package origin-clients.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/origin] --> Processing Dependency: bash-completion for package: origin-clients-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/origin] --> Running transaction check [openshift/origin] ---> Package bash-completion.noarch 1:2.1-6.el7 will be installed [openshift/origin] --> Finished Dependency Resolution [openshift/origin] Dependencies Resolved [openshift/origin] ================================================================================ [openshift/origin] Package Arch Version Repository Size [openshift/origin] ================================================================================ [openshift/origin] Installing: [openshift/origin] origin x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 101 M [openshift/origin] Installing for dependencies: [openshift/origin] bash-completion noarch 1:2.1-6.el7 base 85 k [openshift/origin] origin-clients x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release 33 M [openshift/origin] Transaction Summary [openshift/origin] ================================================================================ [openshift/origin] Install 1 Package (+2 Dependent packages) [openshift/origin] Total download size: 134 M [openshift/origin] Installed size: 1.1 G [openshift/origin] Downloading packages: [openshift/origin] 
-------------------------------------------------------------------------------- [openshift/origin] Total 199 MB/s | 134 MB 00:00 [openshift/origin] Running transaction check [openshift/origin] Running transaction test [openshift/origin] Transaction test succeeded [openshift/origin] Running transaction [openshift/origin] Installing : 1:bash-completion-2.1-6.el7.noarch 1/3 [openshift/origin] Installing : origin-clients-3.10.0-0.alpha.0.549.4253ab3.x86_64 2/3 [openshift/origin] Installing : origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 3/3 [openshift/origin] Verifying : origin-3.10.0-0.alpha.0.549.4253ab3.x86_64 1/3 [openshift/origin] Verifying : origin-clients-3.10.0-0.alpha.0.549.4253ab3.x86_64 2/3 [openshift/origin] Verifying : 1:bash-completion-2.1-6.el7.noarch 3/3 [openshift/origin] Installed: [openshift/origin] origin.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin] Dependency Installed: [openshift/origin] bash-completion.noarch 1:2.1-6.el7 [openshift/origin] origin-clients.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/origin] Complete! [openshift/origin] Loaded plugins: fastestmirror, ovl [openshift/origin] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin] : centos-ceph-luminous extras updates [openshift/origin] Cleaning up everything [openshift/origin] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin] Cleaning up list of fastest mirrors [openshift/origin] --> LABEL io.k8s.display-name="OpenShift Origin Application Platform" io.k8s.description="OpenShift Origin is a platform for developing, building, and deploying containerized applications." io.openshift.tags="openshift,core" [openshift/origin] --> ENV HOME=/root OPENSHIFT_CONTAINERIZED=true KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig [openshift/origin] --> WORKDIR /var/lib/origin [openshift/origin] --> EXPOSE 8443 53 [openshift/origin] --> ENTRYPOINT ["/usr/bin/openshift"] [openshift/origin] --> Committing changes to openshift/origin:4253ab3 ... [openshift/origin] --> Tagged as openshift/origin:latest [openshift/origin] --> Done [openshift/origin-sti-builder] --> FROM openshift/origin [openshift/origin-sti-builder] --> LABEL io.k8s.display-name="OpenShift Origin S2I Builder" io.k8s.description="This is a component of OpenShift Origin and is responsible for executing source-to-image (s2i) image builds." io.openshift.tags="openshift,sti,builder" [openshift/origin-sti-builder] --> ENTRYPOINT ["/usr/bin/openshift-sti-build"] [openshift/origin-sti-builder] --> Committing changes to openshift/origin-sti-builder:4253ab3 ... [openshift/origin-sti-builder] --> Tagged as openshift/origin-sti-builder:latest [openshift/origin-sti-builder] --> Done [openshift/origin-deployer] --> FROM openshift/origin [openshift/origin-deployer] --> LABEL io.k8s.display-name="OpenShift Origin Deployer" io.k8s.description="This is a component of OpenShift Origin and executes the user deployment process to roll out new containers. It may be used as a base image for building your own custom deployer image." io.openshift.tags="openshift,deployer" [openshift/origin-deployer] --> USER 1001 [openshift/origin-deployer] --> ENTRYPOINT ["/usr/bin/openshift-deploy"] [openshift/origin-deployer] --> Committing changes to openshift/origin-deployer:4253ab3 ... 
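The openshift/origin image itself carries the bulk of the payload (the 1.1 G installed size comes from the origin and origin-clients RPMs) and sets the runtime environment, the 8443/53 ports, and the /usr/bin/openshift entrypoint shown above. The images that follow (sti-builder, deployer, and then f5-router, recycler, and docker-builder) are thin layers over it that change only labels, the user, and the entrypoint; no packages or files are added. A Dockerfile-style sketch of one such derived image, reconstructed from the echoed steps:

    FROM openshift/origin
    LABEL io.k8s.display-name="OpenShift Origin Deployer" \
          io.k8s.description="This is a component of OpenShift Origin and executes the user deployment process to roll out new containers. It may be used as a base image for building your own custom deployer image." \
          io.openshift.tags="openshift,deployer"
    # Only metadata, the user, and the entrypoint change relative to openshift/origin
    USER 1001
    ENTRYPOINT ["/usr/bin/openshift-deploy"]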
[openshift/origin-deployer] --> Tagged as openshift/origin-deployer:latest [openshift/origin-deployer] --> Done [openshift/origin-f5-router] --> FROM openshift/origin [openshift/origin-f5-router] --> LABEL io.k8s.display-name="OpenShift Origin F5 Router" io.k8s.description="This is a component of OpenShift Origin and programs a BigIP F5 router to expose services within the cluster." io.openshift.tags="openshift,router,f5" [openshift/origin-f5-router] --> ENTRYPOINT ["/usr/bin/openshift-f5-router"] [openshift/origin-f5-router] --> Committing changes to openshift/origin-f5-router:4253ab3 ... [openshift/origin-f5-router] --> Tagged as openshift/origin-f5-router:latest [openshift/origin-f5-router] --> Done [openshift/origin-recycler] --> FROM openshift/origin [openshift/origin-recycler] --> LABEL io.k8s.display-name="OpenShift Origin Volume Recycler" io.k8s.description="This is a component of OpenShift Origin and is used to prepare persistent volumes for reuse after they are deleted." io.openshift.tags="openshift,recycler" [openshift/origin-recycler] --> ENTRYPOINT ["/usr/bin/openshift-recycle"] [openshift/origin-recycler] --> Committing changes to openshift/origin-recycler:4253ab3 ... [openshift/origin-recycler] --> Tagged as openshift/origin-recycler:latest [openshift/origin-recycler] --> Done [openshift/origin-docker-builder] --> FROM openshift/origin [openshift/origin-docker-builder] --> LABEL io.k8s.display-name="OpenShift Origin Docker Builder" io.k8s.description="This is a component of OpenShift Origin and is responsible for executing Docker image builds." io.openshift.tags="openshift,builder" [openshift/origin-docker-builder] --> ENTRYPOINT ["/usr/bin/openshift-docker-build"] [openshift/origin-docker-builder] --> Committing changes to openshift/origin-docker-builder:4253ab3 ... 
[openshift/origin-docker-builder] --> Tagged as openshift/origin-docker-builder:latest [openshift/origin-docker-builder] --> Done [openshift/origin-haproxy-router] --> FROM openshift/origin [openshift/origin-haproxy-router] --> RUN INSTALL_PKGS="haproxy18" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && mkdir -p /var/lib/haproxy/router/{certs,cacerts} && mkdir -p /var/lib/haproxy/{conf,run,bin,log} && touch /var/lib/haproxy/conf/{{os_http_be,os_edge_reencrypt_be,os_tcp_be,os_sni_passthrough,os_route_http_redirect,cert_config,os_wildcard_domain}.map,haproxy.config} && setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && chown -R :0 /var/lib/haproxy && chmod -R g+w /var/lib/haproxy [openshift/origin-haproxy-router] Loaded plugins: fastestmirror, ovl [openshift/origin-haproxy-router] Determining fastest mirrors [openshift/origin-haproxy-router] * base: mirror.atlanticmetro.net [openshift/origin-haproxy-router] * extras: centos.aol.com [openshift/origin-haproxy-router] * updates: mirror.cs.pitt.edu [openshift/origin-haproxy-router] Resolving Dependencies [openshift/origin-haproxy-router] --> Running transaction check [openshift/origin-haproxy-router] ---> Package haproxy18.x86_64 0:1.8.1-5.el7 will be installed [openshift/origin-haproxy-router] --> Finished Dependency Resolution [openshift/origin-haproxy-router] Dependencies Resolved [openshift/origin-haproxy-router] ================================================================================ [openshift/origin-haproxy-router] Package Arch Version Repository Size [openshift/origin-haproxy-router] ================================================================================ [openshift/origin-haproxy-router] Installing: [openshift/origin-haproxy-router] haproxy18 x86_64 1.8.1-5.el7 cbs-paas7-openshift-multiarch-el7-build 1.2 M [openshift/origin-haproxy-router] Transaction Summary [openshift/origin-haproxy-router] ================================================================================ [openshift/origin-haproxy-router] Install 1 Package [openshift/origin-haproxy-router] Total download size: 1.2 M [openshift/origin-haproxy-router] Installed size: 3.8 M [openshift/origin-haproxy-router] Downloading packages: [openshift/origin-haproxy-router] Running transaction check [openshift/origin-haproxy-router] Running transaction test [openshift/origin-haproxy-router] Transaction test succeeded [openshift/origin-haproxy-router] Running transaction [openshift/origin-haproxy-router] Installing : haproxy18-1.8.1-5.el7.x86_64 1/1 [openshift/origin-haproxy-router] Verifying : haproxy18-1.8.1-5.el7.x86_64 1/1 [openshift/origin-haproxy-router] Installed: [openshift/origin-haproxy-router] haproxy18.x86_64 0:1.8.1-5.el7 [openshift/origin-haproxy-router] Complete! [openshift/origin-haproxy-router] Loaded plugins: fastestmirror, ovl [openshift/origin-haproxy-router] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-haproxy-router] : centos-ceph-luminous extras updates [openshift/origin-haproxy-router] Cleaning up everything [openshift/origin-haproxy-router] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-haproxy-router] Cleaning up list of fastest mirrors [openshift/origin-haproxy-router] --> COPY . 
/var/lib/haproxy/ [openshift/origin-haproxy-router] --> LABEL io.k8s.display-name="OpenShift Origin HAProxy Router" io.k8s.description="This is a component of OpenShift Origin and contains an HAProxy instance that automatically exposes services within the cluster through routes, and offers TLS termination, reencryption, or SNI-passthrough on ports 80 and 443." io.openshift.tags="openshift,router,haproxy" [openshift/origin-haproxy-router] --> USER 1001 [openshift/origin-haproxy-router] --> EXPOSE 80 443 [openshift/origin-haproxy-router] --> WORKDIR /var/lib/haproxy/conf [openshift/origin-haproxy-router] --> ENV TEMPLATE_FILE=/var/lib/haproxy/conf/haproxy-config.template RELOAD_SCRIPT=/var/lib/haproxy/reload-haproxy [openshift/origin-haproxy-router] --> ENTRYPOINT ["/usr/bin/openshift-router"] [openshift/origin-haproxy-router] --> Committing changes to openshift/origin-haproxy-router:4253ab3 ... [openshift/origin-haproxy-router] --> Tagged as openshift/origin-haproxy-router:latest [openshift/origin-haproxy-router] --> Done [openshift/origin-keepalived-ipfailover] --> FROM openshift/origin [openshift/origin-keepalived-ipfailover] --> RUN INSTALL_PKGS="kmod keepalived iproute psmisc nmap-ncat net-tools" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all [openshift/origin-keepalived-ipfailover] Loaded plugins: fastestmirror, ovl [openshift/origin-keepalived-ipfailover] Determining fastest mirrors [openshift/origin-keepalived-ipfailover] * base: mirror.math.princeton.edu [openshift/origin-keepalived-ipfailover] * extras: mirror.teklinks.com [openshift/origin-keepalived-ipfailover] * updates: mirror.datto.com [openshift/origin-keepalived-ipfailover] Package kmod-20-15.el7_4.7.x86_64 already installed and latest version [openshift/origin-keepalived-ipfailover] Package iproute-3.10.0-87.el7.x86_64 already installed and latest version [openshift/origin-keepalived-ipfailover] Package 2:nmap-ncat-6.40-7.el7.x86_64 already installed and latest version [openshift/origin-keepalived-ipfailover] Resolving Dependencies [openshift/origin-keepalived-ipfailover] --> Running transaction check [openshift/origin-keepalived-ipfailover] ---> Package keepalived.x86_64 0:1.3.5-1.el7 will be installed [openshift/origin-keepalived-ipfailover] --> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 [openshift/origin-keepalived-ipfailover] --> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 [openshift/origin-keepalived-ipfailover] --> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 [openshift/origin-keepalived-ipfailover] ---> Package net-tools.x86_64 0:2.0-0.22.20131004git.el7 will be installed [openshift/origin-keepalived-ipfailover] ---> Package psmisc.x86_64 0:22.20-15.el7 will be installed [openshift/origin-keepalived-ipfailover] --> Running transaction check [openshift/origin-keepalived-ipfailover] ---> Package net-snmp-agent-libs.x86_64 1:5.7.2-28.el7_4.1 will be installed [openshift/origin-keepalived-ipfailover] --> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-28.el7_4.1.x86_64 [openshift/origin-keepalived-ipfailover] ---> Package net-snmp-libs.x86_64 1:5.7.2-28.el7_4.1 will be installed [openshift/origin-keepalived-ipfailover] --> Running transaction check [openshift/origin-keepalived-ipfailover] ---> Package lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 will be installed 
[openshift/origin-keepalived-ipfailover] --> Finished Dependency Resolution [openshift/origin-keepalived-ipfailover] Dependencies Resolved [openshift/origin-keepalived-ipfailover] ================================================================================ [openshift/origin-keepalived-ipfailover] Package Arch Version Repository [openshift/origin-keepalived-ipfailover] Size [openshift/origin-keepalived-ipfailover] ================================================================================ [openshift/origin-keepalived-ipfailover] Installing: [openshift/origin-keepalived-ipfailover] keepalived x86_64 1.3.5-1.el7 base 327 k [openshift/origin-keepalived-ipfailover] net-tools x86_64 2.0-0.22.20131004git.el7 base 305 k [openshift/origin-keepalived-ipfailover] psmisc x86_64 22.20-15.el7 base 141 k [openshift/origin-keepalived-ipfailover] Installing for dependencies: [openshift/origin-keepalived-ipfailover] lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 base 41 k [openshift/origin-keepalived-ipfailover] net-snmp-agent-libs x86_64 1:5.7.2-28.el7_4.1 updates 704 k [openshift/origin-keepalived-ipfailover] net-snmp-libs x86_64 1:5.7.2-28.el7_4.1 updates 748 k [openshift/origin-keepalived-ipfailover] Transaction Summary [openshift/origin-keepalived-ipfailover] ================================================================================ [openshift/origin-keepalived-ipfailover] Install 3 Packages (+3 Dependent packages) [openshift/origin-keepalived-ipfailover] Total download size: 2.2 M [openshift/origin-keepalived-ipfailover] Installed size: 7.4 M [openshift/origin-keepalived-ipfailover] Downloading packages: [openshift/origin-keepalived-ipfailover] -------------------------------------------------------------------------------- [openshift/origin-keepalived-ipfailover] Total 3.5 MB/s | 2.2 MB 00:00 [openshift/origin-keepalived-ipfailover] Running transaction check [openshift/origin-keepalived-ipfailover] Running transaction test [openshift/origin-keepalived-ipfailover] Transaction test succeeded [openshift/origin-keepalived-ipfailover] Running transaction [openshift/origin-keepalived-ipfailover] Installing : 1:net-snmp-libs-5.7.2-28.el7_4.1.x86_64 1/6 [openshift/origin-keepalived-ipfailover] Installing : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 2/6 [openshift/origin-keepalived-ipfailover] Installing : 1:net-snmp-agent-libs-5.7.2-28.el7_4.1.x86_64 3/6 [openshift/origin-keepalived-ipfailover] Installing : keepalived-1.3.5-1.el7.x86_64 4/6 [openshift/origin-keepalived-ipfailover] Installing : psmisc-22.20-15.el7.x86_64 5/6 [openshift/origin-keepalived-ipfailover] Installing : net-tools-2.0-0.22.20131004git.el7.x86_64 6/6 [openshift/origin-keepalived-ipfailover] Verifying : net-tools-2.0-0.22.20131004git.el7.x86_64 1/6 [openshift/origin-keepalived-ipfailover] Verifying : 1:net-snmp-libs-5.7.2-28.el7_4.1.x86_64 2/6 [openshift/origin-keepalived-ipfailover] Verifying : keepalived-1.3.5-1.el7.x86_64 3/6 [openshift/origin-keepalived-ipfailover] Verifying : 1:net-snmp-agent-libs-5.7.2-28.el7_4.1.x86_64 4/6 [openshift/origin-keepalived-ipfailover] Verifying : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 5/6 [openshift/origin-keepalived-ipfailover] Verifying : psmisc-22.20-15.el7.x86_64 6/6 [openshift/origin-keepalived-ipfailover] Installed: [openshift/origin-keepalived-ipfailover] keepalived.x86_64 0:1.3.5-1.el7 net-tools.x86_64 0:2.0-0.22.20131004git.el7 [openshift/origin-keepalived-ipfailover] psmisc.x86_64 0:22.20-15.el7 [openshift/origin-keepalived-ipfailover] 
Dependency Installed: [openshift/origin-keepalived-ipfailover] lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 [openshift/origin-keepalived-ipfailover] net-snmp-agent-libs.x86_64 1:5.7.2-28.el7_4.1 [openshift/origin-keepalived-ipfailover] net-snmp-libs.x86_64 1:5.7.2-28.el7_4.1 [openshift/origin-keepalived-ipfailover] Complete! [openshift/origin-keepalived-ipfailover] Loaded plugins: fastestmirror, ovl [openshift/origin-keepalived-ipfailover] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/origin-keepalived-ipfailover] : centos-ceph-luminous extras updates [openshift/origin-keepalived-ipfailover] Cleaning up everything [openshift/origin-keepalived-ipfailover] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/origin-keepalived-ipfailover] Cleaning up list of fastest mirrors [openshift/origin-keepalived-ipfailover] --> COPY . /var/lib/ipfailover/keepalived/ [openshift/origin-keepalived-ipfailover] --> LABEL io.k8s.display-name="OpenShift Origin IP Failover" io.k8s.description="This is a component of OpenShift Origin and runs a clustered keepalived instance across multiple hosts to allow highly available IP addresses." io.openshift.tags="openshift,ha,ip,failover" [openshift/origin-keepalived-ipfailover] --> EXPOSE 1985 [openshift/origin-keepalived-ipfailover] --> WORKDIR /var/lib/ipfailover [openshift/origin-keepalived-ipfailover] --> ENTRYPOINT ["/var/lib/ipfailover/keepalived/monitor.sh"] [openshift/origin-keepalived-ipfailover] --> Committing changes to openshift/origin-keepalived-ipfailover:4253ab3 ... [openshift/origin-keepalived-ipfailover] --> Tagged as openshift/origin-keepalived-ipfailover:latest [openshift/origin-keepalived-ipfailover] --> Done [openshift/node] --> FROM openshift/origin [openshift/node] --> COPY scripts/* /usr/local/bin/ [openshift/node] --> COPY system-container/system-container-wrapper.sh /usr/local/bin/ [openshift/node] --> COPY system-container/manifest.json system-container/config.json.template system-container/service.template system-container/tmpfiles.template /exports/ [openshift/node] --> RUN INSTALL_PKGS="origin-sdn-ovs libmnl libnetfilter_conntrack conntrack-tools openvswitch libnfnetlink iptables iproute bridge-utils procps-ng ethtool socat openssl binutils xz kmod-libs kmod sysvinit-tools device-mapper-libs dbus iscsi-initiator-utils bind-utils" && yum --enablerepo=origin-local-release install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && mkdir -p /usr/lib/systemd/system/origin-node.service.d /usr/lib/systemd/system/docker.service.d [openshift/node] Loaded plugins: fastestmirror, ovl [openshift/node] Determining fastest mirrors [openshift/node] * base: mirror.atlanticmetro.net [openshift/node] * extras: centos.aol.com [openshift/node] * updates: mirror.cs.pitt.edu [openshift/node] Package libmnl-1.0.3-7.el7.x86_64 already installed and latest version [openshift/node] Package libnetfilter_conntrack-1.0.6-1.el7_3.x86_64 already installed and latest version [openshift/node] Package libnfnetlink-1.0.1-4.el7.x86_64 already installed and latest version [openshift/node] Package iptables-1.4.21-18.3.el7_4.x86_64 already installed and latest version [openshift/node] Package iproute-3.10.0-87.el7.x86_64 already installed and latest version [openshift/node] Package procps-ng-3.3.10-16.el7.x86_64 already installed and latest version [openshift/node] Package 2:ethtool-4.8-1.el7.x86_64 already installed and latest version [openshift/node] Package 
socat-1.7.3.2-2.el7.x86_64 already installed and latest version [openshift/node] Package binutils-2.25.1-32.base.el7_4.2.x86_64 already installed and latest version [openshift/node] Package xz-5.2.2-1.el7.x86_64 already installed and latest version [openshift/node] Package kmod-libs-20-15.el7_4.7.x86_64 already installed and latest version [openshift/node] Package kmod-20-15.el7_4.7.x86_64 already installed and latest version [openshift/node] Package sysvinit-tools-2.88-14.dsf.el7.x86_64 already installed and latest version [openshift/node] Package 7:device-mapper-libs-1.02.140-8.el7.x86_64 already installed and latest version [openshift/node] Package 1:dbus-1.6.12-17.el7.x86_64 already installed and latest version [openshift/node] Resolving Dependencies [openshift/node] --> Running transaction check [openshift/node] ---> Package bind-utils.x86_64 32:9.9.4-51.el7_4.2 will be installed [openshift/node] --> Processing Dependency: bind-libs = 32:9.9.4-51.el7_4.2 for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: liblwres.so.90()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libisccfg.so.90()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libisccc.so.90()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libisc.so.95()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libdns.so.100()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libbind9.so.90()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libGeoIP.so.1()(64bit) for package: 32:bind-utils-9.9.4-51.el7_4.2.x86_64 [openshift/node] ---> Package bridge-utils.x86_64 0:1.5-9.el7 will be installed [openshift/node] ---> Package conntrack-tools.x86_64 0:1.4.4-3.el7_3 will be installed [openshift/node] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-3.el7_3.x86_64 [openshift/node] ---> Package iscsi-initiator-utils.x86_64 0:6.2.0.874-4.el7 will be installed [openshift/node] --> Processing Dependency: iscsi-initiator-utils-iscsiuio >= 6.2.0.874-4.el7 for package: iscsi-initiator-utils-6.2.0.874-4.el7.x86_64 [openshift/node] ---> Package openssl.x86_64 1:1.0.2k-8.el7 will be installed [openshift/node] --> Processing Dependency: make for package: 1:openssl-1.0.2k-8.el7.x86_64 [openshift/node] ---> Package openvswitch.x86_64 0:2.7.0-1.el7 will be installed [openshift/node] ---> Package origin-sdn-ovs.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/node] --> Processing Dependency: origin-node = 
3.10.0-0.alpha.0.549.4253ab3 for package: origin-sdn-ovs-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/node] --> Running transaction check [openshift/node] ---> Package GeoIP.x86_64 0:1.5.0-11.el7 will be installed [openshift/node] ---> Package bind-libs.x86_64 32:9.9.4-51.el7_4.2 will be installed [openshift/node] ---> Package iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-4.el7 will be installed [openshift/node] ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed [openshift/node] ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed [openshift/node] ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed [openshift/node] ---> Package make.x86_64 1:3.82-23.el7 will be installed [openshift/node] ---> Package origin-node.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 will be installed [openshift/node] --> Processing Dependency: docker >= 1.12 for package: origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/node] --> Processing Dependency: nfs-utils for package: origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/node] --> Processing Dependency: cifs-utils for package: origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 [openshift/node] --> Running transaction check [openshift/node] ---> Package cifs-utils.x86_64 0:6.2-10.el7 will be installed [openshift/node] --> Processing Dependency: libwbclient.so.0(WBCLIENT_0.9)(64bit) for package: cifs-utils-6.2-10.el7.x86_64 [openshift/node] --> Processing Dependency: libtalloc.so.2(TALLOC_2.0.2)(64bit) for package: cifs-utils-6.2-10.el7.x86_64 [openshift/node] --> Processing Dependency: keyutils for package: cifs-utils-6.2-10.el7.x86_64 [openshift/node] --> Processing Dependency: libwbclient.so.0()(64bit) for package: cifs-utils-6.2-10.el7.x86_64 [openshift/node] --> Processing Dependency: libtalloc.so.2()(64bit) for package: cifs-utils-6.2-10.el7.x86_64 [openshift/node] ---> Package docker.x86_64 2:1.13.1-53.git774336d.el7.centos will be installed [openshift/node] --> Processing Dependency: docker-common = 2:1.13.1-53.git774336d.el7.centos for package: 2:docker-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: docker-client = 2:1.13.1-53.git774336d.el7.centos for package: 2:docker-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: libseccomp.so.2()(64bit) for package: 2:docker-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] ---> Package nfs-utils.x86_64 1:1.3.0-0.48.el7_4.2 will be installed [openshift/node] --> Processing Dependency: libtirpc >= 0.2.4-0.7 for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: gssproxy >= 0.7.0-3 for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: rpcbind for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: quota for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libnfsidmap for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libevent for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libtirpc.so.1()(64bit) for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libnfsidmap.so.0()(64bit) for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Processing Dependency: libevent-2.0.so.5()(64bit) for package: 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 [openshift/node] --> Running 
transaction check [openshift/node] ---> Package docker-client.x86_64 2:1.13.1-53.git774336d.el7.centos will be installed [openshift/node] ---> Package docker-common.x86_64 2:1.13.1-53.git774336d.el7.centos will be installed [openshift/node] --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: oci-umount >= 2:2.0.0-1 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: lvm2 >= 2.02.112 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: container-storage-setup >= 0.7.0-1 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] --> Processing Dependency: container-selinux >= 2:2.21-2 for package: 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 [openshift/node] ---> Package gssproxy.x86_64 0:0.7.0-4.el7 will be installed [openshift/node] --> Processing Dependency: libverto-module-base for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libref_array.so.1(REF_ARRAY_0.1.1)(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libini_config.so.3(INI_CONFIG_1.2.0)(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libini_config.so.3(INI_CONFIG_1.1.0)(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libref_array.so.1()(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libini_config.so.3()(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libcollection.so.2()(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] --> Processing Dependency: libbasicobjects.so.0()(64bit) for package: gssproxy-0.7.0-4.el7.x86_64 [openshift/node] ---> Package keyutils.x86_64 0:1.5.8-3.el7 will be installed [openshift/node] ---> Package libevent.x86_64 0:2.0.21-4.el7 will be installed [openshift/node] ---> Package libnfsidmap.x86_64 0:0.25-17.el7 will be installed [openshift/node] ---> Package libseccomp.x86_64 0:2.3.1-3.el7 will be installed [openshift/node] ---> Package libtalloc.x86_64 0:2.1.9-1.el7 will be installed [openshift/node] ---> Package libtirpc.x86_64 0:0.2.4-0.10.el7 will be installed [openshift/node] ---> Package libwbclient.x86_64 0:4.6.2-12.el7_4 will be installed [openshift/node] --> Processing Dependency: samba-client-libs = 4.6.2-12.el7_4 for package: libwbclient-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libreplace-samba4.so(SAMBA_4.6.2)(64bit) for package: libwbclient-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libreplace-samba4.so()(64bit) for package: libwbclient-4.6.2-12.el7_4.x86_64 [openshift/node] ---> Package quota.x86_64 1:4.01-14.el7 will be installed [openshift/node] --> Processing Dependency: quota-nls = 1:4.01-14.el7 for package: 1:quota-4.01-14.el7.x86_64 [openshift/node] --> Processing Dependency: tcp_wrappers for package: 1:quota-4.01-14.el7.x86_64 [openshift/node] ---> Package rpcbind.x86_64 0:0.2.0-42.el7 will be installed [openshift/node] --> 
Processing Dependency: systemd-sysv for package: rpcbind-0.2.0-42.el7.x86_64 [openshift/node] --> Running transaction check [openshift/node] ---> Package container-selinux.noarch 2:2.42-1.gitad8f0f7.el7 will be installed [openshift/node] --> Processing Dependency: selinux-policy-targeted >= 3.13.1-39 for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] --> Processing Dependency: selinux-policy-base >= 3.13.1-39 for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] --> Processing Dependency: selinux-policy >= 3.13.1-39 for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] --> Processing Dependency: policycoreutils >= 2.5-11 for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] --> Processing Dependency: libselinux-utils for package: 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch [openshift/node] ---> Package container-storage-setup.noarch 0:0.8.0-3.git1d27ecf.el7 will be installed [openshift/node] --> Processing Dependency: parted for package: container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch [openshift/node] ---> Package libbasicobjects.x86_64 0:0.1.1-27.el7 will be installed [openshift/node] ---> Package libcollection.x86_64 0:0.6.2-27.el7 will be installed [openshift/node] ---> Package libini_config.x86_64 0:1.3.0-27.el7 will be installed [openshift/node] --> Processing Dependency: libpath_utils.so.1(PATH_UTILS_0.2.1)(64bit) for package: libini_config-1.3.0-27.el7.x86_64 [openshift/node] --> Processing Dependency: libpath_utils.so.1()(64bit) for package: libini_config-1.3.0-27.el7.x86_64 [openshift/node] ---> Package libref_array.x86_64 0:0.1.5-27.el7 will be installed [openshift/node] ---> Package libverto-libevent.x86_64 0:0.2.5-4.el7 will be installed [openshift/node] ---> Package lvm2.x86_64 7:2.02.171-8.el7 will be installed [openshift/node] --> Processing Dependency: lvm2-libs = 7:2.02.171-8.el7 for package: 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] --> Processing Dependency: liblvm2app.so.2.2(Base)(64bit) for package: 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] --> Processing Dependency: libdevmapper-event.so.1.02(Base)(64bit) for package: 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] --> Processing Dependency: liblvm2app.so.2.2()(64bit) for package: 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] --> Processing Dependency: libdevmapper-event.so.1.02()(64bit) for package: 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed [openshift/node] ---> Package oci-systemd-hook.x86_64 1:0.1.15-2.gitc04483d.el7 will be installed [openshift/node] --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.15-2.gitc04483d.el7.x86_64 [openshift/node] ---> Package oci-umount.x86_64 2:2.3.3-3.gite3c9055.el7 will be installed [openshift/node] ---> Package quota-nls.noarch 1:4.01-14.el7 will be installed [openshift/node] ---> Package samba-client-libs.x86_64 0:4.6.2-12.el7_4 will be installed [openshift/node] --> Processing Dependency: samba-common = 4.6.2-12.el7_4 for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: samba-common = 4.6.2-12.el7_4 for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.9)(64bit) for package: 
samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.31)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.30)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.21)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.20)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.16)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.14)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.13)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0(TEVENT_0.9.12)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1(TDB_1.3.11)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1(TDB_1.3.0)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1(TDB_1.2.5)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1(TDB_1.2.2)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1(TDB_1.2.1)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1(LDB_1.1.19)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1(LDB_1.1.1)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1(LDB_0.9.23)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1(LDB_0.9.15)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1(LDB_0.9.10)(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtevent.so.0()(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libtdb.so.1()(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libldb.so.1()(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] --> Processing Dependency: libcups.so.2()(64bit) for package: samba-client-libs-4.6.2-12.el7_4.x86_64 [openshift/node] ---> Package skopeo-containers.x86_64 1:0.1.28-1.git0270e56.el7 will be installed [openshift/node] ---> Package systemd-sysv.x86_64 0:219-42.el7_4.10 will be installed [openshift/node] --> Processing Dependency: systemd = 219-42.el7_4.10 for package: systemd-sysv-219-42.el7_4.10.x86_64 [openshift/node] ---> Package tcp_wrappers.x86_64 0:7.6-77.el7 will be installed [openshift/node] --> Running transaction check [openshift/node] ---> Package cups-libs.x86_64 1:1.6.3-29.el7 will be installed [openshift/node] --> Processing Dependency: libavahi-common.so.3()(64bit) for package: 1:cups-libs-1.6.3-29.el7.x86_64 [openshift/node] --> Processing Dependency: 
libavahi-client.so.3()(64bit) for package: 1:cups-libs-1.6.3-29.el7.x86_64 [openshift/node] ---> Package device-mapper-event-libs.x86_64 7:1.02.140-8.el7 will be installed [openshift/node] ---> Package libldb.x86_64 0:1.1.29-1.el7 will be installed [openshift/node] ---> Package libpath_utils.x86_64 0:0.2.1-27.el7 will be installed [openshift/node] ---> Package libselinux-utils.x86_64 0:2.5-11.el7 will be installed [openshift/node] ---> Package libtdb.x86_64 0:1.3.12-2.el7 will be installed [openshift/node] ---> Package libtevent.x86_64 0:0.9.31-2.el7_4 will be installed [openshift/node] ---> Package lvm2-libs.x86_64 7:2.02.171-8.el7 will be installed [openshift/node] --> Processing Dependency: device-mapper-event = 7:1.02.140-8.el7 for package: 7:lvm2-libs-2.02.171-8.el7.x86_64 [openshift/node] ---> Package parted.x86_64 0:3.1-28.el7 will be installed [openshift/node] ---> Package policycoreutils.x86_64 0:2.5-17.1.el7 will be installed [openshift/node] ---> Package policycoreutils-python.x86_64 0:2.5-17.1.el7 will be installed [openshift/node] --> Processing Dependency: setools-libs >= 3.3.8-1 for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libsemanage-python >= 2.5-5 for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libselinux-python for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-17.1.el7.x86_64 [openshift/node] ---> Package samba-common.noarch 0:4.6.2-12.el7_4 will be installed [openshift/node] ---> Package selinux-policy.noarch 0:3.13.1-166.el7_4.9 will be installed [openshift/node] ---> Package selinux-policy-targeted.noarch 0:3.13.1-166.el7_4.9 will be installed [openshift/node] ---> Package systemd.x86_64 0:219-42.el7_4.7 will be updated [openshift/node] ---> Package systemd.x86_64 0:219-42.el7_4.10 will be an update [openshift/node] --> Processing Dependency: systemd-libs = 219-42.el7_4.10 for package: systemd-219-42.el7_4.10.x86_64 [openshift/node] ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed [openshift/node] --> Running transaction check [openshift/node] ---> Package audit-libs-python.x86_64 0:2.7.6-3.el7 will be installed [openshift/node] ---> Package avahi-libs.x86_64 0:0.6.31-17.el7 will be installed [openshift/node] ---> Package checkpolicy.x86_64 0:2.5-4.el7 will be installed [openshift/node] ---> Package device-mapper-event.x86_64 7:1.02.140-8.el7 will be installed [openshift/node] ---> Package 
libcgroup.x86_64 0:0.41-13.el7 will be installed [openshift/node] ---> Package libselinux-python.x86_64 0:2.5-11.el7 will be installed [openshift/node] ---> Package libsemanage-python.x86_64 0:2.5-8.el7 will be installed [openshift/node] ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed [openshift/node] ---> Package setools-libs.x86_64 0:3.3.8-1.1.el7 will be installed [openshift/node] ---> Package systemd-libs.x86_64 0:219-42.el7_4.7 will be updated [openshift/node] ---> Package systemd-libs.x86_64 0:219-42.el7_4.10 will be an update [openshift/node] --> Finished Dependency Resolution [openshift/node] Dependencies Resolved [openshift/node] ================================================================================ [openshift/node] Package Arch Version Repository [openshift/node] Size [openshift/node] ================================================================================ [openshift/node] Installing: [openshift/node] bind-utils x86_64 32:9.9.4-51.el7_4.2 updates 203 k [openshift/node] bridge-utils x86_64 1.5-9.el7 base 32 k [openshift/node] conntrack-tools x86_64 1.4.4-3.el7_3 base 186 k [openshift/node] iscsi-initiator-utils x86_64 6.2.0.874-4.el7 base 420 k [openshift/node] openssl x86_64 1:1.0.2k-8.el7 base 492 k [openshift/node] openvswitch x86_64 2.7.0-1.el7 cbs-paas7-openshift-multiarch-el7-build [openshift/node] 2.5 M [openshift/node] origin-sdn-ovs x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release [openshift/node] 3.6 M [openshift/node] Installing for dependencies: [openshift/node] GeoIP x86_64 1.5.0-11.el7 base 1.1 M [openshift/node] audit-libs-python x86_64 2.7.6-3.el7 base 73 k [openshift/node] avahi-libs x86_64 0.6.31-17.el7 base 61 k [openshift/node] bind-libs x86_64 32:9.9.4-51.el7_4.2 updates 1.0 M [openshift/node] checkpolicy x86_64 2.5-4.el7 base 290 k [openshift/node] cifs-utils x86_64 6.2-10.el7 base 85 k [openshift/node] container-selinux noarch 2:2.42-1.gitad8f0f7.el7 extras 32 k [openshift/node] container-storage-setup noarch 0.8.0-3.git1d27ecf.el7 extras 33 k [openshift/node] cups-libs x86_64 1:1.6.3-29.el7 base 356 k [openshift/node] device-mapper-event x86_64 7:1.02.140-8.el7 base 180 k [openshift/node] device-mapper-event-libs [openshift/node] x86_64 7:1.02.140-8.el7 base 179 k [openshift/node] docker x86_64 2:1.13.1-53.git774336d.el7.centos extras 16 M [openshift/node] docker-client x86_64 2:1.13.1-53.git774336d.el7.centos extras 3.7 M [openshift/node] docker-common x86_64 2:1.13.1-53.git774336d.el7.centos extras 86 k [openshift/node] gssproxy x86_64 0.7.0-4.el7 base 105 k [openshift/node] iscsi-initiator-utils-iscsiuio [openshift/node] x86_64 6.2.0.874-4.el7 base 90 k [openshift/node] keyutils x86_64 1.5.8-3.el7 base 54 k [openshift/node] libbasicobjects x86_64 0.1.1-27.el7 base 25 k [openshift/node] libcgroup x86_64 0.41-13.el7 base 65 k [openshift/node] libcollection x86_64 0.6.2-27.el7 base 41 k [openshift/node] libevent x86_64 2.0.21-4.el7 base 214 k [openshift/node] libini_config x86_64 1.3.0-27.el7 base 63 k [openshift/node] libldb x86_64 1.1.29-1.el7 base 128 k [openshift/node] libnetfilter_cthelper x86_64 1.0.0-9.el7 base 18 k [openshift/node] libnetfilter_cttimeout x86_64 1.0.0-6.el7 base 18 k [openshift/node] libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k [openshift/node] libnfsidmap x86_64 0.25-17.el7 base 49 k [openshift/node] libpath_utils x86_64 0.2.1-27.el7 base 27 k [openshift/node] libref_array x86_64 0.1.5-27.el7 base 26 k [openshift/node] libseccomp x86_64 2.3.1-3.el7 base 56 k [openshift/node] 
libselinux-python x86_64 2.5-11.el7 base 234 k [openshift/node] libselinux-utils x86_64 2.5-11.el7 base 151 k [openshift/node] libsemanage-python x86_64 2.5-8.el7 base 104 k [openshift/node] libtalloc x86_64 2.1.9-1.el7 base 33 k [openshift/node] libtdb x86_64 1.3.12-2.el7 base 47 k [openshift/node] libtevent x86_64 0.9.31-2.el7_4 updates 36 k [openshift/node] libtirpc x86_64 0.2.4-0.10.el7 base 88 k [openshift/node] libverto-libevent x86_64 0.2.5-4.el7 base 8.9 k [openshift/node] libwbclient x86_64 4.6.2-12.el7_4 updates 104 k [openshift/node] lvm2 x86_64 7:2.02.171-8.el7 base 1.3 M [openshift/node] lvm2-libs x86_64 7:2.02.171-8.el7 base 1.0 M [openshift/node] make x86_64 1:3.82-23.el7 base 420 k [openshift/node] nfs-utils x86_64 1:1.3.0-0.48.el7_4.2 updates 399 k [openshift/node] oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M [openshift/node] oci-systemd-hook x86_64 1:0.1.15-2.gitc04483d.el7 extras 33 k [openshift/node] oci-umount x86_64 2:2.3.3-3.gite3c9055.el7 extras 32 k [openshift/node] origin-node x86_64 3.10.0-0.alpha.0.549.4253ab3 origin-local-release [openshift/node] 7.0 k [openshift/node] parted x86_64 3.1-28.el7 base 607 k [openshift/node] policycoreutils x86_64 2.5-17.1.el7 base 858 k [openshift/node] policycoreutils-python x86_64 2.5-17.1.el7 base 446 k [openshift/node] python-IPy noarch 0.75-6.el7 base 32 k [openshift/node] quota x86_64 1:4.01-14.el7 base 179 k [openshift/node] quota-nls noarch 1:4.01-14.el7 base 90 k [openshift/node] rpcbind x86_64 0.2.0-42.el7 base 59 k [openshift/node] samba-client-libs x86_64 4.6.2-12.el7_4 updates 4.7 M [openshift/node] samba-common noarch 4.6.2-12.el7_4 updates 197 k [openshift/node] selinux-policy noarch 3.13.1-166.el7_4.9 updates 437 k [openshift/node] selinux-policy-targeted noarch 3.13.1-166.el7_4.9 updates 6.5 M [openshift/node] setools-libs x86_64 3.3.8-1.1.el7 base 612 k [openshift/node] skopeo-containers x86_64 1:0.1.28-1.git0270e56.el7 extras 13 k [openshift/node] systemd-sysv x86_64 219-42.el7_4.10 updates 72 k [openshift/node] tcp_wrappers x86_64 7.6-77.el7 base 78 k [openshift/node] yajl x86_64 2.0.4-4.el7 base 39 k [openshift/node] Updating for dependencies: [openshift/node] systemd x86_64 219-42.el7_4.10 updates 5.2 M [openshift/node] systemd-libs x86_64 219-42.el7_4.10 updates 378 k [openshift/node] Transaction Summary [openshift/node] ================================================================================ [openshift/node] Install 7 Packages (+63 Dependent packages) [openshift/node] Upgrade ( 2 Dependent packages) [openshift/node] Total download size: 57 M [openshift/node] Downloading packages: [openshift/node] Delta RPMs disabled because /usr/bin/applydeltarpm not installed. 
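The "Delta RPMs disabled" line above is informational only: /usr/bin/applydeltarpm (shipped in the deltarpm package on CentOS 7) is not present in the build container, so yum downloads full RPMs instead of deltas. A minimal sketch of the two usual ways to address it, assuming the base image allows it; neither command is part of the original image build:

# install applydeltarpm so yum can rebuild packages from deltas
yum install -y deltarpm
# or disable delta handling outright to silence the message
echo 'deltarpm=0' >> /etc/yum.conf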
[openshift/node] -------------------------------------------------------------------------------- [openshift/node] Total 8.6 MB/s | 57 MB 00:06 [openshift/node] Running transaction check [openshift/node] Running transaction test [openshift/node] Transaction test succeeded [openshift/node] Running transaction [openshift/node] Updating : systemd-libs-219-42.el7_4.10.x86_64 1/74 [openshift/node] Updating : systemd-219-42.el7_4.10.x86_64 2/74 [openshift/node] Installing : libtalloc-2.1.9-1.el7.x86_64 3/74 [openshift/node] Installing : 7:device-mapper-event-libs-1.02.140-8.el7.x86_64 4/74 [openshift/node] Installing : libtevent-0.9.31-2.el7_4.x86_64 5/74 [openshift/node] Installing : libtdb-1.3.12-2.el7.x86_64 6/74 [openshift/node] Installing : keyutils-1.5.8-3.el7.x86_64 7/74 [openshift/node] Installing : GeoIP-1.5.0-11.el7.x86_64 8/74 [openshift/node] Installing : libcollection-0.6.2-27.el7.x86_64 9/74 [openshift/node] Installing : libevent-2.0.21-4.el7.x86_64 10/74 [openshift/node] Installing : libbasicobjects-0.1.1-27.el7.x86_64 11/74 [openshift/node] Installing : libref_array-0.1.5-27.el7.x86_64 12/74 [openshift/node] Installing : yajl-2.0.4-4.el7.x86_64 13/74 [openshift/node] Installing : libtirpc-0.2.4-0.10.el7.x86_64 14/74 [openshift/node] Installing : libselinux-utils-2.5-11.el7.x86_64 15/74 [openshift/node] Installing : policycoreutils-2.5-17.1.el7.x86_64 16/74 [openshift/node] Installing : selinux-policy-3.13.1-166.el7_4.9.noarch 17/74 [openshift/node] Installing : selinux-policy-targeted-3.13.1-166.el7_4.9.noarch 18/74 [openshift/node] Installing : 1:oci-systemd-hook-0.1.15-2.gitc04483d.el7.x86_64 19/74 [openshift/node] Installing : 2:oci-umount-2.3.3-3.gite3c9055.el7.x86_64 20/74 [openshift/node] Installing : libverto-libevent-0.2.5-4.el7.x86_64 21/74 [openshift/node] Installing : 32:bind-libs-9.9.4-51.el7_4.2.x86_64 22/74 [openshift/node] Installing : 32:bind-utils-9.9.4-51.el7_4.2.x86_64 23/74 [openshift/node] Installing : libldb-1.1.29-1.el7.x86_64 24/74 [openshift/node] Installing : 7:device-mapper-event-1.02.140-8.el7.x86_64 25/74 [openshift/node] Failed to get D-Bus connection: Operation not permitted [openshift/node] warning: %post(device-mapper-event-7:1.02.140-8.el7.x86_64) scriptlet failed, exit status 1 [openshift/node] Non-fatal POSTIN scriptlet failure in rpm package 7:device-mapper-event-1.02.140-8.el7.x86_64 [openshift/node] Installing : 7:lvm2-libs-2.02.171-8.el7.x86_64 26/74 [openshift/node] Installing : 7:lvm2-2.02.171-8.el7.x86_64 27/74 [openshift/node] Failed to get D-Bus connection: Operation not permitted [openshift/node] Failed to get D-Bus connection: Operation not permitted [openshift/node] Created symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmpolld.socket, pointing to /usr/lib/systemd/system/lvm2-lvmpolld.socket. 
[openshift/node] Failed to get D-Bus connection: Operation not permitted [openshift/node] warning: %post(lvm2-7:2.02.171-8.el7.x86_64) scriptlet failed, exit status 1 [openshift/node] Non-fatal POSTIN scriptlet failure in rpm package 7:lvm2-2.02.171-8.el7.x86_64 [openshift/node] Installing : systemd-sysv-219-42.el7_4.10.x86_64 28/74 [openshift/node] Installing : rpcbind-0.2.0-42.el7.x86_64 29/74 [openshift/node] Installing : iscsi-initiator-utils-iscsiuio-6.2.0.874-4.el7.x86_64 30/74 [openshift/node] Installing : iscsi-initiator-utils-6.2.0.874-4.el7.x86_64 31/74 [openshift/node] Installing : samba-common-4.6.2-12.el7_4.noarch 32/74 [openshift/node] Installing : libcgroup-0.41-13.el7.x86_64 33/74 [openshift/node] Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 34/74 [openshift/node] Installing : setools-libs-3.3.8-1.1.el7.x86_64 35/74 [openshift/node] Installing : audit-libs-python-2.7.6-3.el7.x86_64 36/74 [openshift/node] Installing : libpath_utils-0.2.1-27.el7.x86_64 37/74 [openshift/node] Installing : libini_config-1.3.0-27.el7.x86_64 38/74 [openshift/node] Installing : gssproxy-0.7.0-4.el7.x86_64 39/74 [openshift/node] Installing : tcp_wrappers-7.6-77.el7.x86_64 40/74 [openshift/node] Installing : libselinux-python-2.5-11.el7.x86_64 41/74 [openshift/node] Installing : checkpolicy-2.5-4.el7.x86_64 42/74 [openshift/node] Installing : 1:make-3.82-23.el7.x86_64 43/74 [openshift/node] Installing : 1:openssl-1.0.2k-8.el7.x86_64 44/74 [openshift/node] Installing : openvswitch-2.7.0-1.el7.x86_64 45/74 [openshift/node] Installing : bridge-utils-1.5-9.el7.x86_64 46/74 [openshift/node] Installing : libnfsidmap-0.25-17.el7.x86_64 47/74 [openshift/node] Installing : libsemanage-python-2.5-8.el7.x86_64 48/74 [openshift/node] Installing : parted-3.1-28.el7.x86_64 49/74 [openshift/node] Installing : container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch 50/74 [openshift/node] Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 51/74 [openshift/node] Installing : python-IPy-0.75-6.el7.noarch 52/74 [openshift/node] Installing : policycoreutils-python-2.5-17.1.el7.x86_64 53/74 [openshift/node] Installing : 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch 54/74 [openshift/node] setsebool: SELinux is disabled. 
[openshift/node] Installing : 1:quota-nls-4.01-14.el7.noarch 55/74 [openshift/node] Installing : 1:quota-4.01-14.el7.x86_64 56/74 [openshift/node] Installing : 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 57/74 [openshift/node] Installing : avahi-libs-0.6.31-17.el7.x86_64 58/74 [openshift/node] Installing : 1:cups-libs-1.6.3-29.el7.x86_64 59/74 [openshift/node] Installing : libwbclient-4.6.2-12.el7_4.x86_64 60/74 [openshift/node] Installing : samba-client-libs-4.6.2-12.el7_4.x86_64 61/74 [openshift/node] Installing : cifs-utils-6.2-10.el7.x86_64 62/74 [openshift/node] Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 63/74 [openshift/node] Installing : libseccomp-2.3.1-3.el7.x86_64 64/74 [openshift/node] Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 65/74 [openshift/node] Installing : conntrack-tools-1.4.4-3.el7_3.x86_64 66/74 [openshift/node] Installing : 1:skopeo-containers-0.1.28-1.git0270e56.el7.x86_64 67/74 [openshift/node] Installing : 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 68/74 [openshift/node] Installing : 2:docker-client-1.13.1-53.git774336d.el7.centos.x86_64 69/74 [openshift/node] Installing : 2:docker-1.13.1-53.git774336d.el7.centos.x86_64 70/74 [openshift/node] Installing : origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 71/74 [openshift/node] Failed to get D-Bus connection: Operation not permitted [openshift/node] Installing : origin-sdn-ovs-3.10.0-0.alpha.0.549.4253ab3.x86_64 72/74 [openshift/node] Cleanup : systemd-219-42.el7_4.7.x86_64 73/74 [openshift/node] Cleanup : systemd-libs-219-42.el7_4.7.x86_64 74/74 [openshift/node] Verifying : libselinux-utils-2.5-11.el7.x86_64 1/74 [openshift/node] Verifying : 1:quota-4.01-14.el7.x86_64 2/74 [openshift/node] Verifying : 1:skopeo-containers-0.1.28-1.git0270e56.el7.x86_64 3/74 [openshift/node] Verifying : libtirpc-0.2.4-0.10.el7.x86_64 4/74 [openshift/node] Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 5/74 [openshift/node] Verifying : libtevent-0.9.31-2.el7_4.x86_64 6/74 [openshift/node] Verifying : libini_config-1.3.0-27.el7.x86_64 7/74 [openshift/node] Verifying : yajl-2.0.4-4.el7.x86_64 8/74 [openshift/node] Verifying : openvswitch-2.7.0-1.el7.x86_64 9/74 [openshift/node] Verifying : 1:cups-libs-1.6.3-29.el7.x86_64 10/74 [openshift/node] Verifying : origin-node-3.10.0-0.alpha.0.549.4253ab3.x86_64 11/74 [openshift/node] Verifying : libseccomp-2.3.1-3.el7.x86_64 12/74 [openshift/node] Verifying : policycoreutils-python-2.5-17.1.el7.x86_64 13/74 [openshift/node] Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 14/74 [openshift/node] Verifying : systemd-sysv-219-42.el7_4.10.x86_64 15/74 [openshift/node] Verifying : conntrack-tools-1.4.4-3.el7_3.x86_64 16/74 [openshift/node] Verifying : 32:bind-libs-9.9.4-51.el7_4.2.x86_64 17/74 [openshift/node] Verifying : 7:device-mapper-event-1.02.140-8.el7.x86_64 18/74 [openshift/node] Verifying : iscsi-initiator-utils-6.2.0.874-4.el7.x86_64 19/74 [openshift/node] Verifying : avahi-libs-0.6.31-17.el7.x86_64 20/74 [openshift/node] Verifying : 1:quota-nls-4.01-14.el7.noarch 21/74 [openshift/node] Verifying : selinux-policy-targeted-3.13.1-166.el7_4.9.noarch 22/74 [openshift/node] Verifying : 2:container-selinux-2.42-1.gitad8f0f7.el7.noarch 23/74 [openshift/node] Verifying : 7:device-mapper-event-libs-1.02.140-8.el7.x86_64 24/74 [openshift/node] Verifying : 1:openssl-1.0.2k-8.el7.x86_64 25/74 [openshift/node] Verifying : samba-common-4.6.2-12.el7_4.noarch 26/74 [openshift/node] Verifying : python-IPy-0.75-6.el7.noarch 27/74 [openshift/node] Verifying : 
2:docker-client-1.13.1-53.git774336d.el7.centos.x86_64 28/74 [openshift/node] Verifying : 1:nfs-utils-1.3.0-0.48.el7_4.2.x86_64 29/74 [openshift/node] Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 30/74 [openshift/node] Verifying : libwbclient-4.6.2-12.el7_4.x86_64 31/74 [openshift/node] Verifying : libtalloc-2.1.9-1.el7.x86_64 32/74 [openshift/node] Verifying : 2:docker-common-1.13.1-53.git774336d.el7.centos.x86_64 33/74 [openshift/node] Verifying : 2:docker-1.13.1-53.git774336d.el7.centos.x86_64 34/74 [openshift/node] Verifying : libcgroup-0.41-13.el7.x86_64 35/74 [openshift/node] Verifying : 32:bind-utils-9.9.4-51.el7_4.2.x86_64 36/74 [openshift/node] Verifying : parted-3.1-28.el7.x86_64 37/74 [openshift/node] Verifying : gssproxy-0.7.0-4.el7.x86_64 38/74 [openshift/node] Verifying : policycoreutils-2.5-17.1.el7.x86_64 39/74 [openshift/node] Verifying : systemd-libs-219-42.el7_4.10.x86_64 40/74 [openshift/node] Verifying : selinux-policy-3.13.1-166.el7_4.9.noarch 41/74 [openshift/node] Verifying : libsemanage-python-2.5-8.el7.x86_64 42/74 [openshift/node] Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.874-4.el7.x86_64 43/74 [openshift/node] Verifying : libref_array-0.1.5-27.el7.x86_64 44/74 [openshift/node] Verifying : libnfsidmap-0.25-17.el7.x86_64 45/74 [openshift/node] Verifying : systemd-219-42.el7_4.10.x86_64 46/74 [openshift/node] Verifying : libbasicobjects-0.1.1-27.el7.x86_64 47/74 [openshift/node] Verifying : 7:lvm2-libs-2.02.171-8.el7.x86_64 48/74 [openshift/node] Verifying : libevent-2.0.21-4.el7.x86_64 49/74 [openshift/node] Verifying : cifs-utils-6.2-10.el7.x86_64 50/74 [openshift/node] Verifying : bridge-utils-1.5-9.el7.x86_64 51/74 [openshift/node] Verifying : libverto-libevent-0.2.5-4.el7.x86_64 52/74 [openshift/node] Verifying : 1:oci-systemd-hook-0.1.15-2.gitc04483d.el7.x86_64 53/74 [openshift/node] Verifying : 1:make-3.82-23.el7.x86_64 54/74 [openshift/node] Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 55/74 [openshift/node] Verifying : 7:lvm2-2.02.171-8.el7.x86_64 56/74 [openshift/node] Verifying : checkpolicy-2.5-4.el7.x86_64 57/74 [openshift/node] Verifying : libselinux-python-2.5-11.el7.x86_64 58/74 [openshift/node] Verifying : 2:oci-umount-2.3.3-3.gite3c9055.el7.x86_64 59/74 [openshift/node] Verifying : origin-sdn-ovs-3.10.0-0.alpha.0.549.4253ab3.x86_64 60/74 [openshift/node] Verifying : rpcbind-0.2.0-42.el7.x86_64 61/74 [openshift/node] Verifying : libcollection-0.6.2-27.el7.x86_64 62/74 [openshift/node] Verifying : GeoIP-1.5.0-11.el7.x86_64 63/74 [openshift/node] Verifying : keyutils-1.5.8-3.el7.x86_64 64/74 [openshift/node] Verifying : tcp_wrappers-7.6-77.el7.x86_64 65/74 [openshift/node] Verifying : libpath_utils-0.2.1-27.el7.x86_64 66/74 [openshift/node] Verifying : audit-libs-python-2.7.6-3.el7.x86_64 67/74 [openshift/node] Verifying : libtdb-1.3.12-2.el7.x86_64 68/74 [openshift/node] Verifying : container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch 69/74 [openshift/node] Verifying : libldb-1.1.29-1.el7.x86_64 70/74 [openshift/node] Verifying : samba-client-libs-4.6.2-12.el7_4.x86_64 71/74 [openshift/node] Verifying : setools-libs-3.3.8-1.1.el7.x86_64 72/74 [openshift/node] Verifying : systemd-libs-219-42.el7_4.7.x86_64 73/74 [openshift/node] Verifying : systemd-219-42.el7_4.7.x86_64 74/74 [openshift/node] Installed: [openshift/node] bind-utils.x86_64 32:9.9.4-51.el7_4.2 [openshift/node] bridge-utils.x86_64 0:1.5-9.el7 [openshift/node] conntrack-tools.x86_64 0:1.4.4-3.el7_3 [openshift/node] iscsi-initiator-utils.x86_64 
0:6.2.0.874-4.el7 [openshift/node] openssl.x86_64 1:1.0.2k-8.el7 [openshift/node] openvswitch.x86_64 0:2.7.0-1.el7 [openshift/node] origin-sdn-ovs.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/node] Dependency Installed: [openshift/node] GeoIP.x86_64 0:1.5.0-11.el7 [openshift/node] audit-libs-python.x86_64 0:2.7.6-3.el7 [openshift/node] avahi-libs.x86_64 0:0.6.31-17.el7 [openshift/node] bind-libs.x86_64 32:9.9.4-51.el7_4.2 [openshift/node] checkpolicy.x86_64 0:2.5-4.el7 [openshift/node] cifs-utils.x86_64 0:6.2-10.el7 [openshift/node] container-selinux.noarch 2:2.42-1.gitad8f0f7.el7 [openshift/node] container-storage-setup.noarch 0:0.8.0-3.git1d27ecf.el7 [openshift/node] cups-libs.x86_64 1:1.6.3-29.el7 [openshift/node] device-mapper-event.x86_64 7:1.02.140-8.el7 [openshift/node] device-mapper-event-libs.x86_64 7:1.02.140-8.el7 [openshift/node] docker.x86_64 2:1.13.1-53.git774336d.el7.centos [openshift/node] docker-client.x86_64 2:1.13.1-53.git774336d.el7.centos [openshift/node] docker-common.x86_64 2:1.13.1-53.git774336d.el7.centos [openshift/node] gssproxy.x86_64 0:0.7.0-4.el7 [openshift/node] iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-4.el7 [openshift/node] keyutils.x86_64 0:1.5.8-3.el7 [openshift/node] libbasicobjects.x86_64 0:0.1.1-27.el7 [openshift/node] libcgroup.x86_64 0:0.41-13.el7 [openshift/node] libcollection.x86_64 0:0.6.2-27.el7 [openshift/node] libevent.x86_64 0:2.0.21-4.el7 [openshift/node] libini_config.x86_64 0:1.3.0-27.el7 [openshift/node] libldb.x86_64 0:1.1.29-1.el7 [openshift/node] libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 [openshift/node] libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 [openshift/node] libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 [openshift/node] libnfsidmap.x86_64 0:0.25-17.el7 [openshift/node] libpath_utils.x86_64 0:0.2.1-27.el7 [openshift/node] libref_array.x86_64 0:0.1.5-27.el7 [openshift/node] libseccomp.x86_64 0:2.3.1-3.el7 [openshift/node] libselinux-python.x86_64 0:2.5-11.el7 [openshift/node] libselinux-utils.x86_64 0:2.5-11.el7 [openshift/node] libsemanage-python.x86_64 0:2.5-8.el7 [openshift/node] libtalloc.x86_64 0:2.1.9-1.el7 [openshift/node] libtdb.x86_64 0:1.3.12-2.el7 [openshift/node] libtevent.x86_64 0:0.9.31-2.el7_4 [openshift/node] libtirpc.x86_64 0:0.2.4-0.10.el7 [openshift/node] libverto-libevent.x86_64 0:0.2.5-4.el7 [openshift/node] libwbclient.x86_64 0:4.6.2-12.el7_4 [openshift/node] lvm2.x86_64 7:2.02.171-8.el7 [openshift/node] lvm2-libs.x86_64 7:2.02.171-8.el7 [openshift/node] make.x86_64 1:3.82-23.el7 [openshift/node] nfs-utils.x86_64 1:1.3.0-0.48.el7_4.2 [openshift/node] oci-register-machine.x86_64 1:0-6.git2b44233.el7 [openshift/node] oci-systemd-hook.x86_64 1:0.1.15-2.gitc04483d.el7 [openshift/node] oci-umount.x86_64 2:2.3.3-3.gite3c9055.el7 [openshift/node] origin-node.x86_64 0:3.10.0-0.alpha.0.549.4253ab3 [openshift/node] parted.x86_64 0:3.1-28.el7 [openshift/node] policycoreutils.x86_64 0:2.5-17.1.el7 [openshift/node] policycoreutils-python.x86_64 0:2.5-17.1.el7 [openshift/node] python-IPy.noarch 0:0.75-6.el7 [openshift/node] quota.x86_64 1:4.01-14.el7 [openshift/node] quota-nls.noarch 1:4.01-14.el7 [openshift/node] rpcbind.x86_64 0:0.2.0-42.el7 [openshift/node] samba-client-libs.x86_64 0:4.6.2-12.el7_4 [openshift/node] samba-common.noarch 0:4.6.2-12.el7_4 [openshift/node] selinux-policy.noarch 0:3.13.1-166.el7_4.9 [openshift/node] selinux-policy-targeted.noarch 0:3.13.1-166.el7_4.9 [openshift/node] setools-libs.x86_64 0:3.3.8-1.1.el7 [openshift/node] skopeo-containers.x86_64 1:0.1.28-1.git0270e56.el7 [openshift/node] 
systemd-sysv.x86_64 0:219-42.el7_4.10 [openshift/node] tcp_wrappers.x86_64 0:7.6-77.el7 [openshift/node] yajl.x86_64 0:2.0.4-4.el7 [openshift/node] Dependency Updated: [openshift/node] systemd.x86_64 0:219-42.el7_4.10 systemd-libs.x86_64 0:219-42.el7_4.10 [openshift/node] Complete! [openshift/node] Loaded plugins: fastestmirror, ovl [openshift/node] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/node] : centos-ceph-luminous extras updates [openshift/node] Cleaning up everything [openshift/node] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/node] Cleaning up list of fastest mirrors [openshift/node] --> RUN if test -e /opt/cni/bin; then mkdir -p /exports/hostfs/opt/cni/bin/ && cp -r /opt/cni/bin/* /exports/hostfs/opt/cni/bin/; fi [openshift/node] --> LABEL io.k8s.display-name="OpenShift Origin Node" io.k8s.description="This is a component of OpenShift Origin and contains the software for individual nodes when using SDN." io.openshift.tags="openshift,node" [openshift/node] --> VOLUME /etc/origin/node [openshift/node] --> ENV KUBECONFIG=/etc/origin/node/node.kubeconfig [openshift/node] --> ENTRYPOINT [ "/usr/local/bin/origin-node-run.sh" ] [openshift/node] --> Committing changes to openshift/node:4253ab3 ... [openshift/node] --> Tagged as openshift/node:latest [openshift/node] --> Done [openshift/hello-openshift] --> FROM scratchaymqybi2vrtliymiz68z25dg [openshift/hello-openshift] --> MAINTAINER Jessica Forrester <jforrest@redhat.com> [openshift/hello-openshift] --> COPY bin/hello-openshift /hello-openshift [openshift/hello-openshift] --> EXPOSE 8080 8888 [openshift/hello-openshift] --> USER 1001 [openshift/hello-openshift] --> ENTRYPOINT ["/hello-openshift"] [openshift/hello-openshift] --> Committing changes to openshift/hello-openshift:4253ab3 ... [openshift/hello-openshift] --> Tagged as openshift/hello-openshift:latest [openshift/hello-openshift] --> Done [openshift/hello-openshift] Removing examples/hello-openshift/bin/hello-openshift [openshift/openvswitch] --> FROM openshift/node [openshift/openvswitch] --> COPY scripts/* /usr/local/bin/ [openshift/openvswitch] --> RUN INSTALL_PKGS="openvswitch" && yum install -y ${INSTALL_PKGS} && rpm -V ${INSTALL_PKGS} && yum clean all [openshift/openvswitch] Loaded plugins: fastestmirror, ovl [openshift/openvswitch] Determining fastest mirrors [openshift/openvswitch] * base: mirror.math.princeton.edu [openshift/openvswitch] * extras: mirror.teklinks.com [openshift/openvswitch] * updates: mirror.cs.pitt.edu [openshift/openvswitch] Package openvswitch-2.7.0-1.el7.x86_64 already installed and latest version [openshift/openvswitch] Nothing to do [openshift/openvswitch] Loaded plugins: fastestmirror, ovl [openshift/openvswitch] Cleaning repos: base cbs-paas7-openshift-multiarch-el7-build [openshift/openvswitch] : centos-ceph-luminous extras updates [openshift/openvswitch] Cleaning up everything [openshift/openvswitch] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [openshift/openvswitch] Cleaning up list of fastest mirrors [openshift/openvswitch] --> LABEL io.openshift.tags="openshift,openvswitch" io.k8s.display-name="OpenShift Origin OpenVSwitch Daemon" io.k8s.description="This is a component of OpenShift Origin and runs an OpenVSwitch daemon process." 
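The repeated "Failed to get D-Bus connection: Operation not permitted" messages during the node image's yum transaction above come from package %post scriptlets (device-mapper-event, lvm2, origin-node) that call systemctl while the image is being built in a container with no running systemd instance or system bus; yum records them as non-fatal scriptlet failures and the transaction still completes. A common guard for scripts that must work both on hosts and inside such containers is sketched below; it is illustrative and not taken from these packages:

# Only talk to systemd when an instance is actually managing this environment.
if [ -d /run/systemd/system ]; then
    systemctl daemon-reload
fi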
[openshift/openvswitch] --> VOLUME /etc/openswitch [openshift/openvswitch] --> ENV HOME /root [openshift/openvswitch] --> COPY system-container/system-container-wrapper.sh /usr/local/bin/ [openshift/openvswitch] --> COPY system-container/config.json.template system-container/service.template system-container/tmpfiles.template system-container/manifest.json /exports/ [openshift/openvswitch] --> ENTRYPOINT ["/usr/local/bin/ovs-run.sh"] [openshift/openvswitch] --> Committing changes to openshift/openvswitch:4253ab3 ... [openshift/openvswitch] --> Tagged as openshift/openvswitch:latest [openshift/openvswitch] --> Done [INFO] [19:49:49+0000] hack/build-images.sh exited with code 0 after 00h 06m 02s + sed -i 's|go/src|data/src|' _output/local/releases/rpms/origin-local-release.repo + sudo cp _output/local/releases/rpms/origin-local-release.repo /etc/yum.repos.d/ + sudo systemctl restart docker.service + set +o xtrace ########## FINISHED STAGE: SUCCESS: BUILD AN ORIGIN RELEASE [00h 23m 50s] ########## [workspace] $ /bin/bash /tmp/jenkins5131918183458430628.sh ########## STARTING STAGE: DETERMINE THE RELEASE COMMIT FOR ORIGIN IMAGES AND VERSION FOR RPMS ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.rIf9t6kVa2 + cat + chmod +x /tmp/tmp.rIf9t6kVa2 + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.rIf9t6kVa2 openshiftdevel:/tmp/tmp.rIf9t6kVa2 + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 600 /tmp/tmp.rIf9t6kVa2"' + cd /data/src/github.com/openshift/origin + jobs_repo=/data/src/github.com/openshift/aos-cd-jobs/ + git log -1 --pretty=%h + source hack/lib/init.sh ++ set -o errexit ++ set -o nounset ++ set -o pipefail +++ date +%s ++ OS_SCRIPT_START_TIME=1522957854 ++ export OS_SCRIPT_START_TIME ++ readonly -f os::util::absolute_path +++ dirname hack/lib/init.sh ++ init_source=hack/lib/../.. +++ os::util::absolute_path hack/lib/../.. +++ local relative_path=hack/lib/../.. +++ local absolute_path +++ pushd hack/lib/../.. 
++++ pwd +++ relative_path=/data/src/github.com/openshift/origin +++ [[ -h /data/src/github.com/openshift/origin ]] +++ absolute_path=/data/src/github.com/openshift/origin +++ popd +++ echo /data/src/github.com/openshift/origin ++ OS_ROOT=/data/src/github.com/openshift/origin ++ export OS_ROOT ++ cd /data/src/github.com/openshift/origin +++ find /data/src/github.com/openshift/origin/hack/lib -type f -name '*.sh' -not -path '*/hack/lib/init.sh' ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/archive.sh +++ readonly -f os::build::archive::name +++ readonly -f os::build::archive::zip +++ readonly -f os::build::archive::tar +++ readonly -f os::build::archive::internal::is_hardlink_supported +++ readonly -f os::build::archive::extract_tar +++ readonly -f os::build::archive::detect_local_release_tars ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/binaries.sh +++ readonly -f os::build::binaries_from_targets +++ readonly -f os::build::host_platform +++ readonly -f os::build::host_platform_friendly +++ readonly -f os::build::platform_arch +++ readonly -f os::build::setup_env +++ readonly -f os::build::build_static_binaries +++ readonly -f os::build::build_binaries +++ readonly -f os::build::export_targets +++ readonly -f os::build::place_bins +++ readonly -f os::build::release_sha +++ readonly -f os::build::make_openshift_binary_symlinks +++ readonly -f os::build::ldflag +++ readonly -f os::build::require_clean_tree +++ readonly -f os::build::commit_range ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/environment.sh +++ readonly -f os::build::environment::create +++ readonly -f os::build::environment::release::workingdir +++ readonly -f os::build::environment::cleanup +++ readonly -f os::build::environment::start +++ readonly -f os::build::environment::withsource +++ readonly -f os::build::environment::volume_name +++ readonly -f os::build::environment::remove_volume +++ readonly -f os::build::environment::run ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/images.sh +++ readonly -f os::build::image +++ readonly -f os::build::image::internal::generic +++ readonly -f os::build::image::internal::imagebuilder +++ readonly -f os::build::image::internal::docker ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/release.sh ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/rpm.sh ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/build/version.sh +++ readonly -f os::build::version::get_vars +++ readonly -f os::build::version::git_vars +++ readonly -f os::build::version::etcd_vars +++ readonly -f os::build::version::kubernetes_vars +++ readonly -f os::build::version::save_vars 
++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/cleanup.sh +++ readonly -f os::cleanup::all +++ readonly -f os::cleanup::dump_etcd +++ readonly -f os::cleanup::internal::dump_etcd_v3 +++ readonly -f os::cleanup::prune_etcd +++ readonly -f os::cleanup::containers +++ readonly -f os::cleanup::dump_container_logs +++ readonly -f os::cleanup::internal::list_our_containers +++ readonly -f os::cleanup::internal::list_k8s_containers +++ readonly -f os::cleanup::internal::list_containers +++ readonly -f os::cleanup::tmpdir +++ readonly -f os::cleanup::dump_events +++ readonly -f os::cleanup::find_cache_alterations +++ readonly -f os::cleanup::dump_pprof_output +++ readonly -f os::cleanup::truncate_large_logs +++ readonly -f os::cleanup::processes ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/cmd.sh +++ readonly -f os::cmd::expect_success +++ readonly -f os::cmd::expect_failure +++ readonly -f os::cmd::expect_success_and_text +++ readonly -f os::cmd::expect_failure_and_text +++ readonly -f os::cmd::expect_success_and_not_text +++ readonly -f os::cmd::expect_failure_and_not_text +++ readonly -f os::cmd::expect_code +++ readonly -f os::cmd::expect_code_and_text +++ readonly -f os::cmd::expect_code_and_not_text +++ millisecond=1 +++ second=1000 +++ minute=60000 +++ readonly -f os::cmd::try_until_success +++ readonly -f os::cmd::try_until_failure +++ readonly -f os::cmd::try_until_text +++ readonly -f os::cmd::try_until_text +++ os_cmd_internal_tmpdir=/tmp/openshift +++ os_cmd_internal_tmpout=/tmp/openshift/tmp_stdout.log +++ os_cmd_internal_tmperr=/tmp/openshift/tmp_stderr.log +++ readonly -f os::cmd::internal::expect_exit_code_run_grep +++ readonly -f os::cmd::internal::init_tempdir +++ readonly -f os::cmd::internal::describe_call +++ readonly -f os::cmd::internal::determine_caller +++ readonly -f os::cmd::internal::describe_expectation +++ readonly -f os::cmd::internal::seconds_since_epoch +++ readonly -f os::cmd::internal::run_collecting_output +++ readonly -f os::cmd::internal::success_func +++ readonly -f os::cmd::internal::failure_func +++ readonly -f os::cmd::internal::specific_code_func +++ readonly -f os::cmd::internal::get_results +++ readonly -f os::cmd::internal::get_last_results +++ readonly -f os::cmd::internal::mark_attempt +++ readonly -f os::cmd::internal::compress_output +++ readonly -f os::cmd::internal::print_results +++ readonly -f os::cmd::internal::assemble_causes +++ readonly -f os::cmd::internal::run_until_exit_code +++ readonly -f os::cmd::internal::run_until_text ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/log/stacktrace.sh +++ readonly -f os::log::stacktrace::install +++ readonly -f os::log::stacktrace::print ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/log/system.sh +++ readonly -f os::log::system::install_cleanup +++ readonly -f os::log::system::clean_up +++ readonly -f os::log::system::internal::prune_datafile +++ readonly -f os::log::system::internal::plot +++ readonly -f os::log::system::start +++ readonly -f os::log::system::internal::run ++ for 
library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/log/output.sh +++ readonly -f os::log::info +++ readonly -f os::log::warning +++ readonly -f os::log::error +++ readonly -f os::log::fatal +++ readonly -f os::log::debug ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/test/junit.sh +++ readonly -f os::test::junit::declare_suite_start +++ readonly -f os::test::junit::declare_suite_end +++ readonly -f os::test::junit::declare_test_start +++ readonly -f os::test::junit::declare_test_end +++ readonly -f os::test::junit::check_test_counters +++ readonly -f os::test::junit::reconcile_output ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/docs.sh +++ readonly -f generate_manual_pages +++ readonly -f generate_documentation +++ readonly -f os::util::gen-docs +++ readonly -f os::util::set-man-placeholder +++ readonly -f os::util::set-docs-placeholder ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/ensure.sh +++ readonly -f os::util::ensure::system_binary_exists +++ readonly -f os::util::ensure::built_binary_exists +++ readonly -f os::util::ensure::gopath_binary_exists +++ readonly -f os::util::ensure::iptables_privileges_exist ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/environment.sh +++ readonly -f os::util::environment::use_sudo +++ readonly -f os::util::environment::setup_time_vars +++ readonly -f os::util::environment::setup_all_server_vars +++ readonly -f os::util::environment::update_path_var +++ readonly -f os::util::environment::setup_tmpdir_vars +++ readonly -f os::util::environment::setup_kubelet_vars +++ readonly -f os::util::environment::setup_etcd_vars +++ readonly -f os::util::environment::setup_server_vars +++ readonly -f os::util::environment::setup_images_vars ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/find.sh +++ readonly -f os::util::find::system_binary +++ readonly -f os::util::find::built_binary +++ readonly -f os::util::find::gopath_binary ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/golang.sh +++ readonly -f os::golang::verify_go_version +++ readonly -f os::golang::verify_glide_version ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/text.sh +++ readonly -f os::text::reset +++ readonly -f os::text::bold +++ readonly -f os::text::red +++ readonly -f os::text::green +++ readonly -f os::text::blue +++ readonly -f os::text::yellow +++ readonly -f os::text::clear_last_line +++ readonly -f os::text::internal::is_tty +++ readonly -f os::text::print_bold +++ readonly -f os::text::print_red +++ readonly -f 
os::text::print_red_bold +++ readonly -f os::text::print_green +++ readonly -f os::text::print_green_bold +++ readonly -f os::text::print_blue +++ readonly -f os::text::print_blue_bold +++ readonly -f os::text::print_yellow +++ readonly -f os::text::print_yellow_bold ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/trap.sh +++ readonly -f os::util::trap::init_err +++ readonly -f os::util::trap::init_exit +++ readonly -f os::util::trap::err_handler +++ readonly -f os::util::trap::exit_handler ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/util/misc.sh +++ readonly -f os::util::describe_return_code +++ readonly -f os::util::install_describe_return_code +++ [[ -z '' ]] ++++ pwd +++ OS_ORIGINAL_WD=/data/src/github.com/openshift/origin +++ readonly OS_ORIGINAL_WD +++ export OS_ORIGINAL_WD +++ readonly -f os::util::repository_relative_path +++ readonly -f os::util::format_seconds +++ readonly -f os::util::sed +++ readonly -f os::util::base64decode ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path '\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/constants.sh +++ readonly OS_GO_PACKAGE=github.com/openshift/origin +++ OS_GO_PACKAGE=github.com/openshift/origin +++ readonly OS_BUILD_ENV_GOLANG=1.9 +++ OS_BUILD_ENV_GOLANG=1.9 +++ readonly OS_BUILD_ENV_IMAGE=openshift/origin-release:golang-1.9 +++ OS_BUILD_ENV_IMAGE=openshift/origin-release:golang-1.9 +++ readonly OS_REQUIRED_GO_VERSION=go1.9 +++ OS_REQUIRED_GO_VERSION=go1.9 +++ readonly OS_GLIDE_MINOR_VERSION=13 +++ OS_GLIDE_MINOR_VERSION=13 +++ readonly OS_REQUIRED_GLIDE_VERSION=0.13 +++ OS_REQUIRED_GLIDE_VERSION=0.13 +++ readonly 'OS_GOFLAGS_TAGS=include_gcs include_oss containers_image_openpgp' +++ OS_GOFLAGS_TAGS='include_gcs include_oss containers_image_openpgp' +++ readonly OS_GOFLAGS_TAGS_LINUX_AMD64=gssapi +++ OS_GOFLAGS_TAGS_LINUX_AMD64=gssapi +++ readonly OS_GOFLAGS_TAGS_LINUX_S390X=gssapi +++ OS_GOFLAGS_TAGS_LINUX_S390X=gssapi +++ readonly OS_GOFLAGS_TAGS_LINUX_ARM64=gssapi +++ OS_GOFLAGS_TAGS_LINUX_ARM64=gssapi +++ readonly OS_GOFLAGS_TAGS_LINUX_PPC64LE=gssapi +++ OS_GOFLAGS_TAGS_LINUX_PPC64LE=gssapi +++ readonly OS_OUTPUT_BASEPATH=_output +++ OS_OUTPUT_BASEPATH=_output +++ readonly OS_BASE_OUTPUT=/data/src/github.com/openshift/origin/_output +++ OS_BASE_OUTPUT=/data/src/github.com/openshift/origin/_output +++ readonly OS_OUTPUT_SCRIPTPATH=/data/src/github.com/openshift/origin/_output/scripts +++ OS_OUTPUT_SCRIPTPATH=/data/src/github.com/openshift/origin/_output/scripts +++ readonly OS_OUTPUT_SUBPATH=_output/local +++ OS_OUTPUT_SUBPATH=_output/local +++ readonly OS_OUTPUT=/data/src/github.com/openshift/origin/_output/local +++ OS_OUTPUT=/data/src/github.com/openshift/origin/_output/local +++ readonly OS_OUTPUT_RELEASEPATH=/data/src/github.com/openshift/origin/_output/local/releases +++ OS_OUTPUT_RELEASEPATH=/data/src/github.com/openshift/origin/_output/local/releases +++ readonly OS_OUTPUT_RPMPATH=/data/src/github.com/openshift/origin/_output/local/releases/rpms +++ OS_OUTPUT_RPMPATH=/data/src/github.com/openshift/origin/_output/local/releases/rpms +++ readonly OS_OUTPUT_BINPATH=/data/src/github.com/openshift/origin/_output/local/bin +++ 
OS_OUTPUT_BINPATH=/data/src/github.com/openshift/origin/_output/local/bin +++ readonly OS_OUTPUT_PKGDIR=/data/src/github.com/openshift/origin/_output/local/pkgdir +++ OS_OUTPUT_PKGDIR=/data/src/github.com/openshift/origin/_output/local/pkgdir +++ OS_SDN_COMPILE_TARGETS_LINUX=(pkg/network/sdn-cni-plugin vendor/github.com/containernetworking/plugins/plugins/ipam/host-local vendor/github.com/containernetworking/plugins/plugins/main/loopback) +++ readonly OS_SDN_COMPILE_TARGETS_LINUX +++ OS_IMAGE_COMPILE_TARGETS_LINUX=("${OS_SDN_COMPILE_TARGETS_LINUX[@]}") +++ readonly OS_IMAGE_COMPILE_TARGETS_LINUX +++ OS_SCRATCH_IMAGE_COMPILE_TARGETS_LINUX=(images/pod examples/hello-openshift) +++ readonly OS_SCRATCH_IMAGE_COMPILE_TARGETS_LINUX +++ OS_IMAGE_COMPILE_BINARIES=("${OS_SCRATCH_IMAGE_COMPILE_TARGETS_LINUX[@]##*/}" "${OS_IMAGE_COMPILE_TARGETS_LINUX[@]##*/}") +++ readonly OS_IMAGE_COMPILE_BINARIES +++ OS_CROSS_COMPILE_TARGETS=(cmd/hypershift cmd/openshift cmd/oc cmd/oadm cmd/template-service-broker vendor/k8s.io/kubernetes/cmd/hyperkube) +++ readonly OS_CROSS_COMPILE_TARGETS +++ OS_CROSS_COMPILE_BINARIES=("${OS_CROSS_COMPILE_TARGETS[@]##*/}") +++ readonly OS_CROSS_COMPILE_BINARIES +++ OS_TEST_TARGETS=(test/extended/extended.test) +++ readonly OS_TEST_TARGETS +++ OS_GOVET_BLACKLIST=("pkg/.*/generated/internalclientset/fake/clientset_generated.go:[0-9]+: literal copies lock value from fakePtr: github.com/openshift/origin/vendor/k8s.io/client-go/testing.Fake" "pkg/.*/generated/clientset/fake/clientset_generated.go:[0-9]+: literal copies lock value from fakePtr: github.com/openshift/origin/vendor/k8s.io/client-go/testing.Fake" "pkg/build/vendor/github.com/docker/docker/client/hijack.go:[0-9]+: assignment copies lock value to c: crypto/tls.Config contains sync.Once contains sync.Mutex" "cmd/cluster-capacity/.*" "pkg/build/builder/vendor/.*" "pkg/cmd/server/start/.*") +++ readonly OS_GOVET_BLACKLIST +++ OPENSHIFT_BINARY_SYMLINKS=(openshift-router openshift-deploy openshift-recycle openshift-sti-build openshift-docker-build openshift-git-clone openshift-manage-dockerfile openshift-extract-image-content origin) +++ readonly OPENSHIFT_BINARY_SYMLINKS +++ OC_BINARY_COPY=(kubectl) +++ readonly OC_BINARY_COPY +++ OS_BINARY_RELEASE_CLIENT_WINDOWS=(oc.exe README.md ./LICENSE) +++ readonly OS_BINARY_RELEASE_CLIENT_WINDOWS +++ OS_BINARY_RELEASE_CLIENT_MAC=(oc README.md ./LICENSE) +++ readonly OS_BINARY_RELEASE_CLIENT_MAC +++ OS_BINARY_RELEASE_CLIENT_LINUX=(./oc ./README.md ./LICENSE) +++ readonly OS_BINARY_RELEASE_CLIENT_LINUX +++ OS_BINARY_RELEASE_SERVER_LINUX=('./*') +++ readonly OS_BINARY_RELEASE_SERVER_LINUX +++ OS_BINARY_RELEASE_CLIENT_EXTRA=(${OS_ROOT}/README.md ${OS_ROOT}/LICENSE) +++ readonly OS_BINARY_RELEASE_CLIENT_EXTRA +++ readonly -f os::build::ldflags +++ readonly -f os::util::list_go_src_files +++ readonly -f os::util::list_go_src_dirs +++ readonly -f os::util::list_test_packages_under +++ readonly -f os::build::generate_windows_versioninfo +++ readonly -f os::build::clean_windows_versioninfo +++ OS_ALL_IMAGES=(origin origin-base origin-pod origin-deployer origin-docker-builder origin-keepalived-ipfailover origin-sti-builder origin-haproxy-router origin-f5-router origin-egress-router origin-egress-http-proxy origin-egress-dns-proxy origin-recycler origin-cluster-capacity origin-template-service-broker hello-openshift openvswitch node) +++ readonly OS_ALL_IMAGES +++ readonly -f os::build::images ++ for library_file in '$( find "${OS_ROOT}/hack/lib" -type f -name '\''*.sh'\'' -not -path 
'\''*/hack/lib/init.sh'\'' )' ++ source /data/src/github.com/openshift/origin/hack/lib/start.sh +++ readonly -f os::start::configure_server +++ readonly -f os::start::internal::create_master_certs +++ readonly -f os::start::internal::configure_node +++ readonly -f os::start::internal::configure_master +++ readonly -f os::start::internal::patch_master_config +++ readonly -f os::start::server +++ readonly -f os::start::master +++ readonly -f os::start::all_in_one +++ readonly -f os::start::etcd +++ readonly -f os::start::api_server +++ readonly -f os::start::controllers +++ readonly -f os::start::internal::start_node +++ readonly -f os::start::internal::openshift_executable +++ readonly -f os::start::internal::determine_hostnames +++ readonly -f os::start::router +++ readonly -f os::start::registry ++ unset library_files library_file init_source ++ os::log::stacktrace::install ++ set -o errtrace ++ export OS_USE_STACKTRACE=true ++ OS_USE_STACKTRACE=true ++ os::util::trap::init_err ++ trap -p ERR ++ grep -q os::util::trap::err_handler ++ trap 'os::util::trap::err_handler;' ERR ++ os::util::environment::update_path_var ++ local prefix ++ os::util::find::system_binary go +++ os::build::host_platform ++++ go env GOHOSTOS ++++ go env GOHOSTARCH +++ echo linux/amd64 ++ prefix+=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64: ++ [[ -n /data ]] ++ prefix+=/data/bin: ++ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/data/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin ++ export PATH ++ [[ -z '' ]] ++ [[ /tmp/tmp.rIf9t6kVa2 =~ .*\.sh ]] ++ os::util::environment::setup_tmpdir_vars shell ++ local sub_dir=shell ++ BASETMPDIR=/tmp/openshift/shell ++ export BASETMPDIR ++ VOLUME_DIR=/tmp/openshift/shell/volumes ++ export VOLUME_DIR ++ BASEOUTDIR=/data/src/github.com/openshift/origin/_output/scripts/shell ++ export BASEOUTDIR ++ LOG_DIR=/data/src/github.com/openshift/origin/_output/scripts/shell/logs ++ export LOG_DIR ++ ARTIFACT_DIR=/data/src/github.com/openshift/origin/_output/scripts/shell/artifacts ++ export ARTIFACT_DIR ++ FAKE_HOME_DIR=/data/src/github.com/openshift/origin/_output/scripts/shell/openshift.local.home ++ export FAKE_HOME_DIR ++ mkdir -p /data/src/github.com/openshift/origin/_output/scripts/shell/logs /tmp/openshift/shell/volumes /data/src/github.com/openshift/origin/_output/scripts/shell/artifacts /data/src/github.com/openshift/origin/_output/scripts/shell/openshift.local.home ++ export OS_TMP_ENV_SET=shell ++ OS_TMP_ENV_SET=shell ++ [[ -n '' ]] + os::build::rpm::get_nvra_vars ++ uname -i + OS_RPM_ARCHITECTURE=x86_64 + os::build::version::get_vars + [[ -n '' ]] + os::build::version::git_vars + [[ -n '' ]] + os::build::version::kubernetes_vars ++ go run /data/src/github.com/openshift/origin/tools/godepversion/godepversion.go /data/src/github.com/openshift/origin/Godeps/Godeps.json k8s.io/kubernetes/pkg/api comment + KUBE_GIT_VERSION=v1.9.1-57-ga0ce1bc657 ++ go run /data/src/github.com/openshift/origin/tools/godepversion/godepversion.go /data/src/github.com/openshift/origin/Godeps/Godeps.json k8s.io/kubernetes/pkg/api + KUBE_GIT_COMMIT=a0ce1bc ++ echo v1.9.1-57-ga0ce1bc657 ++ sed 's/-\([0-9]\{1,\}\)-g\([0-9a-f]\{7,40\}\)$/\+\2/' + KUBE_GIT_VERSION=v1.9.1+a0ce1bc657 + [[ v1.9.1+a0ce1bc657 =~ ^v([0-9]+)\.([0-9]+)(\.[0-9]+)*([-].*)?$ ]] + os::build::version::etcd_vars ++ go run /data/src/github.com/openshift/origin/tools/godepversion/godepversion.go 
/data/src/github.com/openshift/origin/Godeps/Godeps.json github.com/coreos/etcd/etcdserver comment + ETCD_GIT_VERSION=v3.2.16 ++ go run /data/src/github.com/openshift/origin/tools/godepversion/godepversion.go /data/src/github.com/openshift/origin/Godeps/Godeps.json github.com/coreos/etcd/etcdserver + ETCD_GIT_COMMIT=121edf0 + git=(git --work-tree "${OS_ROOT}") + local git + [[ -n '' ]] ++ git --work-tree /data/src/github.com/openshift/origin rev-parse --short 'HEAD^{commit}' + OS_GIT_COMMIT=4253ab3 + [[ -z '' ]] ++ git --work-tree /data/src/github.com/openshift/origin status --porcelain + git_status= + [[ -z '' ]] + OS_GIT_TREE_STATE=clean + [[ -n '' ]] ++ git --work-tree /data/src/github.com/openshift/origin describe --long --tags --abbrev=7 --match 'v[0-9]*' '4253ab3^{commit}' + OS_GIT_VERSION=v3.10.0-alpha.0-549-g4253ab3 + [[ v3.10.0-alpha.0-549-g4253ab3 =~ ^v([0-9]+)\.([0-9]+)\.([0-9]+)(\.[0-9]+)*([-].*)?$ ]] + OS_GIT_MAJOR=3 + OS_GIT_MINOR=10 + OS_GIT_PATCH=0 + [[ -n -alpha.0-549-g4253ab3 ]] + OS_GIT_MINOR+=+ ++ echo v3.10.0-alpha.0-549-g4253ab3 ++ sed 's/-\([0-9]\{1,\}\)-g\([0-9a-f]\{7,40\}\)$/\+\2-\1/' + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 ++ echo v3.10.0-alpha.0+4253ab3-549 ++ sed 's/-0$//' + OS_GIT_VERSION=v3.10.0-alpha.0+4253ab3-549 + [[ clean == \d\i\r\t\y ]] + [[ v3.10.0-alpha.0+4253ab3-549 =~ ^v([0-9](\.[0-9]+)*)(.*) ]] + OS_RPM_VERSION=3.10.0 + metadata=-alpha.0+4253ab3-549 + [[ - == \+ ]] + [[ - == \- ]] + [[ -alpha.0+4253ab3-549 =~ ^-([^\+]+)\+([a-z0-9]{7,40})(-([0-9]+))?(-dirty)?$ ]] + pre_release=alpha.0 + build_sha=4253ab3 + build_num=549 + OS_RPM_RELEASE=0.alpha.0.549.4253ab3 ++ os::build::version::save_vars ++ cat ++ tr '\n' ' ' + OS_RPM_GIT_VARS='OS_GIT_COMMIT='\''4253ab3'\'' OS_GIT_TREE_STATE='\''clean'\'' OS_GIT_VERSION='\''v3.10.0-alpha.0+4253ab3-549'\'' OS_GIT_MAJOR='\''3'\'' OS_GIT_MINOR='\''10+'\'' OS_GIT_PATCH='\''0'\'' KUBE_GIT_COMMIT='\''a0ce1bc'\'' KUBE_GIT_VERSION='\''v1.9.1+a0ce1bc657'\'' ETCD_GIT_VERSION='\''v3.2.16'\'' ETCD_GIT_COMMIT='\''121edf0'\'' OS_GIT_CATALOG_VERSION='\'''\'' ' + export OS_RPM_VERSION OS_RPM_RELEASE OS_RPM_ARCHITECTURE OS_RPM_GIT_VARS + echo -3.10.0-0.alpha.0.549.4253ab3 + echo 3.10+ + sed s/+// + echo 3.10.0 + cut -d. 
-f2 ++ echo v3.10+ ++ sed s/+// + tag=v3.10 + echo v3.10 + set +o xtrace ########## FINISHED STAGE: SUCCESS: DETERMINE THE RELEASE COMMIT FOR ORIGIN IMAGES AND VERSION FOR RPMS [00h 00m 53s] ########## [workspace] $ /bin/bash /tmp/jenkins4774865298487786298.sh ########## STARTING STAGE: RUN EXTENDED TESTS ########## + [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ]] + source /var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/activate ++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66 ++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin ++ unset PYTHON_HOME ++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_networking/workspace/.config ++ mktemp + script=/tmp/tmp.e3oWWY3h6s + cat + chmod +x /tmp/tmp.e3oWWY3h6s + scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.e3oWWY3h6s openshiftdevel:/tmp/tmp.e3oWWY3h6s + ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 5000 /tmp/tmp.e3oWWY3h6s"' + cd /data/src/github.com/openshift/origin + OS_BUILD_ENV_PULL_IMAGE=true + OS_BUILD_ENV_PRESERVE=_output/local/bin/linux/amd64/extended.test + hack/env make build-extended-test [INFO] [19:51:47+0000] Pulling the openshift/origin-release:golang-1.9 image to update it... Trying to pull repository registry.access.redhat.com/openshift/origin-release ... Trying to pull repository docker.io/openshift/origin-release ... golang-1.9: Pulling from docker.io/openshift/origin-release Digest: sha256:646266f0500fc7415ee66a42f3037b4219f996a8437c191f0921ff10ed8c7ba8 Status: Image is up to date for docker.io/openshift/origin-release:golang-1.9 hack/build-go.sh test/extended/extended.test ++ Building go targets for linux/amd64: test/extended/extended.test [INFO] [19:53:03+0000] hack/build-go.sh exited with code 0 after 00h 01m 04s + OPENSHIFT_SKIP_BUILD=true + JUNIT_REPORT=true + make test-extended SUITE=networking test/extended/networking.sh [WARNING] [19:53:05+0000] Skipping rebuild of test binary due to OPENSHIFT_SKIP_BUILD=1 [INFO] [19:53:05+0000] Starting 'networking' extended tests [INFO] [19:53:05+0000] [CLEANUP] Cleaning up temporary directories [INFO] [19:53:05+0000] Building docker-in-docker images Building container images [openshift/dind] --> Image fedora:27 was not found, pulling ... [openshift/dind] --> Pulled 1/2 layers, 50% complete [openshift/dind] --> FROM fedora:27 as 0 [openshift/dind] --> ENV TERM=xterm [openshift/dind] --> ENV container=docker [openshift/dind] --> VOLUME ["/run", "/tmp"] [openshift/dind] --> STOPSIGNAL SIGRTMIN+3 [openshift/dind] --> RUN systemctl mask auditd.service console-getty.service dev-hugepages.mount dnf-makecache.service docker-storage-setup.service getty.target lvm2-lvmetad.service sys-fs-fuse-connections.mount systemd-logind.service systemd-remount-fs.service systemd-udev-hwdb-update.service systemd-udev-trigger.service systemd-udevd.service systemd-vconsole-setup.service [openshift/dind] Created symlink /etc/systemd/system/auditd.service → /dev/null. 
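The DETERMINE THE RELEASE COMMIT FOR ORIGIN IMAGES AND VERSION FOR RPMS stage that finishes above turns `git describe` output into an RPM version and release. A minimal bash sketch of that transformation, fed with the values from the trace; the function name and layout are illustrative, not the hack/lib implementation:

# Sketch only: reproduce the version munging shown in the xtrace above.
to_rpm_version_release() {
  local described="$1"                                  # e.g. v3.10.0-alpha.0-549-g4253ab3
  local semver_re='^v([0-9](\.[0-9]+)*)(.*)'
  local meta_re='^-([^+]+)\+([a-z0-9]{7,40})(-([0-9]+))?(-dirty)?$'
  # the "-<count>-g<sha>" suffix becomes "+<sha>-<count>", as in the sed call above
  local version
  version="$(echo "${described}" | sed 's/-\([0-9]\{1,\}\)-g\([0-9a-f]\{7,40\}\)$/+\2-\1/')"
  [[ "${version}" =~ $semver_re ]] || return 1
  local rpm_version="${BASH_REMATCH[1]}"                # 3.10.0
  local metadata="${BASH_REMATCH[3]}"                   # -alpha.0+4253ab3-549
  [[ "${metadata}" =~ $meta_re ]] || return 1
  local pre_release="${BASH_REMATCH[1]}" build_sha="${BASH_REMATCH[2]}" build_num="${BASH_REMATCH[4]}"
  echo "version=${rpm_version} release=0.${pre_release}.${build_num}.${build_sha}"
}
to_rpm_version_release v3.10.0-alpha.0-549-g4253ab3     # version=3.10.0 release=0.alpha.0.549.4253ab3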
[openshift/dind] Created symlink /etc/systemd/system/dnf-makecache.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/docker-storage-setup.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/lvm2-lvmetad.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/systemd-udev-hwdb-update.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/systemd-udev-trigger.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/systemd-udevd.service → /dev/null. [openshift/dind] Created symlink /etc/systemd/system/systemd-vconsole-setup.service → /dev/null. [openshift/dind] --> RUN cp /usr/lib/systemd/system/dbus.service /etc/systemd/system/; sed -i 's/OOMScoreAdjust=-900//' /etc/systemd/system/dbus.service [openshift/dind] --> RUN dnf -y update && dnf -y install docker glibc-langpack-en iptables openssh-clients openssh-server tcpdump [openshift/dind] Fedora 27 - x86_64 - Updates 58 MB/s | 22 MB 00:00 [openshift/dind] Fedora 27 - x86_64 24 MB/s | 58 MB 00:02 [openshift/dind] Last metadata expiration check: 0:00:05 ago on Thu Apr 5 19:53:24 2018. [openshift/dind] Dependencies resolved. [openshift/dind] ================================================================================ [openshift/dind] Package Arch Version Repository Size [openshift/dind] ================================================================================ [openshift/dind] Upgrading: [openshift/dind] audit-libs x86_64 2.8.3-1.fc27 updates 112 k [openshift/dind] curl x86_64 7.55.1-10.fc27 updates 312 k [openshift/dind] elfutils-default-yama-scope noarch 0.170-10.fc27 updates 41 k [openshift/dind] elfutils-libelf x86_64 0.170-10.fc27 updates 205 k [openshift/dind] elfutils-libs x86_64 0.170-10.fc27 updates 285 k [openshift/dind] glibc x86_64 2.26-27.fc27 updates 3.4 M [openshift/dind] glibc-common x86_64 2.26-27.fc27 updates 789 k [openshift/dind] glibc-langpack-en x86_64 2.26-27.fc27 updates 278 k [openshift/dind] libblkid x86_64 2.30.2-3.fc27 updates 196 k [openshift/dind] libcrypt-nss x86_64 2.26-27.fc27 updates 40 k [openshift/dind] libcurl x86_64 7.55.1-10.fc27 updates 273 k [openshift/dind] libfdisk x86_64 2.30.2-3.fc27 updates 244 k [openshift/dind] libidn2 x86_64 2.0.4-4.fc27 updates 99 k [openshift/dind] libmount x86_64 2.30.2-3.fc27 updates 219 k [openshift/dind] libreport-filesystem x86_64 2.9.3-3.fc27 updates 17 k [openshift/dind] libsmartcols x86_64 2.30.2-3.fc27 updates 160 k [openshift/dind] libsolv x86_64 0.6.34-1.fc27 updates 373 k [openshift/dind] libsss_idmap x86_64 1.16.1-2.fc27 updates 89 k [openshift/dind] libsss_nss_idmap x86_64 1.16.1-2.fc27 updates 95 k [openshift/dind] libunistring x86_64 0.9.9-1.fc27 updates 417 k [openshift/dind] libuuid x86_64 2.30.2-3.fc27 updates 82 k [openshift/dind] libzstd x86_64 1.3.4-1.fc27 updates 230 k [openshift/dind] nspr x86_64 4.19.0-1.fc27 updates 139 k [openshift/dind] nss x86_64 3.36.0-1.0.fc27 updates 849 k [openshift/dind] nss-softokn x86_64 3.36.0-1.0.fc27 updates 391 k [openshift/dind] nss-softokn-freebl x86_64 3.36.0-1.0.fc27 updates 233 k [openshift/dind] nss-sysinit x86_64 3.36.0-1.0.fc27 updates 63 k [openshift/dind] nss-tools x86_64 3.36.0-1.0.fc27 updates 521 k [openshift/dind] nss-util x86_64 3.36.0-1.0.fc27 updates 89 k [openshift/dind] openssl-libs x86_64 1:1.1.0h-1.fc27 updates 1.3 M [openshift/dind] p11-kit x86_64 0.23.10-1.fc27 updates 263 k [openshift/dind] p11-kit-trust x86_64 0.23.10-1.fc27 updates 135 k [openshift/dind] pcre x86_64 8.41-6.fc27 updates 
205 k [openshift/dind] pcre2 x86_64 10.31-3.fc27 updates 235 k [openshift/dind] publicsuffix-list-dafsa noarch 20180328-1.fc27 updates 47 k [openshift/dind] python3 x86_64 3.6.4-9.fc27 updates 69 k [openshift/dind] python3-libs x86_64 3.6.4-9.fc27 updates 8.6 M [openshift/dind] shared-mime-info x86_64 1.9-2.fc27 updates 327 k [openshift/dind] sqlite-libs x86_64 3.20.1-2.fc27 updates 537 k [openshift/dind] sssd-client x86_64 1.16.1-2.fc27 updates 144 k [openshift/dind] tzdata noarch 2018d-1.fc27 updates 457 k [openshift/dind] util-linux x86_64 2.30.2-3.fc27 updates 2.4 M [openshift/dind] vim-minimal x86_64 2:8.0.1573-1.fc27 updates 559 k [openshift/dind] Transaction Summary [openshift/dind] ================================================================================ [openshift/dind] Upgrade 43 Packages [openshift/dind] Total download size: 25 M [openshift/dind] Downloading Packages: [openshift/dind] (1/43): audit-libs-2.8.3-1.fc27.x86_64.rpm 445 kB/s | 112 kB 00:00 [openshift/dind] (2/43): libcurl-7.55.1-10.fc27.x86_64.rpm 1.0 MB/s | 273 kB 00:00 [openshift/dind] (3/43): elfutils-default-yama-scope-0.170-10.fc 3.2 MB/s | 41 kB 00:00 [openshift/dind] (4/43): curl-7.55.1-10.fc27.x86_64.rpm 1.1 MB/s | 312 kB 00:00 [openshift/dind] (5/43): elfutils-libs-0.170-10.fc27.x86_64.rpm 13 MB/s | 285 kB 00:00 [openshift/dind] (6/43): elfutils-libelf-0.170-10.fc27.x86_64.rp 7.4 MB/s | 205 kB 00:00 [openshift/dind] (7/43): glibc-langpack-en-2.26-27.fc27.x86_64.r 13 MB/s | 278 kB 00:00 [openshift/dind] (8/43): glibc-common-2.26-27.fc27.x86_64.rpm 22 MB/s | 789 kB 00:00 [openshift/dind] (9/43): libcrypt-nss-2.26-27.fc27.x86_64.rpm 2.9 MB/s | 40 kB 00:00 [openshift/dind] (10/43): libblkid-2.30.2-3.fc27.x86_64.rpm 11 MB/s | 196 kB 00:00 [openshift/dind] (11/43): libuuid-2.30.2-3.fc27.x86_64.rpm 5.3 MB/s | 82 kB 00:00 [openshift/dind] (12/43): libmount-2.30.2-3.fc27.x86_64.rpm 13 MB/s | 219 kB 00:00 [openshift/dind] (13/43): glibc-2.26-27.fc27.x86_64.rpm 31 MB/s | 3.4 MB 00:00 [openshift/dind] (14/43): libfdisk-2.30.2-3.fc27.x86_64.rpm 8.1 MB/s | 244 kB 00:00 [openshift/dind] (15/43): libsmartcols-2.30.2-3.fc27.x86_64.rpm 11 MB/s | 160 kB 00:00 [openshift/dind] (16/43): libidn2-2.0.4-4.fc27.x86_64.rpm 7.5 MB/s | 99 kB 00:00 [openshift/dind] (17/43): util-linux-2.30.2-3.fc27.x86_64.rpm 31 MB/s | 2.4 MB 00:00 [openshift/dind] (18/43): libreport-filesystem-2.9.3-3.fc27.x86_ 832 kB/s | 17 kB 00:00 [openshift/dind] (19/43): libsolv-0.6.34-1.fc27.x86_64.rpm 12 MB/s | 373 kB 00:00 [openshift/dind] (20/43): libsss_idmap-1.16.1-2.fc27.x86_64.rpm 3.9 MB/s | 89 kB 00:00 [openshift/dind] (21/43): libsss_nss_idmap-1.16.1-2.fc27.x86_64. 
4.1 MB/s | 95 kB 00:00 [openshift/dind] (22/43): libunistring-0.9.9-1.fc27.x86_64.rpm 17 MB/s | 417 kB 00:00 [openshift/dind] (23/43): nspr-4.19.0-1.fc27.x86_64.rpm 9.1 MB/s | 139 kB 00:00 [openshift/dind] (24/43): libzstd-1.3.4-1.fc27.x86_64.rpm 9.5 MB/s | 230 kB 00:00 [openshift/dind] (25/43): nss-softokn-3.36.0-1.0.fc27.x86_64.rpm 20 MB/s | 391 kB 00:00 [openshift/dind] (26/43): nss-util-3.36.0-1.0.fc27.x86_64.rpm 6.5 MB/s | 89 kB 00:00 [openshift/dind] (27/43): nss-3.36.0-1.0.fc27.x86_64.rpm 23 MB/s | 849 kB 00:00 [openshift/dind] (28/43): nss-softokn-freebl-3.36.0-1.0.fc27.x86 12 MB/s | 233 kB 00:00 [openshift/dind] (29/43): nss-tools-3.36.0-1.0.fc27.x86_64.rpm 22 MB/s | 521 kB 00:00 [openshift/dind] (30/43): nss-sysinit-3.36.0-1.0.fc27.x86_64.rpm 4.0 MB/s | 63 kB 00:00 [openshift/dind] (31/43): p11-kit-0.23.10-1.fc27.x86_64.rpm 15 MB/s | 263 kB 00:00 [openshift/dind] (32/43): p11-kit-trust-0.23.10-1.fc27.x86_64.rp 8.5 MB/s | 135 kB 00:00 [openshift/dind] (33/43): openssl-libs-1.1.0h-1.fc27.x86_64.rpm 32 MB/s | 1.3 MB 00:00 [openshift/dind] (34/43): pcre-8.41-6.fc27.x86_64.rpm 11 MB/s | 205 kB 00:00 [openshift/dind] (35/43): pcre2-10.31-3.fc27.x86_64.rpm 9.5 MB/s | 235 kB 00:00 [openshift/dind] (36/43): publicsuffix-list-dafsa-20180328-1.fc2 3.3 MB/s | 47 kB 00:00 [openshift/dind] (37/43): python3-3.6.4-9.fc27.x86_64.rpm 3.9 MB/s | 69 kB 00:00 [openshift/dind] (38/43): shared-mime-info-1.9-2.fc27.x86_64.rpm 18 MB/s | 327 kB 00:00 [openshift/dind] (39/43): sqlite-libs-3.20.1-2.fc27.x86_64.rpm 21 MB/s | 537 kB 00:00 [openshift/dind] (40/43): sssd-client-1.16.1-2.fc27.x86_64.rpm 7.9 MB/s | 144 kB 00:00 [openshift/dind] (41/43): tzdata-2018d-1.fc27.noarch.rpm 19 MB/s | 457 kB 00:00 [openshift/dind] (42/43): vim-minimal-8.0.1573-1.fc27.x86_64.rpm 19 MB/s | 559 kB 00:00 [openshift/dind] (43/43): python3-libs-3.6.4-9.fc27.x86_64.rpm 52 MB/s | 8.6 MB 00:00 [openshift/dind] -------------------------------------------------------------------------------- [openshift/dind] Total 24 MB/s | 25 MB 00:01 [openshift/dind] Running transaction check [openshift/dind] Transaction check succeeded. [openshift/dind] Running transaction test [openshift/dind] Transaction test succeeded. 
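Before the transaction output continues below, it is worth noting how this stage reached the remote host in the first place. Near the top of the RUN EXTENDED TESTS stage the job writes a temporary script, copies it over with scp, and runs it over ssh under a 5000-second timeout. A minimal sketch of that dispatch pattern, with the host alias and ssh config path taken from the log; the script body here is a condensed stand-in for the generated one:

# Sketch only: copy a generated script to the remote host and run it under a timeout.
script="$(mktemp)"
cat > "${script}" <<'EOF'
cd /data/src/github.com/openshift/origin
make test-extended SUITE=networking
EOF
chmod +x "${script}"
scp -F ./.config/origin-ci-tool/inventory/.ssh_config "${script}" "openshiftdevel:${script}"
ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel "bash -l -c \"timeout 5000 ${script}\""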
[openshift/dind] Running transaction [openshift/dind] Preparing : 1/1 [openshift/dind] Running scriptlet: tzdata-2018d-1.fc27.noarch 1/1 [openshift/dind] Upgrading : tzdata-2018d-1.fc27.noarch 1/86 [openshift/dind] Upgrading : glibc-common-2.26-27.fc27.x86_64 2/86 [openshift/dind] Upgrading : glibc-langpack-en-2.26-27.fc27.x86_64 3/86 [openshift/dind] Running scriptlet: glibc-2.26-27.fc27.x86_64 4/86 [openshift/dind] Upgrading : glibc-2.26-27.fc27.x86_64 4/86 [openshift/dind] Running scriptlet: glibc-2.26-27.fc27.x86_64 4/86 [openshift/dind] Upgrading : nspr-4.19.0-1.fc27.x86_64 5/86 [openshift/dind] Running scriptlet: nspr-4.19.0-1.fc27.x86_64 5/86 [openshift/dind] Upgrading : nss-util-3.36.0-1.0.fc27.x86_64 6/86 [openshift/dind] Running scriptlet: nss-util-3.36.0-1.0.fc27.x86_64 6/86 [openshift/dind] Upgrading : libuuid-2.30.2-3.fc27.x86_64 7/86 [openshift/dind] Running scriptlet: libuuid-2.30.2-3.fc27.x86_64 7/86 [openshift/dind] Upgrading : libblkid-2.30.2-3.fc27.x86_64 8/86 [openshift/dind] Running scriptlet: libblkid-2.30.2-3.fc27.x86_64 8/86 [openshift/dind] Upgrading : openssl-libs-1:1.1.0h-1.fc27.x86_64 9/86 [openshift/dind] Running scriptlet: openssl-libs-1:1.1.0h-1.fc27.x86_64 9/86 [openshift/dind] Upgrading : nss-softokn-freebl-3.36.0-1.0.fc27.x86_64 10/86 [openshift/dind] Upgrading : libcrypt-nss-2.26-27.fc27.x86_64 11/86 [openshift/dind] Running scriptlet: libcrypt-nss-2.26-27.fc27.x86_64 11/86 [openshift/dind] Upgrading : sqlite-libs-3.20.1-2.fc27.x86_64 12/86 [openshift/dind] Running scriptlet: sqlite-libs-3.20.1-2.fc27.x86_64 12/86 [openshift/dind] Upgrading : nss-softokn-3.36.0-1.0.fc27.x86_64 13/86 [openshift/dind] Running scriptlet: nss-softokn-3.36.0-1.0.fc27.x86_64 13/86 [openshift/dind] Upgrading : nss-sysinit-3.36.0-1.0.fc27.x86_64 14/86 [openshift/dind] Upgrading : nss-3.36.0-1.0.fc27.x86_64 15/86 [openshift/dind] Running scriptlet: nss-3.36.0-1.0.fc27.x86_64 15/86 [openshift/dind] Upgrading : python3-3.6.4-9.fc27.x86_64 16/86 [openshift/dind] Running scriptlet: python3-3.6.4-9.fc27.x86_64 16/86 [openshift/dind] Upgrading : python3-libs-3.6.4-9.fc27.x86_64 17/86 [openshift/dind] Running scriptlet: python3-libs-3.6.4-9.fc27.x86_64 17/86 [openshift/dind] Upgrading : libmount-2.30.2-3.fc27.x86_64 18/86 [openshift/dind] Running scriptlet: libmount-2.30.2-3.fc27.x86_64 18/86 [openshift/dind] Upgrading : libfdisk-2.30.2-3.fc27.x86_64 19/86 [openshift/dind] Running scriptlet: libfdisk-2.30.2-3.fc27.x86_64 19/86 [openshift/dind] Upgrading : audit-libs-2.8.3-1.fc27.x86_64 20/86 [openshift/dind] Running scriptlet: audit-libs-2.8.3-1.fc27.x86_64 20/86 [openshift/dind] Upgrading : elfutils-libelf-0.170-10.fc27.x86_64 21/86 [openshift/dind] Running scriptlet: elfutils-libelf-0.170-10.fc27.x86_64 21/86 [openshift/dind] Upgrading : libsmartcols-2.30.2-3.fc27.x86_64 22/86 [openshift/dind] Running scriptlet: libsmartcols-2.30.2-3.fc27.x86_64 22/86 [openshift/dind] Upgrading : libsss_idmap-1.16.1-2.fc27.x86_64 23/86 [openshift/dind] Running scriptlet: libsss_idmap-1.16.1-2.fc27.x86_64 23/86 [openshift/dind] Upgrading : libsss_nss_idmap-1.16.1-2.fc27.x86_64 24/86 [openshift/dind] Running scriptlet: libsss_nss_idmap-1.16.1-2.fc27.x86_64 24/86 [openshift/dind] Upgrading : libunistring-0.9.9-1.fc27.x86_64 25/86 [openshift/dind] Running scriptlet: libunistring-0.9.9-1.fc27.x86_64 25/86 [openshift/dind] Upgrading : libidn2-2.0.4-4.fc27.x86_64 26/86 [openshift/dind] Running scriptlet: libidn2-2.0.4-4.fc27.x86_64 26/86 [openshift/dind] install-info: No such file or directory for 
/usr/share/info/libidn2.info.gz [openshift/dind] Upgrading : libcurl-7.55.1-10.fc27.x86_64 27/86 [openshift/dind] Running scriptlet: libcurl-7.55.1-10.fc27.x86_64 27/86 [openshift/dind] Upgrading : p11-kit-0.23.10-1.fc27.x86_64 28/86 [openshift/dind] Running scriptlet: p11-kit-0.23.10-1.fc27.x86_64 28/86 [openshift/dind] Upgrading : elfutils-default-yama-scope-0.170-10.fc27.noarch 29/86 [openshift/dind] Running scriptlet: elfutils-default-yama-scope-0.170-10.fc27.noarch 29/86 [openshift/dind] Upgrading : elfutils-libs-0.170-10.fc27.x86_64 30/86 [openshift/dind] Running scriptlet: elfutils-libs-0.170-10.fc27.x86_64 30/86 [openshift/dind] Upgrading : p11-kit-trust-0.23.10-1.fc27.x86_64 31/86 [openshift/dind] Running scriptlet: p11-kit-trust-0.23.10-1.fc27.x86_64 31/86 [openshift/dind] Upgrading : curl-7.55.1-10.fc27.x86_64 32/86 [openshift/dind] Upgrading : sssd-client-1.16.1-2.fc27.x86_64 33/86 [openshift/dind] Running scriptlet: sssd-client-1.16.1-2.fc27.x86_64 33/86 [openshift/dind] Upgrading : util-linux-2.30.2-3.fc27.x86_64 34/86 [openshift/dind] Running scriptlet: util-linux-2.30.2-3.fc27.x86_64 34/86 [openshift/dind] Upgrading : nss-tools-3.36.0-1.0.fc27.x86_64 35/86 [openshift/dind] Upgrading : libsolv-0.6.34-1.fc27.x86_64 36/86 [openshift/dind] Running scriptlet: libsolv-0.6.34-1.fc27.x86_64 36/86 [openshift/dind] Upgrading : libzstd-1.3.4-1.fc27.x86_64 37/86 [openshift/dind] Running scriptlet: libzstd-1.3.4-1.fc27.x86_64 37/86 [openshift/dind] Upgrading : pcre-8.41-6.fc27.x86_64 38/86 [openshift/dind] Running scriptlet: pcre-8.41-6.fc27.x86_64 38/86 [openshift/dind] Upgrading : pcre2-10.31-3.fc27.x86_64 39/86 [openshift/dind] Running scriptlet: pcre2-10.31-3.fc27.x86_64 39/86 [openshift/dind] Upgrading : shared-mime-info-1.9-2.fc27.x86_64 40/86 [openshift/dind] Running scriptlet: shared-mime-info-1.9-2.fc27.x86_64 40/86 [openshift/dind] Upgrading : vim-minimal-2:8.0.1573-1.fc27.x86_64 41/86 [openshift/dind] Upgrading : publicsuffix-list-dafsa-20180328-1.fc27.noarch 42/86 [openshift/dind] Upgrading : libreport-filesystem-2.9.3-3.fc27.x86_64 43/86 [openshift/dind] Cleanup : nss-tools-3.35.0-1.1.fc27.x86_64 44/86 [openshift/dind] Cleanup : util-linux-2.30.2-1.fc27.x86_64 45/86 [openshift/dind] Cleanup : nss-3.35.0-1.1.fc27.x86_64 46/86 [openshift/dind] Running scriptlet: nss-3.35.0-1.1.fc27.x86_64 46/86 [openshift/dind] Cleanup : nss-softokn-3.35.0-1.0.fc27.x86_64 47/86 [openshift/dind] Running scriptlet: nss-softokn-3.35.0-1.0.fc27.x86_64 47/86 [openshift/dind] Cleanup : libmount-2.30.2-1.fc27.x86_64 48/86 [openshift/dind] Running scriptlet: libmount-2.30.2-1.fc27.x86_64 48/86 [openshift/dind] Cleanup : elfutils-libs-0.170-1.fc27.x86_64 49/86 [openshift/dind] Running scriptlet: elfutils-libs-0.170-1.fc27.x86_64 49/86 [openshift/dind] Cleanup : libfdisk-2.30.2-1.fc27.x86_64 50/86 [openshift/dind] Running scriptlet: libfdisk-2.30.2-1.fc27.x86_64 50/86 [openshift/dind] Cleanup : p11-kit-trust-0.23.9-2.fc27.x86_64 51/86 [openshift/dind] Running scriptlet: p11-kit-trust-0.23.9-2.fc27.x86_64 51/86 [openshift/dind] Cleanup : p11-kit-0.23.9-2.fc27.x86_64 52/86 [openshift/dind] Running scriptlet: p11-kit-0.23.9-2.fc27.x86_64 52/86 [openshift/dind] Cleanup : libblkid-2.30.2-1.fc27.x86_64 53/86 [openshift/dind] Running scriptlet: libblkid-2.30.2-1.fc27.x86_64 53/86 [openshift/dind] Running scriptlet: sssd-client-1.16.0-6.fc27.x86_64 54/86 [openshift/dind] Cleanup : sssd-client-1.16.0-6.fc27.x86_64 54/86 [openshift/dind] Running scriptlet: sssd-client-1.16.0-6.fc27.x86_64 54/86 [openshift/dind] 
Cleanup : curl-7.55.1-9.fc27.x86_64 55/86 [openshift/dind] Cleanup : elfutils-default-yama-scope-0.170-1.fc27.noarch 56/86 [openshift/dind] Cleanup : publicsuffix-list-dafsa-20180223-1.fc27.noarch 57/86 [openshift/dind] Cleanup : libreport-filesystem-2.9.3-2.fc27.x86_64 58/86 [openshift/dind] Cleanup : libcurl-7.55.1-9.fc27.x86_64 59/86 [openshift/dind] Running scriptlet: libcurl-7.55.1-9.fc27.x86_64 59/86 [openshift/dind] Cleanup : libsss_nss_idmap-1.16.0-6.fc27.x86_64 60/86 [openshift/dind] Running scriptlet: libsss_nss_idmap-1.16.0-6.fc27.x86_64 60/86 [openshift/dind] Cleanup : libsmartcols-2.30.2-1.fc27.x86_64 61/86 [openshift/dind] Running scriptlet: libsmartcols-2.30.2-1.fc27.x86_64 61/86 [openshift/dind] Cleanup : nss-sysinit-3.35.0-1.1.fc27.x86_64 62/86 [openshift/dind] Cleanup : libsss_idmap-1.16.0-6.fc27.x86_64 63/86 [openshift/dind] Running scriptlet: libsss_idmap-1.16.0-6.fc27.x86_64 63/86 [openshift/dind] Cleanup : libuuid-2.30.2-1.fc27.x86_64 64/86 [openshift/dind] Running scriptlet: libuuid-2.30.2-1.fc27.x86_64 64/86 [openshift/dind] Cleanup : pcre2-10.31-1.fc27.x86_64 65/86 [openshift/dind] Running scriptlet: pcre2-10.31-1.fc27.x86_64 65/86 [openshift/dind] Cleanup : pcre-8.41-5.fc27.x86_64 66/86 [openshift/dind] Running scriptlet: pcre-8.41-5.fc27.x86_64 66/86 [openshift/dind] Cleanup : libsolv-0.6.33-1.fc27.x86_64 67/86 [openshift/dind] Running scriptlet: libsolv-0.6.33-1.fc27.x86_64 67/86 [openshift/dind] Running scriptlet: libidn2-2.0.4-3.fc27.x86_64 68/86 [openshift/dind] Cleanup : libidn2-2.0.4-3.fc27.x86_64 68/86 [openshift/dind] Running scriptlet: libidn2-2.0.4-3.fc27.x86_64 68/86 [openshift/dind] Cleanup : elfutils-libelf-0.170-1.fc27.x86_64 69/86 [openshift/dind] Running scriptlet: elfutils-libelf-0.170-1.fc27.x86_64 69/86 [openshift/dind] Cleanup : audit-libs-2.8.2-1.fc27.x86_64 70/86 [openshift/dind] Running scriptlet: audit-libs-2.8.2-1.fc27.x86_64 70/86 [openshift/dind] Cleanup : python3-3.6.4-8.fc27.x86_64 71/86 [openshift/dind] Running scriptlet: python3-3.6.4-8.fc27.x86_64 71/86 [openshift/dind] Cleanup : python3-libs-3.6.4-8.fc27.x86_64 72/86 [openshift/dind] Running scriptlet: python3-libs-3.6.4-8.fc27.x86_64 72/86 [openshift/dind] Cleanup : openssl-libs-1:1.1.0g-1.fc27.x86_64 73/86 [openshift/dind] Running scriptlet: openssl-libs-1:1.1.0g-1.fc27.x86_64 73/86 [openshift/dind] Cleanup : sqlite-libs-3.20.1-1.fc27.x86_64 74/86 [openshift/dind] Running scriptlet: sqlite-libs-3.20.1-1.fc27.x86_64 74/86 [openshift/dind] Cleanup : libunistring-0.9.7-3.fc27.x86_64 75/86 [openshift/dind] Running scriptlet: libunistring-0.9.7-3.fc27.x86_64 75/86 [openshift/dind] Cleanup : vim-minimal-2:8.0.1553-1.fc27.x86_64 76/86 [openshift/dind] Cleanup : libcrypt-nss-2.26-26.fc27.x86_64 77/86 [openshift/dind] Running scriptlet: libcrypt-nss-2.26-26.fc27.x86_64 77/86 [openshift/dind] Cleanup : nss-softokn-freebl-3.35.0-1.0.fc27.x86_64 78/86 [openshift/dind] Cleanup : nss-util-3.35.0-1.0.fc27.x86_64 79/86 [openshift/dind] Running scriptlet: nss-util-3.35.0-1.0.fc27.x86_64 79/86 [openshift/dind] Cleanup : nspr-4.18.0-1.fc27.x86_64 80/86 [openshift/dind] Running scriptlet: nspr-4.18.0-1.fc27.x86_64 80/86 [openshift/dind] Cleanup : libzstd-1.3.3-1.fc27.x86_64 81/86 [openshift/dind] Running scriptlet: libzstd-1.3.3-1.fc27.x86_64 81/86 [openshift/dind] Cleanup : shared-mime-info-1.9-1.fc27.x86_64 82/86 [openshift/dind] Cleanup : glibc-2.26-26.fc27.x86_64 83/86 [openshift/dind] Running scriptlet: glibc-2.26-26.fc27.x86_64 83/86 [openshift/dind] Cleanup : 
glibc-langpack-en-2.26-26.fc27.x86_64 84/86 [openshift/dind] Cleanup : glibc-common-2.26-26.fc27.x86_64 85/86 [openshift/dind] Cleanup : tzdata-2018c-1.fc27.noarch 86/86 [openshift/dind] Running scriptlet: python3-3.6.4-9.fc27.x86_64 86/86 [openshift/dind] Running scriptlet: tzdata-2018c-1.fc27.noarch 86/86Failed to connect to bus: No such file or directory [openshift/dind] [openshift/dind] Running scriptlet: shared-mime-info-1.9-2.fc27.x86_64 86/86 [openshift/dind] Verifying : audit-libs-2.8.3-1.fc27.x86_64 1/86 [openshift/dind] Verifying : curl-7.55.1-10.fc27.x86_64 2/86 [openshift/dind] Verifying : libcurl-7.55.1-10.fc27.x86_64 3/86 [openshift/dind] Verifying : elfutils-default-yama-scope-0.170-10.fc27.noarch 4/86 [openshift/dind] Verifying : elfutils-libelf-0.170-10.fc27.x86_64 5/86 [openshift/dind] Verifying : elfutils-libs-0.170-10.fc27.x86_64 6/86 [openshift/dind] Verifying : glibc-2.26-27.fc27.x86_64 7/86 [openshift/dind] Verifying : glibc-common-2.26-27.fc27.x86_64 8/86 [openshift/dind] Verifying : glibc-langpack-en-2.26-27.fc27.x86_64 9/86 [openshift/dind] Verifying : libcrypt-nss-2.26-27.fc27.x86_64 10/86 [openshift/dind] Verifying : libblkid-2.30.2-3.fc27.x86_64 11/86 [openshift/dind] Verifying : libuuid-2.30.2-3.fc27.x86_64 12/86 [openshift/dind] Verifying : libmount-2.30.2-3.fc27.x86_64 13/86 [openshift/dind] Verifying : util-linux-2.30.2-3.fc27.x86_64 14/86 [openshift/dind] Verifying : libfdisk-2.30.2-3.fc27.x86_64 15/86 [openshift/dind] Verifying : libsmartcols-2.30.2-3.fc27.x86_64 16/86 [openshift/dind] Verifying : libidn2-2.0.4-4.fc27.x86_64 17/86 [openshift/dind] Verifying : libreport-filesystem-2.9.3-3.fc27.x86_64 18/86 [openshift/dind] Verifying : libsolv-0.6.34-1.fc27.x86_64 19/86 [openshift/dind] Verifying : libsss_idmap-1.16.1-2.fc27.x86_64 20/86 [openshift/dind] Verifying : libsss_nss_idmap-1.16.1-2.fc27.x86_64 21/86 [openshift/dind] Verifying : libunistring-0.9.9-1.fc27.x86_64 22/86 [openshift/dind] Verifying : libzstd-1.3.4-1.fc27.x86_64 23/86 [openshift/dind] Verifying : nspr-4.19.0-1.fc27.x86_64 24/86 [openshift/dind] Verifying : nss-3.36.0-1.0.fc27.x86_64 25/86 [openshift/dind] Verifying : nss-softokn-3.36.0-1.0.fc27.x86_64 26/86 [openshift/dind] Verifying : nss-util-3.36.0-1.0.fc27.x86_64 27/86 [openshift/dind] Verifying : nss-softokn-freebl-3.36.0-1.0.fc27.x86_64 28/86 [openshift/dind] Verifying : nss-tools-3.36.0-1.0.fc27.x86_64 29/86 [openshift/dind] Verifying : nss-sysinit-3.36.0-1.0.fc27.x86_64 30/86 [openshift/dind] Verifying : openssl-libs-1:1.1.0h-1.fc27.x86_64 31/86 [openshift/dind] Verifying : p11-kit-0.23.10-1.fc27.x86_64 32/86 [openshift/dind] Verifying : p11-kit-trust-0.23.10-1.fc27.x86_64 33/86 [openshift/dind] Verifying : pcre-8.41-6.fc27.x86_64 34/86 [openshift/dind] Verifying : pcre2-10.31-3.fc27.x86_64 35/86 [openshift/dind] Verifying : publicsuffix-list-dafsa-20180328-1.fc27.noarch 36/86 [openshift/dind] Verifying : python3-3.6.4-9.fc27.x86_64 37/86 [openshift/dind] Verifying : python3-libs-3.6.4-9.fc27.x86_64 38/86 [openshift/dind] Verifying : shared-mime-info-1.9-2.fc27.x86_64 39/86 [openshift/dind] Verifying : sqlite-libs-3.20.1-2.fc27.x86_64 40/86 [openshift/dind] Verifying : sssd-client-1.16.1-2.fc27.x86_64 41/86 [openshift/dind] Verifying : tzdata-2018d-1.fc27.noarch 42/86 [openshift/dind] Verifying : vim-minimal-2:8.0.1573-1.fc27.x86_64 43/86 [openshift/dind] Verifying : pcre-8.41-5.fc27.x86_64 44/86 [openshift/dind] Verifying : pcre2-10.31-1.fc27.x86_64 45/86 [openshift/dind] Verifying : audit-libs-2.8.2-1.fc27.x86_64 46/86 
[openshift/dind] Verifying : publicsuffix-list-dafsa-20180223-1.fc27.noarch 47/86 [openshift/dind] Verifying : python3-3.6.4-8.fc27.x86_64 48/86 [openshift/dind] Verifying : python3-libs-3.6.4-8.fc27.x86_64 49/86 [openshift/dind] Verifying : curl-7.55.1-9.fc27.x86_64 50/86 [openshift/dind] Verifying : elfutils-default-yama-scope-0.170-1.fc27.noarch 51/86 [openshift/dind] Verifying : elfutils-libelf-0.170-1.fc27.x86_64 52/86 [openshift/dind] Verifying : elfutils-libs-0.170-1.fc27.x86_64 53/86 [openshift/dind] Verifying : shared-mime-info-1.9-1.fc27.x86_64 54/86 [openshift/dind] Verifying : sqlite-libs-3.20.1-1.fc27.x86_64 55/86 [openshift/dind] Verifying : sssd-client-1.16.0-6.fc27.x86_64 56/86 [openshift/dind] Verifying : tzdata-2018c-1.fc27.noarch 57/86 [openshift/dind] Verifying : util-linux-2.30.2-1.fc27.x86_64 58/86 [openshift/dind] Verifying : glibc-2.26-26.fc27.x86_64 59/86 [openshift/dind] Verifying : glibc-common-2.26-26.fc27.x86_64 60/86 [openshift/dind] Verifying : glibc-langpack-en-2.26-26.fc27.x86_64 61/86 [openshift/dind] Verifying : vim-minimal-2:8.0.1553-1.fc27.x86_64 62/86 [openshift/dind] Verifying : libblkid-2.30.2-1.fc27.x86_64 63/86 [openshift/dind] Verifying : libcrypt-nss-2.26-26.fc27.x86_64 64/86 [openshift/dind] Verifying : libcurl-7.55.1-9.fc27.x86_64 65/86 [openshift/dind] Verifying : libfdisk-2.30.2-1.fc27.x86_64 66/86 [openshift/dind] Verifying : libidn2-2.0.4-3.fc27.x86_64 67/86 [openshift/dind] Verifying : libmount-2.30.2-1.fc27.x86_64 68/86 [openshift/dind] Verifying : libreport-filesystem-2.9.3-2.fc27.x86_64 69/86 [openshift/dind] Verifying : libsmartcols-2.30.2-1.fc27.x86_64 70/86 [openshift/dind] Verifying : libsolv-0.6.33-1.fc27.x86_64 71/86 [openshift/dind] Verifying : libsss_idmap-1.16.0-6.fc27.x86_64 72/86 [openshift/dind] Verifying : libsss_nss_idmap-1.16.0-6.fc27.x86_64 73/86 [openshift/dind] Verifying : libunistring-0.9.7-3.fc27.x86_64 74/86 [openshift/dind] Verifying : libuuid-2.30.2-1.fc27.x86_64 75/86 [openshift/dind] Verifying : libzstd-1.3.3-1.fc27.x86_64 76/86 [openshift/dind] Verifying : nspr-4.18.0-1.fc27.x86_64 77/86 [openshift/dind] Verifying : nss-3.35.0-1.1.fc27.x86_64 78/86 [openshift/dind] Verifying : nss-softokn-3.35.0-1.0.fc27.x86_64 79/86 [openshift/dind] Verifying : nss-softokn-freebl-3.35.0-1.0.fc27.x86_64 80/86 [openshift/dind] Verifying : nss-sysinit-3.35.0-1.1.fc27.x86_64 81/86 [openshift/dind] Verifying : nss-tools-3.35.0-1.1.fc27.x86_64 82/86 [openshift/dind] Verifying : nss-util-3.35.0-1.0.fc27.x86_64 83/86 [openshift/dind] Verifying : openssl-libs-1:1.1.0g-1.fc27.x86_64 84/86 [openshift/dind] Verifying : p11-kit-0.23.9-2.fc27.x86_64 85/86 [openshift/dind] Verifying : p11-kit-trust-0.23.9-2.fc27.x86_64 86/86 [openshift/dind] Upgraded: [openshift/dind] audit-libs.x86_64 2.8.3-1.fc27 [openshift/dind] curl.x86_64 7.55.1-10.fc27 [openshift/dind] elfutils-default-yama-scope.noarch 0.170-10.fc27 [openshift/dind] elfutils-libelf.x86_64 0.170-10.fc27 [openshift/dind] elfutils-libs.x86_64 0.170-10.fc27 [openshift/dind] glibc.x86_64 2.26-27.fc27 [openshift/dind] glibc-common.x86_64 2.26-27.fc27 [openshift/dind] glibc-langpack-en.x86_64 2.26-27.fc27 [openshift/dind] libblkid.x86_64 2.30.2-3.fc27 [openshift/dind] libcrypt-nss.x86_64 2.26-27.fc27 [openshift/dind] libcurl.x86_64 7.55.1-10.fc27 [openshift/dind] libfdisk.x86_64 2.30.2-3.fc27 [openshift/dind] libidn2.x86_64 2.0.4-4.fc27 [openshift/dind] libmount.x86_64 2.30.2-3.fc27 [openshift/dind] libreport-filesystem.x86_64 2.9.3-3.fc27 [openshift/dind] libsmartcols.x86_64 2.30.2-3.fc27 
[openshift/dind] libsolv.x86_64 0.6.34-1.fc27 [openshift/dind] libsss_idmap.x86_64 1.16.1-2.fc27 [openshift/dind] libsss_nss_idmap.x86_64 1.16.1-2.fc27 [openshift/dind] libunistring.x86_64 0.9.9-1.fc27 [openshift/dind] libuuid.x86_64 2.30.2-3.fc27 [openshift/dind] libzstd.x86_64 1.3.4-1.fc27 [openshift/dind] nspr.x86_64 4.19.0-1.fc27 [openshift/dind] nss.x86_64 3.36.0-1.0.fc27 [openshift/dind] nss-softokn.x86_64 3.36.0-1.0.fc27 [openshift/dind] nss-softokn-freebl.x86_64 3.36.0-1.0.fc27 [openshift/dind] nss-sysinit.x86_64 3.36.0-1.0.fc27 [openshift/dind] nss-tools.x86_64 3.36.0-1.0.fc27 [openshift/dind] nss-util.x86_64 3.36.0-1.0.fc27 [openshift/dind] openssl-libs.x86_64 1:1.1.0h-1.fc27 [openshift/dind] p11-kit.x86_64 0.23.10-1.fc27 [openshift/dind] p11-kit-trust.x86_64 0.23.10-1.fc27 [openshift/dind] pcre.x86_64 8.41-6.fc27 [openshift/dind] pcre2.x86_64 10.31-3.fc27 [openshift/dind] publicsuffix-list-dafsa.noarch 20180328-1.fc27 [openshift/dind] python3.x86_64 3.6.4-9.fc27 [openshift/dind] python3-libs.x86_64 3.6.4-9.fc27 [openshift/dind] shared-mime-info.x86_64 1.9-2.fc27 [openshift/dind] sqlite-libs.x86_64 3.20.1-2.fc27 [openshift/dind] sssd-client.x86_64 1.16.1-2.fc27 [openshift/dind] tzdata.noarch 2018d-1.fc27 [openshift/dind] util-linux.x86_64 2.30.2-3.fc27 [openshift/dind] vim-minimal.x86_64 2:8.0.1573-1.fc27 [openshift/dind] Complete! [openshift/dind] Last metadata expiration check: 0:02:33 ago on Thu Apr 5 19:53:24 2018. [openshift/dind] Package glibc-langpack-en-2.26-27.fc27.x86_64 is already installed, skipping. [openshift/dind] Dependencies resolved. [openshift/dind] ================================================================================ [openshift/dind] Package Arch Version Repository [openshift/dind] Size [openshift/dind] ================================================================================ [openshift/dind] Installing: [openshift/dind] docker x86_64 2:1.13.1-51.git4032bd5.fc27 updates 20 M [openshift/dind] iptables x86_64 1.6.1-4.fc27 fedora 471 k [openshift/dind] openssh-clients x86_64 7.6p1-5.fc27 updates 671 k [openshift/dind] openssh-server x86_64 7.6p1-5.fc27 updates 470 k [openshift/dind] tcpdump x86_64 14:4.9.1-3.fc27 fedora 460 k [openshift/dind] Installing dependencies: [openshift/dind] atomic-registries x86_64 1.22.1-1.fc27 updates 39 k [openshift/dind] audit x86_64 2.8.3-1.fc27 updates 255 k [openshift/dind] audit-libs-python3 x86_64 2.8.3-1.fc27 updates 82 k [openshift/dind] checkpolicy x86_64 2.7-2.fc27 updates 330 k [openshift/dind] container-selinux noarch 2:2.55-1.fc27 updates 39 k [openshift/dind] container-storage-setup noarch 0.8.0-2.git1d27ecf.fc27 updates 36 k [openshift/dind] device-mapper-event x86_64 1.02.144-1.fc27 updates 254 k [openshift/dind] device-mapper-event-libs x86_64 1.02.144-1.fc27 updates 254 k [openshift/dind] device-mapper-persistent-data x86_64 0.7.5-1.fc27 updates 429 k [openshift/dind] docker-common x86_64 2:1.13.1-51.git4032bd5.fc27 updates 87 k [openshift/dind] docker-rhel-push-plugin x86_64 2:1.13.1-51.git4032bd5.fc27 updates 1.7 M [openshift/dind] fipscheck x86_64 1.5.0-3.fc27 fedora 25 k [openshift/dind] fipscheck-lib x86_64 1.5.0-3.fc27 fedora 14 k [openshift/dind] gnupg x86_64 1.4.22-3.fc27 fedora 1.3 M [openshift/dind] kmod x86_64 25-1.fc27 updates 115 k [openshift/dind] libaio x86_64 0.3.110-9.fc27 fedora 29 k [openshift/dind] libcgroup x86_64 0.41-13.fc27 fedora 67 k [openshift/dind] libedit x86_64 3.1-20.20170329cvs.fc27 fedora 99 k [openshift/dind] libmnl x86_64 1.0.4-4.fc27 fedora 28 k [openshift/dind] 
libnet x86_64 1.1.6-14.fc27 fedora 65 k [openshift/dind] libnetfilter_conntrack x86_64 1.0.6-4.fc27 fedora 62 k [openshift/dind] libnfnetlink x86_64 1.0.1-11.fc27 fedora 31 k [openshift/dind] libnl3 x86_64 3.4.0-1.fc27 updates 304 k [openshift/dind] libselinux-python3 x86_64 2.7-3.fc27 updates 251 k [openshift/dind] libselinux-utils x86_64 2.7-3.fc27 updates 162 k [openshift/dind] libsemanage-python3 x86_64 2.7-2.fc27 updates 122 k [openshift/dind] libstdc++ x86_64 7.3.1-5.fc27 updates 482 k [openshift/dind] libusb x86_64 1:0.1.5-10.fc27 fedora 40 k [openshift/dind] libyaml x86_64 0.1.7-4.fc27 fedora 58 k [openshift/dind] lvm2 x86_64 2.02.175-1.fc27 updates 1.4 M [openshift/dind] lvm2-libs x86_64 2.02.175-1.fc27 updates 1.1 M [openshift/dind] oci-umount x86_64 2:2.3.3-1.gite3c9055.fc27 updates 35 k [openshift/dind] openssh x86_64 7.6p1-5.fc27 updates 501 k [openshift/dind] parted x86_64 3.2-28.fc27 fedora 549 k [openshift/dind] policycoreutils x86_64 2.7-6.fc27 updates 707 k [openshift/dind] policycoreutils-python-utils x86_64 2.7-6.fc27 updates 225 k [openshift/dind] policycoreutils-python3 x86_64 2.7-6.fc27 updates 1.8 M [openshift/dind] protobuf-c x86_64 1.2.1-7.fc27 fedora 34 k [openshift/dind] python3-IPy noarch 0.81-20.fc27 fedora 42 k [openshift/dind] python3-PyYAML x86_64 3.12-5.fc27 fedora 182 k [openshift/dind] python3-pytoml noarch 0.1.14-2.git7dea353.fc27 fedora 23 k [openshift/dind] selinux-policy noarch 3.13.1-283.30.fc27 updates 538 k [openshift/dind] selinux-policy-targeted noarch 3.13.1-283.30.fc27 updates 10 M [openshift/dind] setools-python3 x86_64 4.1.1-3.fc27 fedora 585 k [openshift/dind] skopeo-containers x86_64 0.1.28-1.git0270e56.fc27 updates 17 k [openshift/dind] subscription-manager-rhsm-certificates [openshift/dind] x86_64 1.21.2-3.fc27 updates 205 k [openshift/dind] systemd-container x86_64 234-10.git5f8984e.fc27 updates 425 k [openshift/dind] tcp_wrappers-libs x86_64 7.6-87.fc27 fedora 72 k [openshift/dind] xfsprogs x86_64 4.15.1-1.fc27 updates 1.1 M [openshift/dind] xz x86_64 5.2.3-4.fc27 fedora 150 k [openshift/dind] yajl x86_64 2.1.0-8.fc27 fedora 38 k [openshift/dind] Installing weak dependencies: [openshift/dind] criu x86_64 3.7-3.fc27 updates 460 k [openshift/dind] oci-register-machine x86_64 0-6.1.git66fa845.fc27 updates 1.1 M [openshift/dind] oci-systemd-hook x86_64 1:0.1.15-1.git2d0b8a3.fc27 updates 37 k [openshift/dind] Transaction Summary [openshift/dind] ================================================================================ [openshift/dind] Install 59 Packages [openshift/dind] Total download size: 50 M [openshift/dind] Installed size: 154 M [openshift/dind] Downloading Packages: [openshift/dind] (1/59): tcpdump-4.9.1-3.fc27.x86_64.rpm 363 kB/s | 460 kB 00:01 [openshift/dind] (2/59): atomic-registries-1.22.1-1.fc27.x86_64. 
16 kB/s | 39 kB 00:02 [openshift/dind] (3/59): docker-common-1.13.1-51.git4032bd5.fc27 33 kB/s | 87 kB 00:02 [openshift/dind] (4/59): skopeo-containers-0.1.28-1.git0270e56.f 137 kB/s | 17 kB 00:00 [openshift/dind] (5/59): gnupg-1.4.22-3.fc27.x86_64.rpm 1.4 MB/s | 1.3 MB 00:00 [openshift/dind] (6/59): xz-5.2.3-4.fc27.x86_64.rpm 243 kB/s | 150 kB 00:00 [openshift/dind] (7/59): python3-PyYAML-3.12-5.fc27.x86_64.rpm 303 kB/s | 182 kB 00:00 [openshift/dind] (8/59): docker-rhel-push-plugin-1.13.1-51.git40 460 kB/s | 1.7 MB 00:03 [openshift/dind] (9/59): python3-pytoml-0.1.14-2.git7dea353.fc27 114 kB/s | 23 kB 00:00 [openshift/dind] (10/59): libusb-0.1.5-10.fc27.x86_64.rpm 174 kB/s | 40 kB 00:00 [openshift/dind] (11/59): libyaml-0.1.7-4.fc27.x86_64.rpm 328 kB/s | 58 kB 00:00 [openshift/dind] (12/59): libnetfilter_conntrack-1.0.6-4.fc27.x8 235 kB/s | 62 kB 00:00 [openshift/dind] (13/59): iptables-1.6.1-4.fc27.x86_64.rpm 935 kB/s | 471 kB 00:00 [openshift/dind] (14/59): libnfnetlink-1.0.1-11.fc27.x86_64.rpm 169 kB/s | 31 kB 00:00 [openshift/dind] (15/59): libmnl-1.0.4-4.fc27.x86_64.rpm 142 kB/s | 28 kB 00:00 [openshift/dind] (16/59): openssh-clients-7.6p1-5.fc27.x86_64.rp 850 kB/s | 671 kB 00:00 [openshift/dind] (17/59): fipscheck-lib-1.5.0-3.fc27.x86_64.rpm 109 kB/s | 14 kB 00:00 [openshift/dind] (18/59): libedit-3.1-20.20170329cvs.fc27.x86_64 376 kB/s | 99 kB 00:00 [openshift/dind] (19/59): fipscheck-1.5.0-3.fc27.x86_64.rpm 159 kB/s | 25 kB 00:00 [openshift/dind] (20/59): openssh-server-7.6p1-5.fc27.x86_64.rpm 2.0 MB/s | 470 kB 00:00 [openshift/dind] (21/59): tcp_wrappers-libs-7.6-87.fc27.x86_64.r 283 kB/s | 72 kB 00:00 [openshift/dind] (22/59): openssh-7.6p1-5.fc27.x86_64.rpm 259 kB/s | 501 kB 00:01 [openshift/dind] (23/59): lvm2-libs-2.02.175-1.fc27.x86_64.rpm 600 kB/s | 1.1 MB 00:01 [openshift/dind] (24/59): lvm2-2.02.175-1.fc27.x86_64.rpm 562 kB/s | 1.4 MB 00:02 [openshift/dind] (25/59): device-mapper-event-libs-1.02.144-1.fc 2.3 MB/s | 254 kB 00:00 [openshift/dind] (26/59): device-mapper-persistent-data-0.7.5-1. 3.1 MB/s | 429 kB 00:00 [openshift/dind] (27/59): libaio-0.3.110-9.fc27.x86_64.rpm 185 kB/s | 29 kB 00:00 [openshift/dind] (28/59): kmod-25-1.fc27.x86_64.rpm 698 kB/s | 115 kB 00:00 [openshift/dind] (29/59): container-selinux-2.55-1.fc27.noarch.r 1.0 MB/s | 39 kB 00:00 [openshift/dind] (30/59): libselinux-utils-2.7-3.fc27.x86_64.rpm 3.9 MB/s | 162 kB 00:00 [openshift/dind] (31/59): container-storage-setup-0.8.0-2.git1d2 1.2 MB/s | 36 kB 00:00 [openshift/dind] (32/59): oci-umount-2.3.3-1.gite3c9055.fc27.x86 1.5 MB/s | 35 kB 00:00 [openshift/dind] (33/59): yajl-2.1.0-8.fc27.x86_64.rpm 203 kB/s | 38 kB 00:00 [openshift/dind] (34/59): docker-1.13.1-51.git4032bd5.fc27.x86_6 1.6 MB/s | 20 MB 00:12 [openshift/dind] (35/59): subscription-manager-rhsm-certificates 1.4 MB/s | 205 kB 00:00 [openshift/dind] (36/59): device-mapper-event-1.02.144-1.fc27.x8 137 kB/s | 254 kB 00:01 [openshift/dind] (37/59): parted-3.2-28.fc27.x86_64.rpm 910 kB/s | 549 kB 00:00 [openshift/dind] (38/59): xfsprogs-4.15.1-1.fc27.x86_64.rpm 1.3 MB/s | 1.1 MB 00:00 [openshift/dind] (39/59): policycoreutils-2.7-6.fc27.x86_64.rpm 2.4 MB/s | 707 kB 00:00 [openshift/dind] (40/59): policycoreutils-python-utils-2.7-6.fc2 2.0 MB/s | 225 kB 00:00 [openshift/dind] (41/59): audit-libs-python3-2.8.3-1.fc27.x86_64 1.8 MB/s | 82 kB 00:00 [openshift/dind] (42/59): libselinux-python3-2.7-3.fc27.x86_64.r 2.1 MB/s | 251 kB 00:00 [openshift/dind] (43/59): libsemanage-python3-2.7-2.fc27.x86_64. 
2.3 MB/s | 122 kB 00:00 [openshift/dind] (44/59): libstdc++-7.3.1-5.fc27.x86_64.rpm 706 kB/s | 482 kB 00:00 [openshift/dind] (45/59): python3-IPy-0.81-20.fc27.noarch.rpm 208 kB/s | 42 kB 00:00 [openshift/dind] (46/59): audit-2.8.3-1.fc27.x86_64.rpm 2.3 MB/s | 255 kB 00:00 [openshift/dind] (47/59): setools-python3-4.1.1-3.fc27.x86_64.rp 853 kB/s | 585 kB 00:00 [openshift/dind] (48/59): selinux-policy-3.13.1-283.30.fc27.noar 842 kB/s | 538 kB 00:00 [openshift/dind] (49/59): checkpolicy-2.7-2.fc27.x86_64.rpm 573 kB/s | 330 kB 00:00 [openshift/dind] (50/59): libcgroup-0.41-13.fc27.x86_64.rpm 269 kB/s | 67 kB 00:00 [openshift/dind] (51/59): selinux-policy-targeted-3.13.1-283.30. 7.0 MB/s | 10 MB 00:01 [openshift/dind] (52/59): libnet-1.1.6-14.fc27.x86_64.rpm 125 kB/s | 65 kB 00:00 [openshift/dind] (53/59): protobuf-c-1.2.1-7.fc27.x86_64.rpm 154 kB/s | 34 kB 00:00 [openshift/dind] (54/59): oci-systemd-hook-0.1.15-1.git2d0b8a3.f 808 kB/s | 37 kB 00:00 [openshift/dind] (55/59): policycoreutils-python3-2.7-6.fc27.x86 550 kB/s | 1.8 MB 00:03 [openshift/dind] (56/59): systemd-container-234-10.git5f8984e.fc 3.9 MB/s | 425 kB 00:00 [openshift/dind] (57/59): criu-3.7-3.fc27.x86_64.rpm 317 kB/s | 460 kB 00:01 [openshift/dind] (58/59): oci-register-machine-0-6.1.git66fa845. 3.8 MB/s | 1.1 MB 00:00 [openshift/dind] (59/59): libnl3-3.4.0-1.fc27.x86_64.rpm 3.8 MB/s | 304 kB 00:00 [openshift/dind] -------------------------------------------------------------------------------- [openshift/dind] Total 2.9 MB/s | 50 MB 00:17 [openshift/dind] Running transaction check [openshift/dind] Transaction check succeeded. [openshift/dind] Running transaction test [openshift/dind] Transaction test succeeded. [openshift/dind] Running transaction [openshift/dind] Preparing : 1/1 [openshift/dind] Installing : fipscheck-1.5.0-3.fc27.x86_64 1/59 [openshift/dind] Installing : fipscheck-lib-1.5.0-3.fc27.x86_64 2/59 [openshift/dind] Running scriptlet: fipscheck-lib-1.5.0-3.fc27.x86_64 2/59 [openshift/dind] Installing : device-mapper-event-libs-1.02.144-1.fc27.x86_64 3/59 [openshift/dind] Running scriptlet: device-mapper-event-libs-1.02.144-1.fc27.x86_64 3/59 [openshift/dind] Running scriptlet: openssh-7.6p1-5.fc27.x86_64 4/59 [openshift/dind] Installing : openssh-7.6p1-5.fc27.x86_64 4/59 [openshift/dind] Installing : libselinux-python3-2.7-3.fc27.x86_64 5/59 [openshift/dind] Installing : parted-3.2-28.fc27.x86_64 6/59 [openshift/dind] Running scriptlet: parted-3.2-28.fc27.x86_64 6/59 [openshift/dind] Installing : xfsprogs-4.15.1-1.fc27.x86_64 7/59 [openshift/dind] Running scriptlet: xfsprogs-4.15.1-1.fc27.x86_64 7/59 [openshift/dind] Installing : yajl-2.1.0-8.fc27.x86_64 8/59 [openshift/dind] Running scriptlet: yajl-2.1.0-8.fc27.x86_64 8/59 [openshift/dind] Installing : libselinux-utils-2.7-3.fc27.x86_64 9/59 [openshift/dind] Installing : policycoreutils-2.7-6.fc27.x86_64 10/59 [openshift/dind] Installing : selinux-policy-3.13.1-283.30.fc27.noarch 11/59 [openshift/dind] Running scriptlet: selinux-policy-3.13.1-283.30.fc27.noarch 11/59 [openshift/dind] Installing : tcp_wrappers-libs-7.6-87.fc27.x86_64 12/59 [openshift/dind] Running scriptlet: tcp_wrappers-libs-7.6-87.fc27.x86_64 12/59 [openshift/dind] Installing : libnfnetlink-1.0.1-11.fc27.x86_64 13/59 [openshift/dind] Running scriptlet: libnfnetlink-1.0.1-11.fc27.x86_64 13/59 [openshift/dind] Installing : audit-2.8.3-1.fc27.x86_64 14/59 [openshift/dind] Running scriptlet: audit-2.8.3-1.fc27.x86_64 14/59 [openshift/dind] Installing : audit-libs-python3-2.8.3-1.fc27.x86_64 15/59 
[openshift/dind] Running scriptlet: selinux-policy-targeted-3.13.1-283.30.fc27.noarch 16/59 [openshift/dind] Installing : selinux-policy-targeted-3.13.1-283.30.fc27.noarch 16/59 [openshift/dind] Running scriptlet: selinux-policy-targeted-3.13.1-283.30.fc27.noarch 16/59 [openshift/dind] Installing : oci-umount-2:2.3.3-1.gite3c9055.fc27.x86_64 17/59 [openshift/dind] Installing : libsemanage-python3-2.7-2.fc27.x86_64 18/59 [openshift/dind] Installing : device-mapper-event-1.02.144-1.fc27.x86_64 19/59 [openshift/dind] Running scriptlet: device-mapper-event-1.02.144-1.fc27.x86_64 19/59 [openshift/dind] Failed to connect to bus: No such file or directory [openshift/dind] Installing : lvm2-libs-2.02.175-1.fc27.x86_64 20/59 [openshift/dind] Running scriptlet: lvm2-libs-2.02.175-1.fc27.x86_64 20/59 [openshift/dind] Installing : libnl3-3.4.0-1.fc27.x86_64 21/59 [openshift/dind] Running scriptlet: libnl3-3.4.0-1.fc27.x86_64 21/59 [openshift/dind] Installing : systemd-container-234-10.git5f8984e.fc27.x86_64 22/59 [openshift/dind] Installing : protobuf-c-1.2.1-7.fc27.x86_64 23/59 [openshift/dind] Running scriptlet: protobuf-c-1.2.1-7.fc27.x86_64 23/59 [openshift/dind] Installing : libnet-1.1.6-14.fc27.x86_64 24/59 [openshift/dind] Running scriptlet: libnet-1.1.6-14.fc27.x86_64 24/59 [openshift/dind] Running scriptlet: libcgroup-0.41-13.fc27.x86_64 25/59 [openshift/dind] Installing : libcgroup-0.41-13.fc27.x86_64 25/59 [openshift/dind] Running scriptlet: libcgroup-0.41-13.fc27.x86_64 25/59 [openshift/dind] Installing : checkpolicy-2.7-2.fc27.x86_64 26/59 [openshift/dind] Installing : setools-python3-4.1.1-3.fc27.x86_64 27/59 [openshift/dind] Installing : python3-IPy-0.81-20.fc27.noarch 28/59 [openshift/dind] Installing : policycoreutils-python3-2.7-6.fc27.x86_64 29/59 [openshift/dind] Installing : policycoreutils-python-utils-2.7-6.fc27.x86_64 30/59 [openshift/dind] Installing : container-selinux-2:2.55-1.fc27.noarch 31/59 [openshift/dind] Running scriptlet: container-selinux-2:2.55-1.fc27.noarch 31/59 [openshift/dind] setsebool: SELinux is disabled. 
[openshift/dind] Installing : libstdc++-7.3.1-5.fc27.x86_64 32/59 [openshift/dind] Running scriptlet: libstdc++-7.3.1-5.fc27.x86_64 32/59 [openshift/dind] Installing : subscription-manager-rhsm-certificates-1.21.2-3.fc 33/59 [openshift/dind] Installing : kmod-25-1.fc27.x86_64 34/59 [openshift/dind] Installing : libaio-0.3.110-9.fc27.x86_64 35/59 [openshift/dind] Running scriptlet: libaio-0.3.110-9.fc27.x86_64 35/59 [openshift/dind] Installing : device-mapper-persistent-data-0.7.5-1.fc27.x86_64 36/59 [openshift/dind] Installing : lvm2-2.02.175-1.fc27.x86_64 37/59 [openshift/dind] Running scriptlet: lvm2-2.02.175-1.fc27.x86_64 37/59 [openshift/dind] Failed to connect to bus: No such file or directory [openshift/dind] Failed to connect to bus: No such file or directory [openshift/dind] Failed to connect to bus: No such file or directory [openshift/dind] warning: %post(lvm2-2.02.175-1.fc27.x86_64) scriptlet failed, exit status 1 [openshift/dind] Installing : container-storage-setup-0.8.0-2.git1d27ecf.fc27.no 38/59 [openshift/dind] Installing : libedit-3.1-20.20170329cvs.fc27.x86_64 39/59 [openshift/dind] Running scriptlet: libedit-3.1-20.20170329cvs.fc27.x86_64 39/59 [openshift/dind] Installing : libmnl-1.0.4-4.fc27.x86_64 40/59 [openshift/dind] Running scriptlet: libmnl-1.0.4-4.fc27.x86_64 40/59 [openshift/dind] Installing : libnetfilter_conntrack-1.0.6-4.fc27.x86_64 41/59 [openshift/dind] Running scriptlet: libnetfilter_conntrack-1.0.6-4.fc27.x86_64 41/59 [openshift/dind] Installing : iptables-1.6.1-4.fc27.x86_64 42/59 [openshift/dind] Running scriptlet: iptables-1.6.1-4.fc27.x86_64 42/59 [openshift/dind] Installing : libyaml-0.1.7-4.fc27.x86_64 43/59 [openshift/dind] Running scriptlet: libyaml-0.1.7-4.fc27.x86_64 43/59 [openshift/dind] Installing : python3-PyYAML-3.12-5.fc27.x86_64 44/59 [openshift/dind] Installing : libusb-1:0.1.5-10.fc27.x86_64 45/59 [openshift/dind] Running scriptlet: libusb-1:0.1.5-10.fc27.x86_64 45/59 [openshift/dind] Installing : gnupg-1.4.22-3.fc27.x86_64 46/59 [openshift/dind] Running scriptlet: gnupg-1.4.22-3.fc27.x86_64 46/59 [openshift/dind] Installing : python3-pytoml-0.1.14-2.git7dea353.fc27.noarch 47/59 [openshift/dind] Installing : atomic-registries-1.22.1-1.fc27.x86_64 48/59 [openshift/dind] Installing : xz-5.2.3-4.fc27.x86_64 49/59 [openshift/dind] Installing : skopeo-containers-0.1.28-1.git0270e56.fc27.x86_64 50/59 [openshift/dind] Installing : docker-rhel-push-plugin-2:1.13.1-51.git4032bd5.fc2 51/59 [openshift/dind] Running scriptlet: docker-rhel-push-plugin-2:1.13.1-51.git4032bd5.fc2 51/59 [openshift/dind] Installing : docker-common-2:1.13.1-51.git4032bd5.fc27.x86_64 52/59 [openshift/dind] Installing : docker-2:1.13.1-51.git4032bd5.fc27.x86_64 53/59 [openshift/dind] Running scriptlet: docker-2:1.13.1-51.git4032bd5.fc27.x86_64 53/59 [openshift/dind] Installing : openssh-clients-7.6p1-5.fc27.x86_64 54/59 [openshift/dind] Installing : criu-3.7-3.fc27.x86_64 55/59 [openshift/dind] Running scriptlet: criu-3.7-3.fc27.x86_64 55/59 [openshift/dind] Installing : oci-register-machine-0-6.1.git66fa845.fc27.x86_64 56/59 [openshift/dind] Running scriptlet: openssh-server-7.6p1-5.fc27.x86_64 57/59 [openshift/dind] Installing : openssh-server-7.6p1-5.fc27.x86_64 57/59 [openshift/dind] Running scriptlet: openssh-server-7.6p1-5.fc27.x86_64 57/59 [openshift/dind] Installing : oci-systemd-hook-1:0.1.15-1.git2d0b8a3.fc27.x86_64 58/59 [openshift/dind] Running scriptlet: tcpdump-14:4.9.1-3.fc27.x86_64 59/59 [openshift/dind] Installing : tcpdump-14:4.9.1-3.fc27.x86_64 59/59 
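The "Failed to connect to bus: No such file or directory" lines and the lvm2 %post warning above come from package scriptlets that call systemctl while no systemd manager is running, which is the normal situation inside an image build; rpm reports them as warnings rather than errors. An illustrative check (not part of the build) for the condition those scriptlets implicitly depend on:

# Sketch only: systemd publishes /run/systemd/system when it is running as init.
if [ -d /run/systemd/system ]; then
  echo "systemd manager running; systemctl calls in scriptlets can reach the bus"
else
  echo "no systemd manager; expect 'Failed to connect to bus' from scriptlets" >&2
fi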
[openshift/dind] Running scriptlet: docker-2:1.13.1-51.git4032bd5.fc27.x86_64 59/59 [openshift/dind] Running scriptlet: tcpdump-14:4.9.1-3.fc27.x86_64 59/59Failed to connect to bus: No such file or directory [openshift/dind] [openshift/dind] Verifying : tcpdump-14:4.9.1-3.fc27.x86_64 1/59 [openshift/dind] Verifying : docker-2:1.13.1-51.git4032bd5.fc27.x86_64 2/59 [openshift/dind] Verifying : atomic-registries-1.22.1-1.fc27.x86_64 3/59 [openshift/dind] Verifying : docker-common-2:1.13.1-51.git4032bd5.fc27.x86_64 4/59 [openshift/dind] Verifying : docker-rhel-push-plugin-2:1.13.1-51.git4032bd5.fc2 5/59 [openshift/dind] Verifying : skopeo-containers-0.1.28-1.git0270e56.fc27.x86_64 6/59 [openshift/dind] Verifying : gnupg-1.4.22-3.fc27.x86_64 7/59 [openshift/dind] Verifying : xz-5.2.3-4.fc27.x86_64 8/59 [openshift/dind] Verifying : python3-PyYAML-3.12-5.fc27.x86_64 9/59 [openshift/dind] Verifying : python3-pytoml-0.1.14-2.git7dea353.fc27.noarch 10/59 [openshift/dind] Verifying : libusb-1:0.1.5-10.fc27.x86_64 11/59 [openshift/dind] Verifying : libyaml-0.1.7-4.fc27.x86_64 12/59 [openshift/dind] Verifying : iptables-1.6.1-4.fc27.x86_64 13/59 [openshift/dind] Verifying : libnetfilter_conntrack-1.0.6-4.fc27.x86_64 14/59 [openshift/dind] Verifying : libnfnetlink-1.0.1-11.fc27.x86_64 15/59 [openshift/dind] Verifying : libmnl-1.0.4-4.fc27.x86_64 16/59 [openshift/dind] Verifying : openssh-clients-7.6p1-5.fc27.x86_64 17/59 [openshift/dind] Verifying : openssh-7.6p1-5.fc27.x86_64 18/59 [openshift/dind] Verifying : fipscheck-lib-1.5.0-3.fc27.x86_64 19/59 [openshift/dind] Verifying : libedit-3.1-20.20170329cvs.fc27.x86_64 20/59 [openshift/dind] Verifying : fipscheck-1.5.0-3.fc27.x86_64 21/59 [openshift/dind] Verifying : openssh-server-7.6p1-5.fc27.x86_64 22/59 [openshift/dind] Verifying : tcp_wrappers-libs-7.6-87.fc27.x86_64 23/59 [openshift/dind] Verifying : lvm2-2.02.175-1.fc27.x86_64 24/59 [openshift/dind] Verifying : lvm2-libs-2.02.175-1.fc27.x86_64 25/59 [openshift/dind] Verifying : device-mapper-event-1.02.144-1.fc27.x86_64 26/59 [openshift/dind] Verifying : device-mapper-event-libs-1.02.144-1.fc27.x86_64 27/59 [openshift/dind] Verifying : device-mapper-persistent-data-0.7.5-1.fc27.x86_64 28/59 [openshift/dind] Verifying : libaio-0.3.110-9.fc27.x86_64 29/59 [openshift/dind] Verifying : kmod-25-1.fc27.x86_64 30/59 [openshift/dind] Verifying : container-selinux-2:2.55-1.fc27.noarch 31/59 [openshift/dind] Verifying : libselinux-utils-2.7-3.fc27.x86_64 32/59 [openshift/dind] Verifying : container-storage-setup-0.8.0-2.git1d27ecf.fc27.no 33/59 [openshift/dind] Verifying : oci-umount-2:2.3.3-1.gite3c9055.fc27.x86_64 34/59 [openshift/dind] Verifying : yajl-2.1.0-8.fc27.x86_64 35/59 [openshift/dind] Verifying : subscription-manager-rhsm-certificates-1.21.2-3.fc 36/59 [openshift/dind] Verifying : xfsprogs-4.15.1-1.fc27.x86_64 37/59 [openshift/dind] Verifying : parted-3.2-28.fc27.x86_64 38/59 [openshift/dind] Verifying : libstdc++-7.3.1-5.fc27.x86_64 39/59 [openshift/dind] Verifying : policycoreutils-2.7-6.fc27.x86_64 40/59 [openshift/dind] Verifying : policycoreutils-python-utils-2.7-6.fc27.x86_64 41/59 [openshift/dind] Verifying : policycoreutils-python3-2.7-6.fc27.x86_64 42/59 [openshift/dind] Verifying : audit-libs-python3-2.8.3-1.fc27.x86_64 43/59 [openshift/dind] Verifying : libselinux-python3-2.7-3.fc27.x86_64 44/59 [openshift/dind] Verifying : libsemanage-python3-2.7-2.fc27.x86_64 45/59 [openshift/dind] Verifying : python3-IPy-0.81-20.fc27.noarch 46/59 [openshift/dind] Verifying : 
setools-python3-4.1.1-3.fc27.x86_64 47/59 [openshift/dind] Verifying : audit-2.8.3-1.fc27.x86_64 48/59 [openshift/dind] Verifying : selinux-policy-3.13.1-283.30.fc27.noarch 49/59 [openshift/dind] Verifying : selinux-policy-targeted-3.13.1-283.30.fc27.noarch 50/59 [openshift/dind] Verifying : checkpolicy-2.7-2.fc27.x86_64 51/59 [openshift/dind] Verifying : libcgroup-0.41-13.fc27.x86_64 52/59 [openshift/dind] Verifying : criu-3.7-3.fc27.x86_64 53/59 [openshift/dind] Verifying : libnet-1.1.6-14.fc27.x86_64 54/59 [openshift/dind] Verifying : protobuf-c-1.2.1-7.fc27.x86_64 55/59 [openshift/dind] Verifying : oci-systemd-hook-1:0.1.15-1.git2d0b8a3.fc27.x86_64 56/59 [openshift/dind] Verifying : oci-register-machine-0-6.1.git66fa845.fc27.x86_64 57/59 [openshift/dind] Verifying : systemd-container-234-10.git5f8984e.fc27.x86_64 58/59 [openshift/dind] Verifying : libnl3-3.4.0-1.fc27.x86_64 59/59 [openshift/dind] Installed: [openshift/dind] docker.x86_64 2:1.13.1-51.git4032bd5.fc27 [openshift/dind] iptables.x86_64 1.6.1-4.fc27 [openshift/dind] openssh-clients.x86_64 7.6p1-5.fc27 [openshift/dind] openssh-server.x86_64 7.6p1-5.fc27 [openshift/dind] tcpdump.x86_64 14:4.9.1-3.fc27 [openshift/dind] criu.x86_64 3.7-3.fc27 [openshift/dind] oci-register-machine.x86_64 0-6.1.git66fa845.fc27 [openshift/dind] oci-systemd-hook.x86_64 1:0.1.15-1.git2d0b8a3.fc27 [openshift/dind] atomic-registries.x86_64 1.22.1-1.fc27 [openshift/dind] audit.x86_64 2.8.3-1.fc27 [openshift/dind] audit-libs-python3.x86_64 2.8.3-1.fc27 [openshift/dind] checkpolicy.x86_64 2.7-2.fc27 [openshift/dind] container-selinux.noarch 2:2.55-1.fc27 [openshift/dind] container-storage-setup.noarch 0.8.0-2.git1d27ecf.fc27 [openshift/dind] device-mapper-event.x86_64 1.02.144-1.fc27 [openshift/dind] device-mapper-event-libs.x86_64 1.02.144-1.fc27 [openshift/dind] device-mapper-persistent-data.x86_64 0.7.5-1.fc27 [openshift/dind] docker-common.x86_64 2:1.13.1-51.git4032bd5.fc27 [openshift/dind] docker-rhel-push-plugin.x86_64 2:1.13.1-51.git4032bd5.fc27 [openshift/dind] fipscheck.x86_64 1.5.0-3.fc27 [openshift/dind] fipscheck-lib.x86_64 1.5.0-3.fc27 [openshift/dind] gnupg.x86_64 1.4.22-3.fc27 [openshift/dind] kmod.x86_64 25-1.fc27 [openshift/dind] libaio.x86_64 0.3.110-9.fc27 [openshift/dind] libcgroup.x86_64 0.41-13.fc27 [openshift/dind] libedit.x86_64 3.1-20.20170329cvs.fc27 [openshift/dind] libmnl.x86_64 1.0.4-4.fc27 [openshift/dind] libnet.x86_64 1.1.6-14.fc27 [openshift/dind] libnetfilter_conntrack.x86_64 1.0.6-4.fc27 [openshift/dind] libnfnetlink.x86_64 1.0.1-11.fc27 [openshift/dind] libnl3.x86_64 3.4.0-1.fc27 [openshift/dind] libselinux-python3.x86_64 2.7-3.fc27 [openshift/dind] libselinux-utils.x86_64 2.7-3.fc27 [openshift/dind] libsemanage-python3.x86_64 2.7-2.fc27 [openshift/dind] libstdc++.x86_64 7.3.1-5.fc27 [openshift/dind] libusb.x86_64 1:0.1.5-10.fc27 [openshift/dind] libyaml.x86_64 0.1.7-4.fc27 [openshift/dind] lvm2.x86_64 2.02.175-1.fc27 [openshift/dind] lvm2-libs.x86_64 2.02.175-1.fc27 [openshift/dind] oci-umount.x86_64 2:2.3.3-1.gite3c9055.fc27 [openshift/dind] openssh.x86_64 7.6p1-5.fc27 [openshift/dind] parted.x86_64 3.2-28.fc27 [openshift/dind] policycoreutils.x86_64 2.7-6.fc27 [openshift/dind] policycoreutils-python-utils.x86_64 2.7-6.fc27 [openshift/dind] policycoreutils-python3.x86_64 2.7-6.fc27 [openshift/dind] protobuf-c.x86_64 1.2.1-7.fc27 [openshift/dind] python3-IPy.noarch 0.81-20.fc27 [openshift/dind] python3-PyYAML.x86_64 3.12-5.fc27 [openshift/dind] python3-pytoml.noarch 0.1.14-2.git7dea353.fc27 [openshift/dind] 
selinux-policy.noarch 3.13.1-283.30.fc27 [openshift/dind] selinux-policy-targeted.noarch 3.13.1-283.30.fc27 [openshift/dind] setools-python3.x86_64 4.1.1-3.fc27 [openshift/dind] skopeo-containers.x86_64 0.1.28-1.git0270e56.fc27 [openshift/dind] subscription-manager-rhsm-certificates.x86_64 1.21.2-3.fc27 [openshift/dind] systemd-container.x86_64 234-10.git5f8984e.fc27 [openshift/dind] tcp_wrappers-libs.x86_64 7.6-87.fc27 [openshift/dind] xfsprogs.x86_64 4.15.1-1.fc27 [openshift/dind] xz.x86_64 5.2.3-4.fc27 [openshift/dind] yajl.x86_64 2.1.0-8.fc27 [openshift/dind] Complete! [openshift/dind] Non-fatal POSTIN scriptlet failure in rpm package lvm2 [openshift/dind] Non-fatal POSTIN scriptlet failure in rpm package lvm2 [openshift/dind] --> RUN systemctl enable docker.service [openshift/dind] Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service. [openshift/dind] --> RUN echo "DOCKER_STORAGE_OPTIONS=--storage-driver vfs" > /etc/sysconfig/docker-storage [openshift/dind] --> RUN mkdir -p /usr/local/bin [openshift/dind] --> COPY dind-setup.sh /usr/local/bin/ [openshift/dind] --> COPY dind-setup.service /etc/systemd/system/ [openshift/dind] --> RUN systemctl enable dind-setup.service [openshift/dind] Configuration file /etc/systemd/system/dind-setup.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind] Created symlink /etc/systemd/system/docker.service.requires/dind-setup.service → /etc/systemd/system/dind-setup.service. [openshift/dind] --> VOLUME ["/var/lib/docker"] [openshift/dind] --> RUN ln /usr/sbin/init /usr/sbin/dind_init [openshift/dind] --> CMD ["/usr/sbin/dind_init"] [openshift/dind] --> Committing changes to openshift/dind:4253ab3 ... [openshift/dind] --> Tagged as openshift/dind:latest [openshift/dind] --> Done [openshift/dind-node] --> FROM openshift/dind as 0 [openshift/dind-node] --> RUN dnf -y update && dnf -y install bind-utils findutils hostname iproute iputils less procps-ng tar which bridge-utils ethtool iptables-services conntrack-tools openvswitch openvswitch-ovn-* python-netaddr python2-pyroute2 python2-requests PyYAML cri-o [openshift/dind-node] Last metadata expiration check: 0:04:40 ago on Thu Apr 5 19:53:24 2018. [openshift/dind-node] Dependencies resolved. [openshift/dind-node] Nothing to do. [openshift/dind-node] Complete! [openshift/dind-node] Last metadata expiration check: 0:04:43 ago on Thu Apr 5 19:53:24 2018. [openshift/dind-node] Package tar-2:1.29-7.fc27.x86_64 is already installed, skipping. [openshift/dind-node] Dependencies resolved. 
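The openshift/dind image tagged above amounts to a Fedora 27 base that keeps systemd as init and forces the nested docker daemon onto the vfs storage driver. A condensed bash view of its final configuration steps, lifted from the --> lines in the log; this is illustrative, the real build drives these through the repository's image tooling, and dind-setup.sh / dind-setup.service are files supplied by that tooling:

# Sketch only: the tail end of the openshift/dind image assembly, as shell commands.
systemctl enable docker.service                      # start docker on boot inside the dind container
echo "DOCKER_STORAGE_OPTIONS=--storage-driver vfs" > /etc/sysconfig/docker-storage
mkdir -p /usr/local/bin
cp dind-setup.sh /usr/local/bin/                     # COPY steps in the image build
cp dind-setup.service /etc/systemd/system/
systemctl enable dind-setup.service                  # wired into docker.service.requires, per the symlink above
ln /usr/sbin/init /usr/sbin/dind_init                # containers boot via /usr/sbin/dind_init (systemd)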
[openshift/dind-node] ================================================================================ [openshift/dind-node] Package Arch Version Repository [openshift/dind-node] Size [openshift/dind-node] ================================================================================ [openshift/dind-node] Installing: [openshift/dind-node] PyYAML x86_64 3.12-5.fc27 fedora 180 k [openshift/dind-node] bind-utils x86_64 32:9.11.3-2.fc27 updates 422 k [openshift/dind-node] bridge-utils x86_64 1.6-1.fc27 updates 38 k [openshift/dind-node] conntrack-tools x86_64 1.4.4-5.fc27 fedora 201 k [openshift/dind-node] cri-o x86_64 2:1.9.10-1.git8723732.fc27 updates 7.9 M [openshift/dind-node] ethtool x86_64 2:4.15-1.fc27 updates 140 k [openshift/dind-node] findutils x86_64 1:4.6.0-16.fc27 updates 523 k [openshift/dind-node] hostname x86_64 3.18-4.fc27 fedora 28 k [openshift/dind-node] iproute x86_64 4.15.0-1.fc27 updates 534 k [openshift/dind-node] iptables-services x86_64 1.6.1-4.fc27 fedora 54 k [openshift/dind-node] iputils x86_64 20161105-7.fc27 fedora 157 k [openshift/dind-node] less x86_64 487-5.fc27 fedora 159 k [openshift/dind-node] openvswitch x86_64 2.8.1-1.fc27 fedora 6.5 M [openshift/dind-node] openvswitch-ovn-central x86_64 2.8.1-1.fc27 fedora 859 k [openshift/dind-node] openvswitch-ovn-common x86_64 2.8.1-1.fc27 fedora 2.3 M [openshift/dind-node] openvswitch-ovn-docker x86_64 2.8.1-1.fc27 fedora 21 k [openshift/dind-node] openvswitch-ovn-host x86_64 2.8.1-1.fc27 fedora 910 k [openshift/dind-node] openvswitch-ovn-vtep x86_64 2.8.1-1.fc27 fedora 802 k [openshift/dind-node] procps-ng x86_64 3.3.10-15.fc27 fedora 395 k [openshift/dind-node] python-netaddr noarch 0.7.19-3.fc27 fedora 1.5 M [openshift/dind-node] python2-pyroute2 noarch 0.4.15-2.fc27 fedora 364 k [openshift/dind-node] python2-requests noarch 2.18.4-1.fc27 fedora 122 k [openshift/dind-node] which x86_64 2.21-4.fc27 fedora 46 k [openshift/dind-node] Installing dependencies: [openshift/dind-node] GeoIP x86_64 1.6.11-3.fc27 fedora 124 k [openshift/dind-node] GeoIP-GeoLite-data noarch 2018.01-1.fc27 updates 465 k [openshift/dind-node] bind-libs x86_64 32:9.11.3-2.fc27 updates 161 k [openshift/dind-node] bind-libs-lite x86_64 32:9.11.3-2.fc27 updates 1.1 M [openshift/dind-node] bind-license noarch 32:9.11.3-2.fc27 updates 92 k [openshift/dind-node] conmon x86_64 2:1.9.10-1.git8723732.fc27 updates 37 k [openshift/dind-node] firewalld-filesystem noarch 0.4.4.5-3.fc27 fedora 71 k [openshift/dind-node] gc x86_64 7.6.0-7.fc27 fedora 110 k [openshift/dind-node] guile x86_64 5:2.0.14-3.fc27 fedora 3.5 M [openshift/dind-node] libatomic_ops x86_64 7.4.6-3.fc27 fedora 33 k [openshift/dind-node] libnetfilter_cthelper x86_64 1.0.0-12.fc27 fedora 22 k [openshift/dind-node] libnetfilter_cttimeout x86_64 1.0.0-10.fc27 fedora 22 k [openshift/dind-node] libnetfilter_queue x86_64 1.0.2-10.fc27 fedora 28 k [openshift/dind-node] libtool-ltdl x86_64 2.4.6-20.fc27 fedora 55 k [openshift/dind-node] linux-atm-libs x86_64 2.5.1-19.fc27 fedora 40 k [openshift/dind-node] make x86_64 1:4.2.1-4.fc27 fedora 494 k [openshift/dind-node] numactl-libs x86_64 2.0.11-5.fc27 fedora 33 k [openshift/dind-node] openssl x86_64 1:1.1.0h-1.fc27 updates 575 k [openshift/dind-node] python-backports-ssl_match_hostname [openshift/dind-node] noarch 3.5.0.1-5.fc27 fedora 17 k [openshift/dind-node] python-chardet noarch 3.0.4-2.fc27 fedora 189 k [openshift/dind-node] python-ipaddress noarch 1.0.18-2.fc27 fedora 39 k [openshift/dind-node] python2 x86_64 2.7.14-10.fc27 updates 101 k 
[openshift/dind-node] python2-asn1crypto noarch 0.23.0-1.fc27 updates 176 k [openshift/dind-node] python2-backports x86_64 1.0-12.fc27 updates 10 k [openshift/dind-node] python2-cffi x86_64 1.10.0-3.fc27 fedora 227 k [openshift/dind-node] python2-cryptography x86_64 2.0.2-3.fc27 updates 507 k [openshift/dind-node] python2-enum34 noarch 1.1.6-3.fc27 updates 57 k [openshift/dind-node] python2-idna noarch 2.5-2.fc27 fedora 98 k [openshift/dind-node] python2-libs x86_64 2.7.14-10.fc27 updates 6.3 M [openshift/dind-node] python2-openvswitch noarch 2.8.1-1.fc27 fedora 171 k [openshift/dind-node] python2-pip noarch 9.0.1-14.fc27 updates 1.8 M [openshift/dind-node] python2-ply noarch 3.9-4.fc27 fedora 107 k [openshift/dind-node] python2-pyOpenSSL noarch 17.2.0-2.fc27 fedora 98 k [openshift/dind-node] python2-pycparser noarch 2.14-11.fc27 fedora 146 k [openshift/dind-node] python2-pysocks noarch 1.6.7-1.fc27 fedora 32 k [openshift/dind-node] python2-setuptools noarch 37.0.0-1.fc27 updates 606 k [openshift/dind-node] python2-six noarch 1.11.0-1.fc27 fedora 36 k [openshift/dind-node] python2-urllib3 noarch 1.22-3.fc27 updates 178 k [openshift/dind-node] python3-bind noarch 32:9.11.3-2.fc27 updates 139 k [openshift/dind-node] python3-ply noarch 3.9-4.fc27 fedora 106 k [openshift/dind-node] runc x86_64 2:1.0.0-19.rc5.git4bb1fe4.fc27 updates 1.9 M [openshift/dind-node] socat x86_64 1.7.3.2-4.fc27 fedora 296 k [openshift/dind-node] Installing weak dependencies: [openshift/dind-node] containernetworking-cni x86_64 0.6.0-2.fc27 updates 9.6 M [openshift/dind-node] iproute-tc x86_64 4.15.0-1.fc27 updates 389 k [openshift/dind-node] Transaction Summary [openshift/dind-node] ================================================================================ [openshift/dind-node] Install 67 Packages [openshift/dind-node] Total download size: 54 M [openshift/dind-node] Installed size: 194 M [openshift/dind-node] Downloading Packages: [openshift/dind-node] (1/67): hostname-3.18-4.fc27.x86_64.rpm 61 kB/s | 28 kB 00:00 [openshift/dind-node] (2/67): less-487-5.fc27.x86_64.rpm 253 kB/s | 159 kB 00:00 [openshift/dind-node] (3/67): iputils-20161105-7.fc27.x86_64.rpm 248 kB/s | 157 kB 00:00 [openshift/dind-node] (4/67): which-2.21-4.fc27.x86_64.rpm 222 kB/s | 46 kB 00:00 [openshift/dind-node] (5/67): iptables-services-1.6.1-4.fc27.x86_64.r 271 kB/s | 54 kB 00:00 [openshift/dind-node] (6/67): conntrack-tools-1.4.4-5.fc27.x86_64.rpm 486 kB/s | 201 kB 00:00 [openshift/dind-node] (7/67): openvswitch-ovn-central-2.8.1-1.fc27.x8 1.4 MB/s | 859 kB 00:00 [openshift/dind-node] (8/67): openvswitch-ovn-docker-2.8.1-1.fc27.x86 144 kB/s | 21 kB 00:00 [openshift/dind-node] (9/67): openvswitch-ovn-common-2.8.1-1.fc27.x86 2.6 MB/s | 2.3 MB 00:00 [openshift/dind-node] (10/67): openvswitch-2.8.1-1.fc27.x86_64.rpm 4.8 MB/s | 6.5 MB 00:01 [openshift/dind-node] (11/67): openvswitch-ovn-host-2.8.1-1.fc27.x86_ 1.3 MB/s | 910 kB 00:00 [openshift/dind-node] (12/67): openvswitch-ovn-vtep-2.8.1-1.fc27.x86_ 1.2 MB/s | 802 kB 00:00 [openshift/dind-node] (13/67): python2-pyroute2-0.4.15-2.fc27.noarch. 754 kB/s | 364 kB 00:00 [openshift/dind-node] (14/67): python-netaddr-0.7.19-3.fc27.noarch.rp 1.9 MB/s | 1.5 MB 00:00 [openshift/dind-node] (15/67): python2-requests-2.18.4-1.fc27.noarch. 
402 kB/s | 122 kB 00:00 [openshift/dind-node] (16/67): libnetfilter_cthelper-1.0.0-12.fc27.x8 100 kB/s | 22 kB 00:00 [openshift/dind-node] (17/67): PyYAML-3.12-5.fc27.x86_64.rpm 526 kB/s | 180 kB 00:00 [openshift/dind-node] (18/67): libnetfilter_cttimeout-1.0.0-10.fc27.x 107 kB/s | 22 kB 00:00 [openshift/dind-node] (19/67): libnetfilter_queue-1.0.2-10.fc27.x86_6 196 kB/s | 28 kB 00:00 [openshift/dind-node] (20/67): numactl-libs-2.0.11-5.fc27.x86_64.rpm 189 kB/s | 33 kB 00:00 [openshift/dind-node] (21/67): firewalld-filesystem-0.4.4.5-3.fc27.no 248 kB/s | 71 kB 00:00 [openshift/dind-node] (22/67): python2-openvswitch-2.8.1-1.fc27.noarc 546 kB/s | 171 kB 00:00 [openshift/dind-node] (23/67): python-chardet-3.0.4-2.fc27.noarch.rpm 600 kB/s | 189 kB 00:00 [openshift/dind-node] (24/67): python2-idna-2.5-2.fc27.noarch.rpm 382 kB/s | 98 kB 00:00 [openshift/dind-node] (25/67): bind-utils-9.11.3-2.fc27.x86_64.rpm 5.8 MB/s | 422 kB 00:00 [openshift/dind-node] (26/67): bind-libs-9.11.3-2.fc27.x86_64.rpm 3.5 MB/s | 161 kB 00:00 [openshift/dind-node] (27/67): bind-libs-lite-9.11.3-2.fc27.x86_64.rp 29 MB/s | 1.1 MB 00:00 [openshift/dind-node] (28/67): python3-bind-9.11.3-2.fc27.noarch.rpm 2.5 MB/s | 139 kB 00:00 [openshift/dind-node] (29/67): python2-six-1.11.0-1.fc27.noarch.rpm 183 kB/s | 36 kB 00:00 [openshift/dind-node] (30/67): bind-license-9.11.3-2.fc27.noarch.rpm 2.1 MB/s | 92 kB 00:00 [openshift/dind-node] (31/67): findutils-4.6.0-16.fc27.x86_64.rpm 10 MB/s | 523 kB 00:00 [openshift/dind-node] (32/67): iproute-4.15.0-1.fc27.x86_64.rpm 16 MB/s | 534 kB 00:00 [openshift/dind-node] (33/67): python3-ply-3.9-4.fc27.noarch.rpm 366 kB/s | 106 kB 00:00 [openshift/dind-node] (34/67): GeoIP-1.6.11-3.fc27.x86_64.rpm 369 kB/s | 124 kB 00:00 [openshift/dind-node] (35/67): bridge-utils-1.6-1.fc27.x86_64.rpm 3.7 MB/s | 38 kB 00:00 [openshift/dind-node] (36/67): ethtool-4.15-1.fc27.x86_64.rpm 7.5 MB/s | 140 kB 00:00 [openshift/dind-node] (37/67): conmon-1.9.10-1.git8723732.fc27.x86_64 3.1 MB/s | 37 kB 00:00 [openshift/dind-node] (38/67): runc-1.0.0-19.rc5.git4bb1fe4.fc27.x86_ 29 MB/s | 1.9 MB 00:00 [openshift/dind-node] (39/67): cri-o-1.9.10-1.git8723732.fc27.x86_64. 41 MB/s | 7.9 MB 00:00 [openshift/dind-node] (40/67): openssl-1.1.0h-1.fc27.x86_64.rpm 19 MB/s | 575 kB 00:00 [openshift/dind-node] (41/67): procps-ng-3.3.10-15.fc27.x86_64.rpm 711 kB/s | 395 kB 00:00 [openshift/dind-node] (42/67): socat-1.7.3.2-4.fc27.x86_64.rpm 710 kB/s | 296 kB 00:00 [openshift/dind-node] (43/67): gc-7.6.0-7.fc27.x86_64.rpm 394 kB/s | 110 kB 00:00 [openshift/dind-node] (44/67): make-4.2.1-4.fc27.x86_64.rpm 822 kB/s | 494 kB 00:00 [openshift/dind-node] (45/67): libatomic_ops-7.4.6-3.fc27.x86_64.rpm 138 kB/s | 33 kB 00:00 [openshift/dind-node] (46/67): python2-2.7.14-10.fc27.x86_64.rpm 2.4 MB/s | 101 kB 00:00 [openshift/dind-node] (47/67): python2-libs-2.7.14-10.fc27.x86_64.rpm 44 MB/s | 6.3 MB 00:00 [openshift/dind-node] (48/67): libtool-ltdl-2.4.6-20.fc27.x86_64.rpm 210 kB/s | 55 kB 00:00 [openshift/dind-node] (49/67): python2-urllib3-1.22-3.fc27.noarch.rpm 4.2 MB/s | 178 kB 00:00 [openshift/dind-node] (50/67): python-backports-ssl_match_hostname-3. 106 kB/s | 17 kB 00:00 [openshift/dind-node] (51/67): python-ipaddress-1.0.18-2.fc27.noarch. 
194 kB/s | 39 kB 00:00 [openshift/dind-node] (52/67): python2-pyOpenSSL-17.2.0-2.fc27.noarch 432 kB/s | 98 kB 00:00 [openshift/dind-node] (53/67): python2-pysocks-1.6.7-1.fc27.noarch.rp 207 kB/s | 32 kB 00:00 [openshift/dind-node] (54/67): python2-cryptography-2.0.2-3.fc27.x86_ 9.4 MB/s | 507 kB 00:00 [openshift/dind-node] (55/67): guile-2.0.14-3.fc27.x86_64.rpm 2.8 MB/s | 3.5 MB 00:01 [openshift/dind-node] (56/67): python2-pycparser-2.14-11.fc27.noarch. 476 kB/s | 146 kB 00:00 [openshift/dind-node] (57/67): python2-cffi-1.10.0-3.fc27.x86_64.rpm 630 kB/s | 227 kB 00:00 [openshift/dind-node] (58/67): python2-backports-1.0-12.fc27.x86_64.r 973 kB/s | 10 kB 00:00 [openshift/dind-node] (59/67): GeoIP-GeoLite-data-2018.01-1.fc27.noar 22 MB/s | 465 kB 00:00 [openshift/dind-node] (60/67): python2-enum34-1.1.6-3.fc27.noarch.rpm 4.3 MB/s | 57 kB 00:00 [openshift/dind-node] (61/67): python2-asn1crypto-0.23.0-1.fc27.noarc 4.0 MB/s | 176 kB 00:00 [openshift/dind-node] (62/67): python2-setuptools-37.0.0-1.fc27.noarc 18 MB/s | 606 kB 00:00 [openshift/dind-node] (63/67): python2-pip-9.0.1-14.fc27.noarch.rpm 35 MB/s | 1.8 MB 00:00 [openshift/dind-node] (64/67): iproute-tc-4.15.0-1.fc27.x86_64.rpm 19 MB/s | 389 kB 00:00 [openshift/dind-node] (65/67): python2-ply-3.9-4.fc27.noarch.rpm 411 kB/s | 107 kB 00:00 [openshift/dind-node] (66/67): containernetworking-cni-0.6.0-2.fc27.x 48 MB/s | 9.6 MB 00:00 [openshift/dind-node] (67/67): linux-atm-libs-2.5.1-19.fc27.x86_64.rp 182 kB/s | 40 kB 00:00 [openshift/dind-node] -------------------------------------------------------------------------------- [openshift/dind-node] Total 7.9 MB/s | 54 MB 00:06 [openshift/dind-node] Running transaction check [openshift/dind-node] Transaction check succeeded. [openshift/dind-node] Running transaction test [openshift/dind-node] Transaction test succeeded. 
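For readability, the package set being downloaded and checked here is the one requested by the dind-node image's single RUN instruction (shown flattened earlier in this build output); everything else in the transaction comes in as a dependency or weak dependency. Restated as a shell snippet, with line breaks that are an assumption about layout rather than something taken from the actual Dockerfile:

    # Packages requested for openshift/dind-node (names copied from the RUN line above).
    # The openvswitch-ovn-* glob is quoted so it is expanded by dnf, not the shell.
    dnf -y update && dnf -y install \
        bind-utils findutils hostname iproute iputils less procps-ng tar which \
        bridge-utils ethtool iptables-services conntrack-tools \
        openvswitch 'openvswitch-ovn-*' \
        python-netaddr python2-pyroute2 python2-requests PyYAML \
        cri-o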
[openshift/dind-node] Running transaction [openshift/dind-node] Preparing : 1/1 [openshift/dind-node] Installing : numactl-libs-2.0.11-5.fc27.x86_64 1/67 [openshift/dind-node] Running scriptlet: numactl-libs-2.0.11-5.fc27.x86_64 1/67 [openshift/dind-node] Installing : python2-libs-2.7.14-10.fc27.x86_64 2/67 [openshift/dind-node] Running scriptlet: python2-libs-2.7.14-10.fc27.x86_64 2/67 [openshift/dind-node] Installing : python2-pip-9.0.1-14.fc27.noarch 3/67 [openshift/dind-node] Installing : python2-setuptools-37.0.0-1.fc27.noarch 4/67 [openshift/dind-node] Installing : python2-2.7.14-10.fc27.x86_64 5/67 [openshift/dind-node] Installing : python2-idna-2.5-2.fc27.noarch 6/67 [openshift/dind-node] Installing : python2-six-1.11.0-1.fc27.noarch 7/67 [openshift/dind-node] Installing : iproute-4.15.0-1.fc27.x86_64 8/67 [openshift/dind-node] Installing : bind-license-32:9.11.3-2.fc27.noarch 9/67 [openshift/dind-node] Installing : python-ipaddress-1.0.18-2.fc27.noarch 10/67 [openshift/dind-node] Installing : firewalld-filesystem-0.4.4.5-3.fc27.noarch 11/67 [openshift/dind-node] Installing : python2-openvswitch-2.8.1-1.fc27.noarch 12/67 [openshift/dind-node] Installing : python-chardet-3.0.4-2.fc27.noarch 13/67 [openshift/dind-node] Installing : python2-pysocks-1.6.7-1.fc27.noarch 14/67 [openshift/dind-node] Installing : python2-ply-3.9-4.fc27.noarch 15/67 [openshift/dind-node] Installing : python2-pycparser-2.14-11.fc27.noarch 16/67 [openshift/dind-node] Installing : python2-cffi-1.10.0-3.fc27.x86_64 17/67 [openshift/dind-node] Installing : python2-backports-1.0-12.fc27.x86_64 18/67 [openshift/dind-node] Installing : python-backports-ssl_match_hostname-3.5.0.1-5.fc27 19/67 [openshift/dind-node] Installing : python2-asn1crypto-0.23.0-1.fc27.noarch 20/67 [openshift/dind-node] Installing : python2-enum34-1.1.6-3.fc27.noarch 21/67 [openshift/dind-node] Installing : python2-cryptography-2.0.2-3.fc27.x86_64 22/67 [openshift/dind-node] Installing : python2-pyOpenSSL-17.2.0-2.fc27.noarch 23/67 [openshift/dind-node] Installing : python2-urllib3-1.22-3.fc27.noarch 24/67 [openshift/dind-node] Installing : linux-atm-libs-2.5.1-19.fc27.x86_64 25/67 [openshift/dind-node] Running scriptlet: linux-atm-libs-2.5.1-19.fc27.x86_64 25/67 [openshift/dind-node] Installing : GeoIP-GeoLite-data-2018.01-1.fc27.noarch 26/67 [openshift/dind-node] Installing : GeoIP-1.6.11-3.fc27.x86_64 27/67 [openshift/dind-node] Running scriptlet: GeoIP-1.6.11-3.fc27.x86_64 27/67 [openshift/dind-node] Installing : bind-libs-lite-32:9.11.3-2.fc27.x86_64 28/67 [openshift/dind-node] Running scriptlet: bind-libs-lite-32:9.11.3-2.fc27.x86_64 28/67 [openshift/dind-node] Installing : bind-libs-32:9.11.3-2.fc27.x86_64 29/67 [openshift/dind-node] Running scriptlet: bind-libs-32:9.11.3-2.fc27.x86_64 29/67 [openshift/dind-node] Installing : libtool-ltdl-2.4.6-20.fc27.x86_64 30/67 [openshift/dind-node] Running scriptlet: libtool-ltdl-2.4.6-20.fc27.x86_64 30/67 [openshift/dind-node] Installing : libatomic_ops-7.4.6-3.fc27.x86_64 31/67 [openshift/dind-node] Running scriptlet: libatomic_ops-7.4.6-3.fc27.x86_64 31/67 [openshift/dind-node] Installing : gc-7.6.0-7.fc27.x86_64 32/67 [openshift/dind-node] Running scriptlet: gc-7.6.0-7.fc27.x86_64 32/67 [openshift/dind-node] Installing : guile-5:2.0.14-3.fc27.x86_64 33/67 [openshift/dind-node] Running scriptlet: guile-5:2.0.14-3.fc27.x86_64 33/67 [openshift/dind-node] Installing : make-1:4.2.1-4.fc27.x86_64 34/67 [openshift/dind-node] Running scriptlet: make-1:4.2.1-4.fc27.x86_64 34/67 [openshift/dind-node] 
Installing : openssl-1:1.1.0h-1.fc27.x86_64 35/67 [openshift/dind-node] Installing : openvswitch-2.8.1-1.fc27.x86_64 36/67 [openshift/dind-node] Running scriptlet: openvswitch-2.8.1-1.fc27.x86_64 36/67 [openshift/dind-node] Installing : openvswitch-ovn-common-2.8.1-1.fc27.x86_64 37/67 [openshift/dind-node] Installing : socat-1.7.3.2-4.fc27.x86_64 38/67 [openshift/dind-node] Installing : runc-2:1.0.0-19.rc5.git4bb1fe4.fc27.x86_64 39/67 [openshift/dind-node] Installing : conmon-2:1.9.10-1.git8723732.fc27.x86_64 40/67 [openshift/dind-node] Installing : python3-ply-3.9-4.fc27.noarch 41/67 [openshift/dind-node] Installing : python3-bind-32:9.11.3-2.fc27.noarch 42/67 [openshift/dind-node] Installing : libnetfilter_queue-1.0.2-10.fc27.x86_64 43/67 [openshift/dind-node] Running scriptlet: libnetfilter_queue-1.0.2-10.fc27.x86_64 43/67 [openshift/dind-node] Installing : libnetfilter_cttimeout-1.0.0-10.fc27.x86_64 44/67 [openshift/dind-node] Running scriptlet: libnetfilter_cttimeout-1.0.0-10.fc27.x86_64 44/67 [openshift/dind-node] Installing : libnetfilter_cthelper-1.0.0-12.fc27.x86_64 45/67 [openshift/dind-node] Running scriptlet: libnetfilter_cthelper-1.0.0-12.fc27.x86_64 45/67 [openshift/dind-node] Installing : conntrack-tools-1.4.4-5.fc27.x86_64 46/67 [openshift/dind-node] Running scriptlet: conntrack-tools-1.4.4-5.fc27.x86_64 46/67 [openshift/dind-node] Installing : bind-utils-32:9.11.3-2.fc27.x86_64 47/67 [openshift/dind-node] Installing : cri-o-2:1.9.10-1.git8723732.fc27.x86_64 48/67 [openshift/dind-node] Running scriptlet: cri-o-2:1.9.10-1.git8723732.fc27.x86_64 48/67 [openshift/dind-node] Installing : openvswitch-ovn-central-2.8.1-1.fc27.x86_64 49/67 [openshift/dind-node] Running scriptlet: openvswitch-ovn-central-2.8.1-1.fc27.x86_64 49/67 [openshift/dind-node] Installing : openvswitch-ovn-docker-2.8.1-1.fc27.x86_64 50/67 [openshift/dind-node] Installing : openvswitch-ovn-host-2.8.1-1.fc27.x86_64 51/67 [openshift/dind-node] Running scriptlet: openvswitch-ovn-host-2.8.1-1.fc27.x86_64 51/67 [openshift/dind-node] Installing : openvswitch-ovn-vtep-2.8.1-1.fc27.x86_64 52/67 [openshift/dind-node] Running scriptlet: openvswitch-ovn-vtep-2.8.1-1.fc27.x86_64 52/67 [openshift/dind-node] Installing : iproute-tc-4.15.0-1.fc27.x86_64 53/67 [openshift/dind-node] Installing : python2-requests-2.18.4-1.fc27.noarch 54/67 [openshift/dind-node] Installing : python-netaddr-0.7.19-3.fc27.noarch 55/67 [openshift/dind-node] Installing : python2-pyroute2-0.4.15-2.fc27.noarch 56/67 [openshift/dind-node] Installing : PyYAML-3.12-5.fc27.x86_64 57/67 [openshift/dind-node] Installing : containernetworking-cni-0.6.0-2.fc27.x86_64 58/67 [openshift/dind-node] Installing : ethtool-2:4.15-1.fc27.x86_64 59/67 [openshift/dind-node] Installing : bridge-utils-1.6-1.fc27.x86_64 60/67 [openshift/dind-node] Installing : procps-ng-3.3.10-15.fc27.x86_64 61/67 [openshift/dind-node] Running scriptlet: procps-ng-3.3.10-15.fc27.x86_64 61/67 [openshift/dind-node] Installing : findutils-1:4.6.0-16.fc27.x86_64 62/67 [openshift/dind-node] Running scriptlet: findutils-1:4.6.0-16.fc27.x86_64 62/67 [openshift/dind-node] Installing : iptables-services-1.6.1-4.fc27.x86_64 63/67 [openshift/dind-node] Running scriptlet: iptables-services-1.6.1-4.fc27.x86_64 63/67 [openshift/dind-node] Installing : which-2.21-4.fc27.x86_64 64/67 [openshift/dind-node] Running scriptlet: which-2.21-4.fc27.x86_64 64/67 [openshift/dind-node] install-info: No such file or directory for /usr/share/info/which.info.gz [openshift/dind-node] Installing : 
less-487-5.fc27.x86_64 65/67 [openshift/dind-node] Installing : iputils-20161105-7.fc27.x86_64 66/67 [openshift/dind-node] Running scriptlet: iputils-20161105-7.fc27.x86_64 66/67 [openshift/dind-node] Installing : hostname-3.18-4.fc27.x86_64 67/67 [openshift/dind-node] Running scriptlet: GeoIP-GeoLite-data-2018.01-1.fc27.noarch 67/67 [openshift/dind-node] Running scriptlet: guile-5:2.0.14-3.fc27.x86_64 67/67 [openshift/dind-node] Running scriptlet: hostname-3.18-4.fc27.x86_64 67/67Failed to connect to bus: No such file or directory [openshift/dind-node] [openshift/dind-node] Verifying : hostname-3.18-4.fc27.x86_64 1/67 [openshift/dind-node] Verifying : iputils-20161105-7.fc27.x86_64 2/67 [openshift/dind-node] Verifying : less-487-5.fc27.x86_64 3/67 [openshift/dind-node] Verifying : which-2.21-4.fc27.x86_64 4/67 [openshift/dind-node] Verifying : iptables-services-1.6.1-4.fc27.x86_64 5/67 [openshift/dind-node] Verifying : conntrack-tools-1.4.4-5.fc27.x86_64 6/67 [openshift/dind-node] Verifying : openvswitch-2.8.1-1.fc27.x86_64 7/67 [openshift/dind-node] Verifying : openvswitch-ovn-central-2.8.1-1.fc27.x86_64 8/67 [openshift/dind-node] Verifying : openvswitch-ovn-common-2.8.1-1.fc27.x86_64 9/67 [openshift/dind-node] Verifying : openvswitch-ovn-docker-2.8.1-1.fc27.x86_64 10/67 [openshift/dind-node] Verifying : openvswitch-ovn-host-2.8.1-1.fc27.x86_64 11/67 [openshift/dind-node] Verifying : openvswitch-ovn-vtep-2.8.1-1.fc27.x86_64 12/67 [openshift/dind-node] Verifying : python-netaddr-0.7.19-3.fc27.noarch 13/67 [openshift/dind-node] Verifying : python2-pyroute2-0.4.15-2.fc27.noarch 14/67 [openshift/dind-node] Verifying : python2-requests-2.18.4-1.fc27.noarch 15/67 [openshift/dind-node] Verifying : PyYAML-3.12-5.fc27.x86_64 16/67 [openshift/dind-node] Verifying : libnetfilter_cthelper-1.0.0-12.fc27.x86_64 17/67 [openshift/dind-node] Verifying : libnetfilter_cttimeout-1.0.0-10.fc27.x86_64 18/67 [openshift/dind-node] Verifying : libnetfilter_queue-1.0.2-10.fc27.x86_64 19/67 [openshift/dind-node] Verifying : numactl-libs-2.0.11-5.fc27.x86_64 20/67 [openshift/dind-node] Verifying : firewalld-filesystem-0.4.4.5-3.fc27.noarch 21/67 [openshift/dind-node] Verifying : python2-openvswitch-2.8.1-1.fc27.noarch 22/67 [openshift/dind-node] Verifying : python-chardet-3.0.4-2.fc27.noarch 23/67 [openshift/dind-node] Verifying : python2-idna-2.5-2.fc27.noarch 24/67 [openshift/dind-node] Verifying : python2-six-1.11.0-1.fc27.noarch 25/67 [openshift/dind-node] Verifying : bind-utils-32:9.11.3-2.fc27.x86_64 26/67 [openshift/dind-node] Verifying : bind-libs-32:9.11.3-2.fc27.x86_64 27/67 [openshift/dind-node] Verifying : bind-libs-lite-32:9.11.3-2.fc27.x86_64 28/67 [openshift/dind-node] Verifying : python3-bind-32:9.11.3-2.fc27.noarch 29/67 [openshift/dind-node] Verifying : GeoIP-1.6.11-3.fc27.x86_64 30/67 [openshift/dind-node] Verifying : bind-license-32:9.11.3-2.fc27.noarch 31/67 [openshift/dind-node] Verifying : python3-ply-3.9-4.fc27.noarch 32/67 [openshift/dind-node] Verifying : findutils-1:4.6.0-16.fc27.x86_64 33/67 [openshift/dind-node] Verifying : iproute-4.15.0-1.fc27.x86_64 34/67 [openshift/dind-node] Verifying : procps-ng-3.3.10-15.fc27.x86_64 35/67 [openshift/dind-node] Verifying : bridge-utils-1.6-1.fc27.x86_64 36/67 [openshift/dind-node] Verifying : ethtool-2:4.15-1.fc27.x86_64 37/67 [openshift/dind-node] Verifying : cri-o-2:1.9.10-1.git8723732.fc27.x86_64 38/67 [openshift/dind-node] Verifying : conmon-2:1.9.10-1.git8723732.fc27.x86_64 39/67 [openshift/dind-node] Verifying : 
runc-2:1.0.0-19.rc5.git4bb1fe4.fc27.x86_64 40/67 [openshift/dind-node] Verifying : socat-1.7.3.2-4.fc27.x86_64 41/67 [openshift/dind-node] Verifying : openssl-1:1.1.0h-1.fc27.x86_64 42/67 [openshift/dind-node] Verifying : make-1:4.2.1-4.fc27.x86_64 43/67 [openshift/dind-node] Verifying : gc-7.6.0-7.fc27.x86_64 44/67 [openshift/dind-node] Verifying : guile-5:2.0.14-3.fc27.x86_64 45/67 [openshift/dind-node] Verifying : libatomic_ops-7.4.6-3.fc27.x86_64 46/67 [openshift/dind-node] Verifying : libtool-ltdl-2.4.6-20.fc27.x86_64 47/67 [openshift/dind-node] Verifying : python2-2.7.14-10.fc27.x86_64 48/67 [openshift/dind-node] Verifying : python2-libs-2.7.14-10.fc27.x86_64 49/67 [openshift/dind-node] Verifying : python2-urllib3-1.22-3.fc27.noarch 50/67 [openshift/dind-node] Verifying : python-backports-ssl_match_hostname-3.5.0.1-5.fc27 51/67 [openshift/dind-node] Verifying : python-ipaddress-1.0.18-2.fc27.noarch 52/67 [openshift/dind-node] Verifying : python2-pyOpenSSL-17.2.0-2.fc27.noarch 53/67 [openshift/dind-node] Verifying : python2-pysocks-1.6.7-1.fc27.noarch 54/67 [openshift/dind-node] Verifying : python2-cryptography-2.0.2-3.fc27.x86_64 55/67 [openshift/dind-node] Verifying : python2-cffi-1.10.0-3.fc27.x86_64 56/67 [openshift/dind-node] Verifying : python2-pycparser-2.14-11.fc27.noarch 57/67 [openshift/dind-node] Verifying : python2-ply-3.9-4.fc27.noarch 58/67 [openshift/dind-node] Verifying : python2-backports-1.0-12.fc27.x86_64 59/67 [openshift/dind-node] Verifying : GeoIP-GeoLite-data-2018.01-1.fc27.noarch 60/67 [openshift/dind-node] Verifying : python2-asn1crypto-0.23.0-1.fc27.noarch 61/67 [openshift/dind-node] Verifying : python2-enum34-1.1.6-3.fc27.noarch 62/67 [openshift/dind-node] Verifying : python2-pip-9.0.1-14.fc27.noarch 63/67 [openshift/dind-node] Verifying : python2-setuptools-37.0.0-1.fc27.noarch 64/67 [openshift/dind-node] Verifying : iproute-tc-4.15.0-1.fc27.x86_64 65/67 [openshift/dind-node] Verifying : linux-atm-libs-2.5.1-19.fc27.x86_64 66/67 [openshift/dind-node] Verifying : containernetworking-cni-0.6.0-2.fc27.x86_64 67/67 [openshift/dind-node] Installed: [openshift/dind-node] PyYAML.x86_64 3.12-5.fc27 [openshift/dind-node] bind-utils.x86_64 32:9.11.3-2.fc27 [openshift/dind-node] bridge-utils.x86_64 1.6-1.fc27 [openshift/dind-node] conntrack-tools.x86_64 1.4.4-5.fc27 [openshift/dind-node] cri-o.x86_64 2:1.9.10-1.git8723732.fc27 [openshift/dind-node] ethtool.x86_64 2:4.15-1.fc27 [openshift/dind-node] findutils.x86_64 1:4.6.0-16.fc27 [openshift/dind-node] hostname.x86_64 3.18-4.fc27 [openshift/dind-node] iproute.x86_64 4.15.0-1.fc27 [openshift/dind-node] iptables-services.x86_64 1.6.1-4.fc27 [openshift/dind-node] iputils.x86_64 20161105-7.fc27 [openshift/dind-node] less.x86_64 487-5.fc27 [openshift/dind-node] openvswitch.x86_64 2.8.1-1.fc27 [openshift/dind-node] openvswitch-ovn-central.x86_64 2.8.1-1.fc27 [openshift/dind-node] openvswitch-ovn-common.x86_64 2.8.1-1.fc27 [openshift/dind-node] openvswitch-ovn-docker.x86_64 2.8.1-1.fc27 [openshift/dind-node] openvswitch-ovn-host.x86_64 2.8.1-1.fc27 [openshift/dind-node] openvswitch-ovn-vtep.x86_64 2.8.1-1.fc27 [openshift/dind-node] procps-ng.x86_64 3.3.10-15.fc27 [openshift/dind-node] python-netaddr.noarch 0.7.19-3.fc27 [openshift/dind-node] python2-pyroute2.noarch 0.4.15-2.fc27 [openshift/dind-node] python2-requests.noarch 2.18.4-1.fc27 [openshift/dind-node] which.x86_64 2.21-4.fc27 [openshift/dind-node] containernetworking-cni.x86_64 0.6.0-2.fc27 [openshift/dind-node] iproute-tc.x86_64 4.15.0-1.fc27 [openshift/dind-node] 
GeoIP.x86_64 1.6.11-3.fc27 [openshift/dind-node] GeoIP-GeoLite-data.noarch 2018.01-1.fc27 [openshift/dind-node] bind-libs.x86_64 32:9.11.3-2.fc27 [openshift/dind-node] bind-libs-lite.x86_64 32:9.11.3-2.fc27 [openshift/dind-node] bind-license.noarch 32:9.11.3-2.fc27 [openshift/dind-node] conmon.x86_64 2:1.9.10-1.git8723732.fc27 [openshift/dind-node] firewalld-filesystem.noarch 0.4.4.5-3.fc27 [openshift/dind-node] gc.x86_64 7.6.0-7.fc27 [openshift/dind-node] guile.x86_64 5:2.0.14-3.fc27 [openshift/dind-node] libatomic_ops.x86_64 7.4.6-3.fc27 [openshift/dind-node] libnetfilter_cthelper.x86_64 1.0.0-12.fc27 [openshift/dind-node] libnetfilter_cttimeout.x86_64 1.0.0-10.fc27 [openshift/dind-node] libnetfilter_queue.x86_64 1.0.2-10.fc27 [openshift/dind-node] libtool-ltdl.x86_64 2.4.6-20.fc27 [openshift/dind-node] linux-atm-libs.x86_64 2.5.1-19.fc27 [openshift/dind-node] make.x86_64 1:4.2.1-4.fc27 [openshift/dind-node] numactl-libs.x86_64 2.0.11-5.fc27 [openshift/dind-node] openssl.x86_64 1:1.1.0h-1.fc27 [openshift/dind-node] python-backports-ssl_match_hostname.noarch 3.5.0.1-5.fc27 [openshift/dind-node] python-chardet.noarch 3.0.4-2.fc27 [openshift/dind-node] python-ipaddress.noarch 1.0.18-2.fc27 [openshift/dind-node] python2.x86_64 2.7.14-10.fc27 [openshift/dind-node] python2-asn1crypto.noarch 0.23.0-1.fc27 [openshift/dind-node] python2-backports.x86_64 1.0-12.fc27 [openshift/dind-node] python2-cffi.x86_64 1.10.0-3.fc27 [openshift/dind-node] python2-cryptography.x86_64 2.0.2-3.fc27 [openshift/dind-node] python2-enum34.noarch 1.1.6-3.fc27 [openshift/dind-node] python2-idna.noarch 2.5-2.fc27 [openshift/dind-node] python2-libs.x86_64 2.7.14-10.fc27 [openshift/dind-node] python2-openvswitch.noarch 2.8.1-1.fc27 [openshift/dind-node] python2-pip.noarch 9.0.1-14.fc27 [openshift/dind-node] python2-ply.noarch 3.9-4.fc27 [openshift/dind-node] python2-pyOpenSSL.noarch 17.2.0-2.fc27 [openshift/dind-node] python2-pycparser.noarch 2.14-11.fc27 [openshift/dind-node] python2-pysocks.noarch 1.6.7-1.fc27 [openshift/dind-node] python2-setuptools.noarch 37.0.0-1.fc27 [openshift/dind-node] python2-six.noarch 1.11.0-1.fc27 [openshift/dind-node] python2-urllib3.noarch 1.22-3.fc27 [openshift/dind-node] python3-bind.noarch 32:9.11.3-2.fc27 [openshift/dind-node] python3-ply.noarch 3.9-4.fc27 [openshift/dind-node] runc.x86_64 2:1.0.0-19.rc5.git4bb1fe4.fc27 [openshift/dind-node] socat.x86_64 1.7.3.2-4.fc27 [openshift/dind-node] Complete! [openshift/dind-node] --> RUN rm -f /etc/cni/net.d/* [openshift/dind-node] --> RUN systemctl enable iptables.service [openshift/dind-node] Created symlink /etc/systemd/system/basic.target.wants/iptables.service → /usr/lib/systemd/system/iptables.service. [openshift/dind-node] --> COPY iptables /etc/sysconfig/ [openshift/dind-node] --> COPY openshift-generate-node-config.sh /usr/local/bin/ [openshift/dind-node] --> COPY openshift-dind-lib.sh /usr/local/bin/ [openshift/dind-node] --> RUN systemctl enable openvswitch [openshift/dind-node] Created symlink /etc/systemd/system/multi-user.target.wants/openvswitch.service → /usr/lib/systemd/system/openvswitch.service. [openshift/dind-node] --> COPY openshift-node.service /etc/systemd/system/ [openshift/dind-node] --> RUN systemctl enable openshift-node.service [openshift/dind-node] Configuration file /etc/systemd/system/openshift-node.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. 
[openshift/dind-node] Created symlink /etc/systemd/system/multi-user.target.wants/openshift-node.service → /etc/systemd/system/openshift-node.service. [openshift/dind-node] --> RUN mkdir -p /var/lib/origin [openshift/dind-node] --> COPY openshift-enable-ssh-access.sh /usr/local/bin/ [openshift/dind-node] --> COPY openshift-enable-ssh-access.service /etc/systemd/system/ [openshift/dind-node] --> RUN systemctl enable openshift-enable-ssh-access.service [openshift/dind-node] Configuration file /etc/systemd/system/openshift-enable-ssh-access.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-node] Created symlink /etc/systemd/system/sshd.service.wants/openshift-enable-ssh-access.service → /etc/systemd/system/openshift-enable-ssh-access.service. [openshift/dind-node] --> RUN mkdir -p /etc/cni/net.d [openshift/dind-node] --> RUN mkdir -p /opt/cni/bin [openshift/dind-node] --> RUN ln -sf /data/openshift /usr/local/bin/ && ln -sf /data/oc /usr/local/bin/ && ln -sf /data/openshift /usr/local/bin/openshift-deploy && ln -sf /data/openshift /usr/local/bin/openshift-docker-build && ln -sf /data/openshift /usr/local/bin/openshift-sti-build && ln -sf /data/openshift /usr/local/bin/openshift-git-clone && ln -sf /data/openshift /usr/local/bin/openshift-manage-dockerfile && ln -sf /data/openshift /usr/local/bin/openshift-extract-image-content && ln -sf /data/openshift /usr/local/bin/openshift-f5-router && ln -sf /data/openshift-sdn /opt/cni/bin/ && ln -sf /data/host-local /opt/cni/bin/ && ln -sf /data/loopback /opt/cni/bin/ [openshift/dind-node] --> ENV KUBECONFIG /data/openshift.local.config/master/admin.kubeconfig [openshift/dind-node] --> COPY ovn-kubernetes-node-setup.service /etc/systemd/system/ [openshift/dind-node] --> COPY ovn-kubernetes-node-setup.sh /usr/local/bin/ [openshift/dind-node] --> RUN systemctl enable ovn-kubernetes-node-setup.service [openshift/dind-node] Configuration file /etc/systemd/system/ovn-kubernetes-node-setup.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-node] Created symlink /etc/systemd/system/openshift-node.service.wants/ovn-kubernetes-node-setup.service → /etc/systemd/system/ovn-kubernetes-node-setup.service. [openshift/dind-node] --> COPY ovn-kubernetes-node.service /etc/systemd/system/ [openshift/dind-node] --> COPY ovn-kubernetes-node.sh /usr/local/bin/ [openshift/dind-node] --> RUN systemctl enable ovn-kubernetes-node.service [openshift/dind-node] Configuration file /etc/systemd/system/ovn-kubernetes-node.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-node] Created symlink /etc/systemd/system/openshift-node.service.wants/ovn-kubernetes-node.service → /etc/systemd/system/ovn-kubernetes-node.service. [openshift/dind-node] Created symlink /etc/systemd/system/ovn-kubernetes-node-setup.service.wants/ovn-kubernetes-node.service → /etc/systemd/system/ovn-kubernetes-node.service. [openshift/dind-node] --> COPY crio-node.sh /usr/local/bin/ [openshift/dind-node] --> COPY crio-node.service /etc/systemd/system/ [openshift/dind-node] --> RUN systemctl enable crio-node.service [openshift/dind-node] Configuration file /etc/systemd/system/crio-node.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. 
[openshift/dind-node] Created symlink /etc/systemd/system/openshift-node.service.wants/crio-node.service → /etc/systemd/system/crio-node.service. [openshift/dind-node] --> RUN echo "CRIO_STORAGE_OPTIONS=--storage-driver vfs" > /etc/sysconfig/crio-storage [openshift/dind-node] --> Committing changes to openshift/dind-node:4253ab3 ... [openshift/dind-node] --> Tagged as openshift/dind-node:latest [openshift/dind-node] --> Done [openshift/dind-master] --> FROM openshift/dind-node as 0 [openshift/dind-master] --> RUN systemctl disable iptables.service [openshift/dind-master] Removed /etc/systemd/system/basic.target.wants/iptables.service. [openshift/dind-master] --> COPY openshift-generate-master-config.sh /usr/local/bin/ [openshift/dind-master] --> COPY openshift-disable-master-node.sh /usr/local/bin/ [openshift/dind-master] --> COPY openshift-disable-master-node.service /etc/systemd/system/ [openshift/dind-master] --> RUN systemctl enable openshift-disable-master-node.service [openshift/dind-master] Configuration file /etc/systemd/system/openshift-disable-master-node.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-master] Created symlink /etc/systemd/system/openshift-node.service.wants/openshift-disable-master-node.service → /etc/systemd/system/openshift-disable-master-node.service. [openshift/dind-master] --> COPY openshift-get-hosts.sh /usr/local/bin/ [openshift/dind-master] --> COPY openshift-add-to-hosts.sh /usr/local/bin/ [openshift/dind-master] --> COPY openshift-remove-from-hosts.sh /usr/local/bin/ [openshift/dind-master] --> COPY openshift-sync-etc-hosts.service /etc/systemd/system/ [openshift/dind-master] --> RUN systemctl enable openshift-sync-etc-hosts.service [openshift/dind-master] Configuration file /etc/systemd/system/openshift-sync-etc-hosts.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-master] Created symlink /etc/systemd/system/openshift-master.service.wants/openshift-sync-etc-hosts.service → /etc/systemd/system/openshift-sync-etc-hosts.service. [openshift/dind-master] --> COPY openshift-master.service /etc/systemd/system/ [openshift/dind-master] --> RUN systemctl enable openshift-master.service [openshift/dind-master] Configuration file /etc/systemd/system/openshift-master.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-master] Created symlink /etc/systemd/system/multi-user.target.wants/openshift-master.service → /etc/systemd/system/openshift-master.service. [openshift/dind-master] --> RUN mkdir -p /etc/systemd/system/openshift-node.service.d [openshift/dind-master] --> COPY master-node.conf /etc/systemd/system/openshift-node.service.d/ [openshift/dind-master] --> COPY ovn-kubernetes-master-setup.service /etc/systemd/system/ [openshift/dind-master] --> COPY ovn-kubernetes-master-setup.sh /usr/local/bin/ [openshift/dind-master] --> RUN systemctl enable ovn-kubernetes-master-setup.service [openshift/dind-master] Configuration file /etc/systemd/system/ovn-kubernetes-master-setup.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-master] Created symlink /etc/systemd/system/openshift-master.service.wants/ovn-kubernetes-master-setup.service → /etc/systemd/system/ovn-kubernetes-master-setup.service. 
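systemd repeatedly warns above that the unit files copied into these images are world-writable before it enables them, and then proceeds anyway. A minimal way to avoid the warning, offered here as an assumption about how one might address it rather than something this build does, is to tighten the mode of each copied unit before enabling it:

    # Hypothetical cleanup for the "marked world-writable" warnings above:
    # drop the group/other write bits on the copied unit files, then enable them.
    chmod 0644 /etc/systemd/system/openshift-node.service \
               /etc/systemd/system/ovn-kubernetes-node.service \
               /etc/systemd/system/crio-node.service
    systemctl enable openshift-node.service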
[openshift/dind-master] --> COPY ovn-kubernetes-master.service /etc/systemd/system/ [openshift/dind-master] --> COPY ovn-kubernetes-master.sh /usr/local/bin/ [openshift/dind-master] --> RUN systemctl enable ovn-kubernetes-master.service [openshift/dind-master] Configuration file /etc/systemd/system/ovn-kubernetes-master.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway. [openshift/dind-master] Created symlink /etc/systemd/system/openshift-master.service.wants/ovn-kubernetes-master.service → /etc/systemd/system/ovn-kubernetes-master.service. [openshift/dind-master] Created symlink /etc/systemd/system/ovn-kubernetes-master-setup.service.wants/ovn-kubernetes-master.service → /etc/systemd/system/ovn-kubernetes-master.service. [openshift/dind-master] --> Committing changes to openshift/dind-master:4253ab3 ... [openshift/dind-master] --> Tagged as openshift/dind-master:latest [openshift/dind-master] --> Done [INFO] [19:59:33+0000] Temporarily disabling selinux enforcement [INFO] [19:59:33+0000] Targeting subnet plugin: redhat/openshift-ovs-subnet [INFO] [19:59:33+0000] Launching a docker-in-docker cluster for the subnet plugin Stopping dind cluster 'nettest' cat: /tmp/openshift/networking/subnet/dind-env: No such file or directory Starting dind cluster 'nettest' with plugin 'redhat/openshift-ovs-subnet' and runtime 'dockershim' Waiting for ok ........................................................................... Done Waiting for 3 nodes to report readiness ............... Done Before invoking the openshift cli, make sure to source the cluster's rc file to configure the bash environment: $ . dind-nettest.rc $ oc get nodes [INFO] [20:01:38+0000] Saving cluster configuration [INFO] [20:01:38+0000] Running networking e2e tests against the subnet plugin === RUN TestExtended I0405 20:01:39.164055 17925 test.go:94] Extended test version v3.10.0-alpha.0+4253ab3-549 Running Suite: Extended ======================= Random Seed: 1522958498 - Will randomize all specs Will run 45 of 440 specs Apr 5 20:01:39.260: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig I0405 20:01:39.260527 17925 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run. Apr 5 20:01:39.263: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Apr 5 20:01:39.275: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 5 20:01:39.280: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 5 20:01:39.280: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 5 20:01:39.281: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Apr 5 20:01:39.281: INFO: Dumping network health container logs from all nodes... Apr 5 20:01:39.283: INFO: e2e test version: v1.9.1+a0ce1bc657 Apr 5 20:01:39.284: INFO: kube-apiserver version: v1.9.1+a0ce1bc657 I0405 20:01:39.284105 17925 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run. 
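As the launcher notes above, the generated rc file has to be sourced before the openshift CLI will talk to the dind cluster. A typical interactive session against this 'nettest' cluster might look like the following; the second query is illustrative and not part of this run:

    # Source the environment written by the dind launcher, then query the cluster.
    . dind-nettest.rc
    oc get nodes                    # the launcher above waited for 3 nodes to report readiness
    oc get pods --all-namespaces    # illustrative follow-up, not taken from this log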
SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:01:39.284: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:01:39.348: INFO: About to run a Kube e2e test, ensuring namespace is privileged Apr 5 20:01:39.381: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7jd8d STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:01:39.383: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:02:11.436: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.129.0.2 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7jd8d PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:02:11.436: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:02:12.523: INFO: Found all expected endpoints: [netserver-0] Apr 5 20:02:12.525: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.128.0.2 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7jd8d PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:02:12.525: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:02:13.604: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:02:13.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-7jd8d" for this suite. 
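The UDP check in the spec above is driven from the framework's hostexec pod; the ExecWithOptions entries show the probe it runs against each netserver pod. Pulled out of the log for readability (the pod IP and port are specific to this run), the probe is:

    # Send the literal request 'hostName' over UDP to a netserver pod and keep any
    # non-empty reply; the reply is expected to be the serving pod's name.
    # (timeout -t is the BusyBox form used inside the hostexec image.)
    echo 'hostName' | timeout -t 2 nc -w 1 -u 10.129.0.2 8081 | grep -v '^\s*$'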
Apr 5 20:02:35.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:02:35.687: INFO: namespace: e2e-tests-pod-network-test-7jd8d, resource: bindings, ignored listing per whitelist Apr 5 20:02:35.719: INFO: namespace e2e-tests-pod-network-test-7jd8d deletion completed in 22.112074385s • [SLOW TEST:56.435 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SS ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:02:35.934: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:02:36.005: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 Apr 5 20:02:36.089: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:02:40.102: INFO: Target pod IP:port is 10.128.0.3:8080 Apr 5 20:02:40.102: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:02:40.102: INFO: Creating new exec pod Apr 5 20:02:44.114: INFO: Waiting up to 10s to wget 10.128.0.3:8080 Apr 5 20:02:44.115: INFO: Running 
'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-26jkh execpod-sourceip-nettest-node-157bbn -- /bin/sh -c wget -T 30 -qO- 10.128.0.3:8080' Apr 5 20:02:44.398: INFO: stderr: "" Apr 5 20:02:44.398: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:02:44.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-sg6hl" for this suite. Apr 5 20:03:02.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:03:02.484: INFO: namespace: e2e-tests-net-isolation1-sg6hl, resource: bindings, ignored listing per whitelist Apr 5 20:03:02.519: INFO: namespace e2e-tests-net-isolation1-sg6hl deletion completed in 18.10973177s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:03:02.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-26jkh" for this suite. Apr 5 20:03:08.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:03:08.585: INFO: namespace: e2e-tests-net-isolation2-26jkh, resource: bindings, ignored listing per whitelist Apr 5 20:03:08.630: INFO: namespace e2e-tests-net-isolation2-26jkh deletion completed in 6.108738573s • [SLOW TEST:32.911 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 ------------------------------ S ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:03:08.630: INFO: This plugin does not isolate namespaces by default. 
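The isolation check above boils down to a single kubectl exec from a throwaway exec pod in one namespace to a pod IP in another. Reformatted across lines for readability (all names and IPs are the ones from this run), the command in the log is:

    # Fetch the target pod's HTTP endpoint directly by pod IP from the exec pod.
    kubectl --server=https://172.17.0.2:8443 \
        --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig \
        exec --namespace=e2e-tests-net-isolation2-26jkh execpod-sourceip-nettest-node-157bbn -- \
        /bin/sh -c 'wget -T 30 -qO- 10.128.0.3:8080'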
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:03:08.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:03:08.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 Apr 5 20:03:08.630: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:03:08.631: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:03:08.659: INFO: configPath is now "/tmp/extended-test-router-metrics-k2s9c-wlzjc-user.kubeconfig" Apr 5 20:03:08.659: INFO: The user is now "extended-test-router-metrics-k2s9c-wlzjc-user" Apr 5 20:03:08.659: INFO: Creating project "extended-test-router-metrics-k2s9c-wlzjc" Apr 5 20:03:08.732: INFO: Waiting on permissions in project "extended-test-router-metrics-k2s9c-wlzjc" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:03:08.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-k2s9c-wlzjc" for this suite. 
Apr 5 20:03:14.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:03:14.855: INFO: namespace: extended-test-router-metrics-k2s9c-wlzjc, resource: bindings, ignored listing per whitelist Apr 5 20:03:14.855: INFO: namespace extended-test-router-metrics-k2s9c-wlzjc deletion completed in 6.111281214s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.224 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:03:14.855: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:03:14.905: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dfb9h STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:03:14.942: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:03:34.999: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.129.0.5:8080/dial?request=hostName&protocol=http&host=10.129.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dfb9h PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:03:34.999: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:03:35.081: INFO: Waiting for endpoints: map[] Apr 5 20:03:35.084: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.129.0.5:8080/dial?request=hostName&protocol=http&host=10.128.0.5&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dfb9h PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:03:35.084: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:03:35.167: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:03:35.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-dfb9h" for this suite. Apr 5 20:03:57.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:03:57.251: INFO: namespace: e2e-tests-pod-network-test-dfb9h, resource: bindings, ignored listing per whitelist Apr 5 20:03:57.282: INFO: namespace e2e-tests-pod-network-test-dfb9h deletion completed in 22.111910702s • [SLOW TEST:42.426 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:03:57.282: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:03:57.300: INFO: configPath is now "/tmp/extended-test-weighted-router-fgvll-2wmff-user.kubeconfig" Apr 5 20:03:57.300: INFO: The user is now "extended-test-weighted-router-fgvll-2wmff-user" Apr 5 20:03:57.300: INFO: Creating project "extended-test-weighted-router-fgvll-2wmff" Apr 5 20:03:57.359: INFO: Waiting on permissions in project "extended-test-weighted-router-fgvll-2wmff" ... 
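For reference, the intra-pod HTTP spec that completed just above does not contact the target pod directly: the hostexec pod asks one test webserver to dial another and report back. The probe from its ExecWithOptions entries, with this run's pod IPs, is:

    # Ask the webserver at 10.129.0.5 to make one HTTP request for 'hostName'
    # to 10.129.0.4:8080 and return what it received.
    curl -g -q -s 'http://10.129.0.5:8080/dial?request=hostName&protocol=http&host=10.129.0.4&port=8080&tries=1'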
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:29 Apr 5 20:03:57.375: INFO: Running 'oc new-app --config=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-weighted-router-fgvll-2wmff -f /tmp/fixture-testdata-dir522601851/test/extended/testdata/weighted-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-weighted-router-fgvll-2wmff/" for "/tmp/fixture-testdata-dir522601851/test/extended/testdata/weighted-router.yaml" to project extended-test-weighted-router-fgvll-2wmff * With parameters: * IMAGE=openshift/origin-haproxy-router --> Creating resources ... pod "weighted-router" created rolebinding "system-router" created route "weightedroute" created route "zeroweightroute" created service "weightedendpoints1" created service "weightedendpoints2" created pod "endpoint-1" created pod "endpoint-2" created pod "endpoint-3" created --> Success Access your application via route 'weighted.example.com' Access your application via route 'zeroweight.example.com' Run 'oc status' to view your app. [It] should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 Apr 5 20:03:57.745: INFO: Creating new exec pod STEP: creating a weighted router from a config file "/tmp/fixture-testdata-dir522601851/test/extended/testdata/weighted-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:04:36.765: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fgvll-2wmff execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.129.0.6' "http://10.129.0.6:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:04:37.053: INFO: stderr: "" STEP: checking that 100 requests go through successfully Apr 5 20:04:37.053: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fgvll-2wmff execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.129.0.6" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:04:37.348: INFO: stderr: "" Apr 5 20:04:37.348: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fgvll-2wmff execpod -- /bin/sh -c set -e for i in $(seq 1 100); do code=$( curl -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.129.0.6" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -ne 200 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi done ' Apr 5 20:04:38.064: INFO: stderr: "" STEP: checking that there are three weighted backends in the router stats Apr 5 20:04:38.064: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fgvll-2wmff execpod -- /bin/sh -c curl -s -u admin:password --header 'Host: weighted.example.com' "http://10.129.0.6:1936/;csv"' Apr 5 20:04:38.352: INFO: stderr: "" STEP: checking that zero weights are also respected by the router Apr 5 20:04:38.352: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fgvll-2wmff execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: zeroweight.example.com' "http://10.129.0.6"' Apr 5 20:04:38.636: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:04:38.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-weighted-router-fgvll-2wmff" for this suite. 
Apr 5 20:04:48.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:04:48.689: INFO: namespace: extended-test-weighted-router-fgvll-2wmff, resource: bindings, ignored listing per whitelist Apr 5 20:04:48.754: INFO: namespace extended-test-weighted-router-fgvll-2wmff deletion completed in 10.109803233s • [SLOW TEST:51.473 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:22 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:38 should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:04:48.755: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:04:48.779: INFO: configPath is now "/tmp/extended-test-scoped-router-l822r-nmzhs-user.kubeconfig" Apr 5 20:04:48.779: INFO: The user is now "extended-test-scoped-router-l822r-nmzhs-user" Apr 5 20:04:48.779: INFO: Creating project "extended-test-scoped-router-l822r-nmzhs" Apr 5 20:04:48.830: INFO: Waiting on permissions in project "extended-test-scoped-router-l822r-nmzhs" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:04:48.884: INFO: Running 'oc new-app --config=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-l822r-nmzhs -f /tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-l822r-nmzhs/" for "/tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-l822r-nmzhs * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... 
pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 Apr 5 20:04:49.195: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:05:27.212: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-nmzhs execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.8' "http://10.128.0.8:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:05:27.502: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:05:27.502: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-nmzhs execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: FIRST.example.com' "http://10.128.0.8/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:05:27.795: INFO: stderr: "" STEP: checking that second.example.com does not match a route Apr 5 20:05:27.795: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-nmzhs execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: second.example.com' "http://10.128.0.8/Letter"' Apr 5 20:05:28.079: INFO: stderr: "" STEP: checking that third.example.com does not match a route Apr 5 20:05:28.079: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-nmzhs execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: third.example.com' "http://10.128.0.8/Letter"' Apr 5 20:05:28.406: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:28.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-l822r-nmzhs" for this suite. Apr 5 20:05:38.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:05:38.497: INFO: namespace: extended-test-scoped-router-l822r-nmzhs, resource: bindings, ignored listing per whitelist Apr 5 20:05:38.526: INFO: namespace extended-test-scoped-router-l822r-nmzhs deletion completed in 10.111490956s • [SLOW TEST:49.771 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 ------------------------------ SSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:05:38.526: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:05:38.544: INFO: configPath is now 
"/tmp/extended-test-router-metrics-k2s9c-q996s-user.kubeconfig" Apr 5 20:05:38.544: INFO: The user is now "extended-test-router-metrics-k2s9c-q996s-user" Apr 5 20:05:38.544: INFO: Creating project "extended-test-router-metrics-k2s9c-q996s" Apr 5 20:05:38.593: INFO: Waiting on permissions in project "extended-test-router-metrics-k2s9c-q996s" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:38.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-k2s9c-q996s" for this suite. Apr 5 20:05:44.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:05:44.741: INFO: namespace: extended-test-router-metrics-k2s9c-q996s, resource: bindings, ignored listing per whitelist Apr 5 20:05:44.752: INFO: namespace extended-test-router-metrics-k2s9c-q996s deletion completed in 6.109817209s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.226 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:05:44.752: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 Apr 5 20:05:44.752: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:05:44.754: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53 Apr 5 20:05:44.754: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:05:44.756: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132 Apr 5 20:05:44.756: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:05:44.757: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:05:44.775: INFO: configPath is now "/tmp/extended-test-router-stress-bmd9q-t9rtb-user.kubeconfig" Apr 5 20:05:44.775: INFO: The user is now "extended-test-router-stress-bmd9q-t9rtb-user" Apr 5 20:05:44.775: INFO: Creating project "extended-test-router-stress-bmd9q-t9rtb" Apr 5 20:05:44.814: INFO: Waiting on permissions in project "extended-test-router-stress-bmd9q-t9rtb" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:44.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-bmd9q-t9rtb" for this suite. 
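Aside: both the scoped-router checks earlier in this log and the skipped "respond with 503 to unrecognized hosts" case rely on the same probe pattern: curl the router pod IP directly and select the route purely via a spoofed Host header, then inspect the status code. A minimal sketch under that assumption (ROUTER_IP and ROUTE_PATH are placeholders for the values seen in this run):

#!/bin/bash
# Host-header route probe sketch against a router reachable at ROUTER_IP.
ROUTER_IP="${ROUTER_IP:-10.128.0.8}"
ROUTE_PATH="${ROUTE_PATH:-/Letter}"

for host in FIRST.example.com second.example.com third.example.com; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --header "Host: ${host}" "http://${ROUTER_IP}${ROUTE_PATH}")
  # In this suite only the first host is admitted by the scoped router; the
  # others are expected to fall through to the router's default 503 response.
  echo "${host} -> ${code}"
done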
Apr 5 20:05:50.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:05:50.976: INFO: namespace: extended-test-router-stress-bmd9q-t9rtb, resource: bindings, ignored listing per whitelist Apr 5 20:05:50.977: INFO: namespace extended-test-router-stress-bmd9q-t9rtb deletion completed in 6.112103757s S [SKIPPING] in Spec Setup (BeforeEach) [6.220 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ SS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:05:50.977: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:50.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:50.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 Apr 5 20:05:50.977: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ S ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:05:50.978: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:05:51.037: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 Apr 5 20:05:51.114: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:05:55.130: INFO: Target pod IP:port is 10.128.0.10:8080 Apr 5 20:05:55.130: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:05:55.130: INFO: Creating new exec pod Apr 5 20:05:59.141: INFO: Waiting up to 10s to wget 10.128.0.10:8080 Apr 5 20:05:59.141: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-b98hw execpod-sourceip-nettest-node-2r8fxt -- /bin/sh -c wget -T 30 -qO- 10.128.0.10:8080' Apr 5 20:05:59.433: INFO: stderr: "" Apr 5 20:05:59.433: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:05:59.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-dqdfp" for this suite. 
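Aside: the cross-node, cross-namespace check above boils down to one kubectl exec: from an exec pod in one namespace, wget the target pod's IP in another namespace. A sketch of that single step, with the generated names and IP from this run used only as placeholder defaults:

#!/bin/bash
# Cross-node pod-to-pod fetch via kubectl exec; with a flat (non-isolating) SDN
# plugin this should print the target pod's hostname, with an isolating plugin
# it would time out.
KUBECONFIG="${KUBECONFIG:?path to admin.kubeconfig}"
NAMESPACE="${NAMESPACE:-e2e-tests-net-isolation2-b98hw}"
EXEC_POD="${EXEC_POD:-execpod-sourceip-nettest-node-2r8fxt}"
TARGET="${TARGET:-10.128.0.10:8080}"

kubectl --kubeconfig="${KUBECONFIG}" exec --namespace="${NAMESPACE}" "${EXEC_POD}" -- \
  /bin/sh -c "wget -T 30 -qO- ${TARGET}"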
Apr 5 20:06:07.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:06:07.531: INFO: namespace: e2e-tests-net-isolation1-dqdfp, resource: bindings, ignored listing per whitelist Apr 5 20:06:07.560: INFO: namespace e2e-tests-net-isolation1-dqdfp deletion completed in 8.111063583s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:07.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-b98hw" for this suite. Apr 5 20:06:13.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:06:13.672: INFO: namespace: e2e-tests-net-isolation2-b98hw, resource: bindings, ignored listing per whitelist Apr 5 20:06:13.672: INFO: namespace e2e-tests-net-isolation2-b98hw deletion completed in 6.109725448s • [SLOW TEST:22.693 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 ------------------------------ SSSSSSSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:06:13.672: INFO: This plugin does not isolate namespaces by default. 
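Aside: the repeated "This plugin does not isolate namespaces by default." skips follow from which SDN plugin this job runs; the kubeconfig path (/tmp/openshift/networking/subnet/...) and the multicast test below point to redhat/openshift-ovs-subnet, which neither isolates namespaces nor implements NetworkPolicy. The suite determines this through its own helpers; as an assumption-laden sketch, one way to confirm it by hand on a 3.x cluster is to read the default ClusterNetwork object:

#!/bin/bash
# Sketch only: print the SDN plugin name from the default ClusterNetwork.
# Assumes the oc client and an admin kubeconfig; not part of the test suite.
KUBECONFIG="${KUBECONFIG:?path to admin.kubeconfig}"
oc --config="${KUBECONFIG}" get clusternetwork default -o jsonpath='{.pluginName}{"\n"}'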
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 Apr 5 20:06:13.672: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:06:13.674: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:06:13.732: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 Apr 5 20:06:13.828: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:06:17.839: INFO: Target pod IP:port is 10.128.0.11:8080 Apr 5 20:06:17.853: INFO: Endpoint e2e-tests-net-services1-4mb67/service-xwbgm is not ready yet Apr 5 20:06:22.857: INFO: Target service IP:port is 172.30.190.181:8080 Apr 5 20:06:22.857: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:06:22.857: INFO: Creating new exec pod Apr 5 20:06:26.868: INFO: Waiting up to 10s to wget 172.30.190.181:8080 Apr 5 20:06:26.868: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-69hqc execpod-sourceip-nettest-node-267bb8 -- /bin/sh -c wget -T 30 -qO- 172.30.190.181:8080' Apr 5 20:06:27.161: INFO: stderr: "" Apr 5 20:06:27.161: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:27.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-4mb67" for this suite. Apr 5 20:06:33.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:06:33.300: INFO: namespace: e2e-tests-net-services1-4mb67, resource: bindings, ignored listing per whitelist Apr 5 20:06:33.301: INFO: namespace e2e-tests-net-services1-4mb67 deletion completed in 6.115318575s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:33.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-69hqc" for this suite. 
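Aside: the service-IP variant above is the same exec-pod wget, but aimed at the test service's cluster IP (172.30.190.181:8080 in this run) instead of a pod IP. A sketch that first looks the cluster IP up, with the generated names from this run as placeholder defaults:

#!/bin/bash
# Fetch a service's cluster IP from another namespace via an exec pod; expected
# to succeed with the non-isolating subnet plugin.
KUBECONFIG="${KUBECONFIG:?path to admin.kubeconfig}"
SVC_NAMESPACE="${SVC_NAMESPACE:-e2e-tests-net-services1-4mb67}"
SERVICE="${SERVICE:-service-xwbgm}"
CLIENT_NAMESPACE="${CLIENT_NAMESPACE:-e2e-tests-net-services2-69hqc}"
EXEC_POD="${EXEC_POD:-execpod-sourceip-nettest-node-267bb8}"

SVC_IP=$(kubectl --kubeconfig="${KUBECONFIG}" get svc "${SERVICE}" -n "${SVC_NAMESPACE}" -o jsonpath='{.spec.clusterIP}')
SVC_PORT=$(kubectl --kubeconfig="${KUBECONFIG}" get svc "${SERVICE}" -n "${SVC_NAMESPACE}" -o jsonpath='{.spec.ports[0].port}')

kubectl --kubeconfig="${KUBECONFIG}" exec -n "${CLIENT_NAMESPACE}" "${EXEC_POD}" -- \
  /bin/sh -c "wget -T 30 -qO- ${SVC_IP}:${SVC_PORT}"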
Apr 5 20:06:39.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:06:39.382: INFO: namespace: e2e-tests-net-services2-69hqc, resource: bindings, ignored listing per whitelist Apr 5 20:06:39.417: INFO: namespace e2e-tests-net-services2-69hqc deletion completed in 6.113585301s • [SLOW TEST:25.743 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:06:39.417: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:39.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:06:39.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 Apr 5 20:06:39.417: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:06:39.418: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:06:39.564: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lh7cm STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:06:39.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:07:05.659: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.128.0.12:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lh7cm PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:07:05.659: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:07:05.746: INFO: Found all expected endpoints: [netserver-0] Apr 5 20:07:05.748: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.129.0.11:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lh7cm PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:07:05.748: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:07:05.836: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:07:05.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-lh7cm" for this suite. 
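Aside: the node-pod check above runs its curl from the host-test-container-pod, which uses host networking, so it exercises node-to-pod reachability. Lightly restated as a standalone script, with the endpoint IPs from this run as placeholder defaults:

#!/bin/bash
# Node-to-pod probe sketch: hit each netserver pod's /hostName endpoint from the
# host network and drop blank lines, as the test command above does.
POD_IPS=(${POD_IPS:-10.128.0.12 10.129.0.11})

for ip in "${POD_IPS[@]}"; do
  # "timeout -t 15" matches the busybox-style syntax used in the hostexec image;
  # with GNU coreutils the equivalent is "timeout 15".
  timeout -t 15 curl -g -q -s --connect-timeout 1 "http://${ip}:8080/hostName" | grep -v '^\s*$'
done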
Apr 5 20:07:27.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:07:27.891: INFO: namespace: e2e-tests-pod-network-test-lh7cm, resource: bindings, ignored listing per whitelist Apr 5 20:07:27.947: INFO: namespace e2e-tests-pod-network-test-lh7cm deletion completed in 22.109191315s • [SLOW TEST:48.529 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:07:27.948: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:07:27.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:07:27.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 Apr 5 20:07:27.948: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:07:27.949: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:07:27.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:07:27.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28 Apr 5 20:07:27.949: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-subnet' should block multicast traffic [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:07:27.951: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:07:27.969: INFO: configPath is now "/tmp/extended-test-multicast-kcdlw-8vsms-user.kubeconfig" Apr 5 20:07:27.969: INFO: The user is now "extended-test-multicast-kcdlw-8vsms-user" Apr 5 20:07:27.969: INFO: Creating project "extended-test-multicast-kcdlw-8vsms" Apr 5 20:07:28.027: INFO: Waiting on permissions in project "extended-test-multicast-kcdlw-8vsms" ... STEP: Waiting for a default service account to be provisioned in namespace [It] should block multicast traffic [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31 Apr 5 20:07:28.056: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:07:28.062: INFO: Waiting up to 5m0s for pod multicast-0 status to be running Apr 5 20:07:28.063: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-kcdlw-8vsms' status to be 'running'(found phase: "Pending", readiness: false) (1.435496ms elapsed) Apr 5 20:07:33.066: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-kcdlw-8vsms' status to be 'running'(found phase: "Pending", readiness: false) (5.00370272s elapsed) Apr 5 20:07:38.074: INFO: Waiting up to 5m0s for pod multicast-1 status to be running Apr 5 20:07:38.077: INFO: Waiting for pod multicast-1 in namespace 'extended-test-multicast-kcdlw-8vsms' status to be 'running'(found phase: "Pending", readiness: false) (2.299886ms elapsed) Apr 5 20:07:43.084: INFO: Waiting up to 5m0s for pod multicast-2 status to be running Apr 5 20:07:43.087: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-kcdlw-8vsms' status to be 'running'(found phase: "Pending", readiness: false) (2.621171ms elapsed) Apr 5 20:07:48.090: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-kcdlw-8vsms' status to be 'running'(found phase: "Pending", readiness: false) (5.005370583s elapsed) Apr 5 20:07:53.092: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-kcdlw-8vsms-user.kubeconfig --namespace=extended-test-multicast-kcdlw-8vsms multicast-2 -- omping -c 1 -T 60 -q -q 10.128.0.14 
10.128.0.15 10.129.0.12' Apr 5 20:07:53.092: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-kcdlw-8vsms-user.kubeconfig --namespace=extended-test-multicast-kcdlw-8vsms multicast-1 -- omping -c 1 -T 60 -q -q 10.128.0.14 10.128.0.15 10.129.0.12' Apr 5 20:07:53.092: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-kcdlw-8vsms-user.kubeconfig --namespace=extended-test-multicast-kcdlw-8vsms multicast-0 -- omping -c 1 -T 60 -q -q 10.128.0.14 10.128.0.15 10.129.0.12' [AfterEach] when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:08:00.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-multicast-kcdlw-8vsms" for this suite. Apr 5 20:08:06.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:08:06.594: INFO: namespace: extended-test-multicast-kcdlw-8vsms, resource: bindings, ignored listing per whitelist Apr 5 20:08:06.626: INFO: namespace extended-test-multicast-kcdlw-8vsms deletion completed in 6.116235887s • [SLOW TEST:38.675 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should block multicast traffic [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31 ------------------------------ SS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:08:06.626: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:08:06.646: INFO: configPath is now "/tmp/extended-test-router-metrics-k2s9c-rvjdr-user.kubeconfig" Apr 5 20:08:06.646: INFO: The user is now "extended-test-router-metrics-k2s9c-rvjdr-user" Apr 5 20:08:06.646: INFO: Creating project "extended-test-router-metrics-k2s9c-rvjdr" Apr 5 20:08:06.683: INFO: Waiting on permissions in project "extended-test-router-metrics-k2s9c-rvjdr" ... 
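Aside: the multicast check above launches omping concurrently from each multicast-N pod against all three pod IPs; with redhat/openshift-ovs-subnet the multicast replies are expected to be blocked. A sketch of that fan-out, with the generated namespace, kubeconfig, and IPs from this run used only as placeholder defaults:

#!/bin/bash
# Run omping from each test pod in parallel, mirroring the three oc exec
# commands above, then wait for all of them to finish.
KUBECONFIG="${KUBECONFIG:-/tmp/extended-test-multicast-kcdlw-8vsms-user.kubeconfig}"
NAMESPACE="${NAMESPACE:-extended-test-multicast-kcdlw-8vsms}"
IPS="10.128.0.14 10.128.0.15 10.129.0.12"

for pod in multicast-0 multicast-1 multicast-2; do
  oc exec --config="${KUBECONFIG}" --namespace="${NAMESPACE}" "${pod}" -- \
    omping -c 1 -T 60 -q -q ${IPS} &
done
wait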
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:08:06.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-k2s9c-rvjdr" for this suite. Apr 5 20:08:12.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:08:12.778: INFO: namespace: extended-test-router-metrics-k2s9c-rvjdr, resource: bindings, ignored listing per whitelist Apr 5 20:08:12.832: INFO: namespace extended-test-router-metrics-k2s9c-rvjdr deletion completed in 6.110666673s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.206 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSSSSSS ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:08:12.833: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 Apr 5 20:08:12.899: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:08:16.912: INFO: Target pod IP:port is 10.128.0.16:8080 Apr 5 20:08:16.927: INFO: Endpoint e2e-tests-net-services1-6wrmm/service-6v7z6 is not ready yet Apr 5 20:08:21.930: INFO: 
Target service IP:port is 172.30.236.159:8080 Apr 5 20:08:21.930: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:08:21.930: INFO: Creating new exec pod Apr 5 20:08:25.942: INFO: Waiting up to 10s to wget 172.30.236.159:8080 Apr 5 20:08:25.942: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services1-6wrmm execpod-sourceip-nettest-node-197xpt -- /bin/sh -c wget -T 30 -qO- 172.30.236.159:8080' Apr 5 20:08:26.237: INFO: stderr: "" Apr 5 20:08:26.237: INFO: Cleaning up the exec pod [AfterEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:08:26.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-6wrmm" for this suite. Apr 5 20:08:38.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:08:38.319: INFO: namespace: e2e-tests-net-services1-6wrmm, resource: bindings, ignored listing per whitelist Apr 5 20:08:38.381: INFO: namespace e2e-tests-net-services1-6wrmm deletion completed in 12.116246771s • [SLOW TEST:25.548 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 ------------------------------ SSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:08:38.381: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:08:38.400: INFO: configPath is now "/tmp/extended-test-router-stress-bmd9q-gk22g-user.kubeconfig" Apr 5 20:08:38.400: INFO: The user is now "extended-test-router-stress-bmd9q-gk22g-user" Apr 5 20:08:38.400: INFO: Creating project "extended-test-router-stress-bmd9q-gk22g" Apr 5 20:08:38.449: INFO: Waiting on permissions in project "extended-test-router-stress-bmd9q-gk22g" ... 
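[editor note] The services "same node via a service IP" spec that finished above reduces to a single probe: an exec pod scheduled onto the same node wgets the ClusterIP, and the spec passes if the backing webserver answers within the timeout. A minimal sketch of that probe with the names and addresses generated for this run (they change every run, and it assumes the test namespace has not been torn down yet):

KC=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig
kubectl --kubeconfig="$KC" -n e2e-tests-net-services1-6wrmm exec \
  execpod-sourceip-nettest-node-197xpt -- /bin/sh -c 'wget -T 30 -qO- 172.30.236.159:8080'
# Any HTTP body back proves pod -> service IP -> pod works on one node;
# a wget timeout is what would have failed the spec.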
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:08:38.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-bmd9q-gk22g" for this suite. Apr 5 20:08:44.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:08:44.567: INFO: namespace: extended-test-router-stress-bmd9q-gk22g, resource: bindings, ignored listing per whitelist Apr 5 20:08:44.586: INFO: namespace extended-test-router-stress-bmd9q-gk22g deletion completed in 6.111910942s S [SKIPPING] in Spec Setup (BeforeEach) [6.205 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:08:44.586: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:08:44.648: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nzfbq STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:08:44.685: INFO: Waiting up to 10m0s for all (but 
0) nodes to be schedulable STEP: Creating test pods Apr 5 20:09:06.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.19:8080/dial?request=hostName&protocol=udp&host=10.128.0.18&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-nzfbq PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:09:06.739: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:09:06.820: INFO: Waiting for endpoints: map[] Apr 5 20:09:06.822: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.19:8080/dial?request=hostName&protocol=udp&host=10.129.0.13&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-nzfbq PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:09:06.822: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig Apr 5 20:09:06.919: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:06.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-nzfbq" for this suite. Apr 5 20:09:28.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:09:29.027: INFO: namespace: e2e-tests-pod-network-test-nzfbq, resource: bindings, ignored listing per whitelist Apr 5 20:09:29.031: INFO: namespace e2e-tests-pod-network-test-nzfbq deletion completed in 22.109761042s • [SLOW TEST:44.445 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:09:29.031: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:09:29.051: INFO: configPath is now "/tmp/extended-test-router-stress-xnfv6-wfccm-user.kubeconfig" Apr 5 20:09:29.051: INFO: The user is now 
"extended-test-router-stress-xnfv6-wfccm-user" Apr 5 20:09:29.051: INFO: Creating project "extended-test-router-stress-xnfv6-wfccm" Apr 5 20:09:29.102: INFO: Waiting on permissions in project "extended-test-router-stress-xnfv6-wfccm" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:29.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-xnfv6-wfccm" for this suite. Apr 5 20:09:35.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:09:35.264: INFO: namespace: extended-test-router-stress-xnfv6-wfccm, resource: bindings, ignored listing per whitelist Apr 5 20:09:35.264: INFO: namespace extended-test-router-stress-xnfv6-wfccm deletion completed in 6.114784323s S [SKIPPING] in Spec Setup (BeforeEach) [6.232 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ SSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:09:35.264: INFO: This plugin does not implement NetworkPolicy. 
[AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:35.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86 Apr 5 20:09:35.264: This plugin does not implement NetworkPolicy. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:09:35.265: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:35.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:35.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52 Apr 5 20:09:35.265: This plugin does not isolate namespaces by default. 
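[editor note] Most of the S [SKIPPING] blocks in this run trace back to two environment checks rather than test failures: the configured SDN plugin (redhat/openshift-ovs-subnet neither implements NetworkPolicy nor isolates namespaces) and the absence of a router. Two quick ways to confirm both on a 3.x cluster; the clusternetwork field name and the dc/router-in-default convention are the stock 3.10 defaults, so adjust if the cluster was installed differently:

KC=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig
oc --config="$KC" get clusternetwork default -o jsonpath='{.pluginName}'   # expect redhat/openshift-ovs-subnet for this job
oc --config="$KC" get dc/router -n default                                 # NotFound matches "no router installed on the cluster"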
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:09:35.266: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:35.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 Apr 5 20:09:35.266: This plugin does not implement NetworkPolicy. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:09:35.267: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:09:35.286: INFO: configPath is now "/tmp/extended-test-router-reencrypt-qx9kp-srdqp-user.kubeconfig" Apr 5 20:09:35.286: INFO: The user is now "extended-test-router-reencrypt-qx9kp-srdqp-user" Apr 5 20:09:35.286: INFO: Creating project "extended-test-router-reencrypt-qx9kp-srdqp" Apr 5 20:09:35.332: INFO: Waiting on permissions in project "extended-test-router-reencrypt-qx9kp-srdqp" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:41 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:29 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:35.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-reencrypt-qx9kp-srdqp" for this suite. Apr 5 20:09:41.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:09:41.419: INFO: namespace: extended-test-router-reencrypt-qx9kp-srdqp, resource: bindings, ignored listing per whitelist Apr 5 20:09:41.477: INFO: namespace extended-test-router-reencrypt-qx9kp-srdqp deletion completed in 6.117273093s S [SKIPPING] in Spec Setup (BeforeEach) [6.210 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:18 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:52 should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:44 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:09:41.477: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:09:41.531: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 5 20:09:41.578: INFO: Waiting up to 5m0s for pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974" in namespace 
"e2e-tests-emptydir-bsln2" to be "success or failure" Apr 5 20:09:41.580: INFO: Pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199615ms Apr 5 20:09:43.583: INFO: Pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004549759s Apr 5 20:09:45.585: INFO: Pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006999941s Apr 5 20:09:47.587: INFO: Pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009395493s STEP: Saw pod success Apr 5 20:09:47.587: INFO: Pod "pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974" satisfied condition "success or failure" Apr 5 20:09:47.589: INFO: Trying to get logs from node nettest-node-1 pod pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974 container test-container: <nil> STEP: delete the pod Apr 5 20:09:47.615: INFO: Waiting for pod pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974 to disappear Apr 5 20:09:47.616: INFO: Pod pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:47.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bsln2" for this suite. Apr 5 20:09:53.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:09:53.691: INFO: namespace: e2e-tests-emptydir-bsln2, resource: bindings, ignored listing per whitelist Apr 5 20:09:53.727: INFO: namespace e2e-tests-emptydir-bsln2 deletion completed in 6.108979473s • [SLOW TEST:12.250 seconds] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSSSSSSSSSSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 Apr 5 20:09:53.727: INFO: Not using one of the specified plugins [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:53.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup 
(BeforeEach) [0.001 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 Apr 5 20:09:53.727: Not using one of the specified plugins /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:09:53.729: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:09:53.747: INFO: configPath is now "/tmp/extended-test-router-stress-xnfv6-z9s7v-user.kubeconfig" Apr 5 20:09:53.747: INFO: The user is now "extended-test-router-stress-xnfv6-z9s7v-user" Apr 5 20:09:53.747: INFO: Creating project "extended-test-router-stress-xnfv6-z9s7v" Apr 5 20:09:53.797: INFO: Waiting on permissions in project "extended-test-router-stress-xnfv6-z9s7v" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:09:53.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-xnfv6-z9s7v" for this suite. 
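[editor note] The EmptyDir (root,0644,tmpfs) case above, like the earlier pod-startup waits, is built on one pattern: poll the pod phase until it reaches the wanted state or a timeout expires ("Waiting up to 5m0s for pod ... to be success or failure"). The framework does this with a Go wait loop; the shell below is only an illustration of the same idea, reusing the names from this run:

NS=e2e-tests-emptydir-bsln2
POD=pod-468ba5c7-390d-11e8-83d6-0e3b9f19c974
for i in $(seq 1 60); do                      # 60 x 5s = the framework's 5m0s budget
  phase=$(oc get pod -n "$NS" "$POD" -o jsonpath='{.status.phase}')
  echo "attempt $i: phase=$phase"
  case "$phase" in
    Succeeded) exit 0 ;;                      # "success or failure": the test then reads the container log
    Failed)    exit 1 ;;
  esac
  sleep 5
done
exit 1                                        # timed out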
Apr 5 20:09:59.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:09:59.893: INFO: namespace: extended-test-router-stress-xnfv6-z9s7v, resource: bindings, ignored listing per whitelist Apr 5 20:09:59.940: INFO: namespace extended-test-router-stress-xnfv6-z9s7v deletion completed in 6.111951642s S [SKIPPING] in Spec Setup (BeforeEach) [6.211 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ SSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:09:59.940: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:09:59.958: INFO: configPath is now "/tmp/openshift-extended-tests/extended-test-scoped-router-l822r-8tzvf-user.kubeconfig" Apr 5 20:09:59.958: INFO: The user is now "extended-test-scoped-router-l822r-8tzvf-user" Apr 5 20:09:59.958: INFO: Creating project "extended-test-scoped-router-l822r-8tzvf" Apr 5 20:09:59.996: INFO: Waiting on permissions in project "extended-test-scoped-router-l822r-8tzvf" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:10:00.020: INFO: Running 'oc new-app --config=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-l822r-8tzvf -f /tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-l822r-8tzvf/" for "/tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-l822r-8tzvf * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... 
pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 Apr 5 20:10:00.358: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:10:05.376: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-8tzvf execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.22' "http://10.128.0.22:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:10:05.665: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:10:05.665: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-8tzvf execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-extended-test-scoped-router-l822r-8tzvf.myapps.mycompany.com' "http://10.128.0.22/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:10:09.977: INFO: stderr: "" STEP: checking that the stored domain name does not match a route Apr 5 20:10:09.977: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-8tzvf execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://10.128.0.22/Letter"' Apr 5 20:10:10.263: INFO: stderr: "" STEP: checking that route-1-extended-test-scoped-router-l822r-8tzvf.myapps.mycompany.com matches a route Apr 5 20:10:10.263: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-8tzvf execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-extended-test-scoped-router-l822r-8tzvf.myapps.mycompany.com' "http://10.128.0.22/Letter"' Apr 5 20:10:10.652: INFO: stderr: "" STEP: checking that route-2-extended-test-scoped-router-l822r-8tzvf.myapps.mycompany.com matches a route Apr 5 20:10:10.652: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-l822r-8tzvf execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-extended-test-scoped-router-l822r-8tzvf.myapps.mycompany.com' "http://10.128.0.22/Letter"' Apr 5 20:10:10.961: INFO: stderr: "" STEP: checking that the router reported the correct ingress and override [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:10.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-l822r-8tzvf" for this suite. 
Apr 5 20:10:20.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:10:21.015: INFO: namespace: extended-test-scoped-router-l822r-8tzvf, resource: bindings, ignored listing per whitelist Apr 5 20:10:21.090: INFO: namespace extended-test-scoped-router-l822r-8tzvf deletion completed in 10.116975894s • [SLOW TEST:21.150 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:10:21.090: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 Apr 5 20:10:21.090: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:10:21.091: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:10:21.110: INFO: configPath is now "/tmp/extended-test-unprivileged-router-d4chr-z8p5g-user.kubeconfig" Apr 5 20:10:21.110: INFO: The user is now "extended-test-unprivileged-router-d4chr-z8p5g-user" Apr 5 20:10:21.110: INFO: Creating project "extended-test-unprivileged-router-d4chr-z8p5g" Apr 5 20:10:21.168: INFO: Waiting on permissions in project "extended-test-unprivileged-router-d4chr-z8p5g" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:25 Apr 5 20:10:21.173: INFO: Running 'oc new-app --config=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-unprivileged-router-d4chr-z8p5g -f /tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml -p=IMAGE=openshift/origin-haproxy-router -p=SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]' warning: --param no longer accepts comma-separated lists of values. "SCOPE=[\"--name=test-unprivileged\", \"--namespace=$(POD_NAMESPACE)\", \"--loglevel=4\", \"--labels=select=first\", \"--update-status=false\"]" will be treated as a single key-value pair. --> Deploying template "extended-test-unprivileged-router-d4chr-z8p5g/" for "/tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml" to project extended-test-unprivileged-router-d4chr-z8p5g * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. 
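[editor note] The oc new-app warning above ("--param no longer accepts comma-separated lists of values") is benign here: the spec deliberately passes the whole JSON array as a single SCOPE value, and newer oc simply stops splitting on commas. When reproducing this by hand, quote the parameter so the shell also hands it over as one argument and leaves $(POD_NAMESPACE) unexpanded for the pod spec (fixture path and namespace are the per-run ones logged above):

oc new-app --namespace=extended-test-unprivileged-router-d4chr-z8p5g \
  -f /tmp/fixture-testdata-dir522601851/test/extended/testdata/scoped-router.yaml \
  -p IMAGE=openshift/origin-haproxy-router \
  -p 'SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]'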
[It] should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:21.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-unprivileged-router-d4chr-z8p5g" for this suite. Apr 5 20:10:29.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:10:29.623: INFO: namespace: extended-test-unprivileged-router-d4chr-z8p5g, resource: bindings, ignored listing per whitelist Apr 5 20:10:29.623: INFO: namespace extended-test-unprivileged-router-d4chr-z8p5g deletion completed in 8.114693396s S [SKIPPING] [8.531 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:18 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:37 should run even if it has no access to update status [Suite:openshift/conformance/parallel] [It] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 test temporarily disabled /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:44 ------------------------------ SSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:10:29.623: INFO: This plugin does not implement NetworkPolicy. 
[AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:29.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 Apr 5 20:10:29.623: This plugin does not implement NetworkPolicy. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ S ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:10:29.624: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:29.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:29.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 Apr 5 20:10:29.624: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSS ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:10:29.625: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 Apr 5 20:10:29.700: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:10:33.711: INFO: Target pod IP:port is 10.128.0.24:8080 Apr 5 20:10:33.724: INFO: Endpoint e2e-tests-net-services1-p4nv7/service-spnzc is not ready yet Apr 5 20:10:38.728: INFO: Target service IP:port is 172.30.94.32:8080 Apr 5 20:10:38.728: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:10:38.728: INFO: Creating new exec pod Apr 5 20:10:44.738: INFO: Waiting up to 10s to wget 172.30.94.32:8080 Apr 5 20:10:44.738: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services1-p4nv7 execpod-sourceip-nettest-node-2fjl4h -- /bin/sh -c wget -T 30 -qO- 172.30.94.32:8080' Apr 5 20:10:45.022: INFO: stderr: "" Apr 5 20:10:45.022: INFO: Cleaning up the exec pod [AfterEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:45.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-p4nv7" for this suite. 
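[editor note] The "different node via a service IP" case above is the same wget-through-the-ClusterIP probe as the same-node variant earlier; the extra thing worth verifying when debugging it by hand is that the webserver pod and the exec pod really landed on different nodes before the service IP (172.30.94.32:8080 in this run) is exercised:

oc get pods -n e2e-tests-net-services1-p4nv7 -o wide
# The NODE column should show nettest-node-1 for the webserver pod and
# nettest-node-2 for execpod-sourceip-nettest-node-2..., matching the log;
# the wget then crosses nodes via the service IP.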
Apr 5 20:10:51.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:10:51.152: INFO: namespace: e2e-tests-net-services1-p4nv7, resource: bindings, ignored listing per whitelist Apr 5 20:10:51.162: INFO: namespace e2e-tests-net-services1-p4nv7 deletion completed in 6.112134347s • [SLOW TEST:21.537 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:10:51.162: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177 Apr 5 20:10:51.162: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:10:51.164: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 Apr 5 20:10:51.164: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 Apr 5 20:10:51.165: INFO: Not using one of the specified plugins [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 Apr 5 20:10:51.165: Not using one of the specified plugins /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ S ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should prevent communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:32 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:10:51.166: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:10:51.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:32 Apr 5 20:10:51.166: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:10:51.168: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:10:51.220: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 Apr 5 20:10:51.306: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:10:55.321: INFO: Target pod IP:port is 10.128.0.25:8080 Apr 5 20:10:55.335: INFO: Endpoint e2e-tests-net-services1-ll4px/service-sz4mm is not ready yet Apr 5 20:11:00.339: INFO: Target service IP:port is 172.30.248.13:8080 Apr 5 20:11:00.339: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:11:00.339: INFO: Creating new exec pod Apr 5 20:11:04.350: INFO: Waiting up to 10s to wget 172.30.248.13:8080 Apr 5 20:11:04.350: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-g5hgc execpod-sourceip-nettest-node-1fmf8b -- /bin/sh -c wget -T 30 -qO- 172.30.248.13:8080' Apr 5 20:11:04.632: INFO: stderr: "" Apr 5 20:11:04.632: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:11:04.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-ll4px" for this suite. Apr 5 20:11:16.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:11:16.761: INFO: namespace: e2e-tests-net-services1-ll4px, resource: bindings, ignored listing per whitelist Apr 5 20:11:16.773: INFO: namespace e2e-tests-net-services1-ll4px deletion completed in 12.111618686s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:11:16.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-g5hgc" for this suite. 
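This spec is the same probe run across namespaces on one node: the target service lives in the first test namespace while the exec pod runs in the second, and with the non-isolating subnet plugin the service IP is reachable regardless of namespace. Sketched by hand, reusing the service from the previous sketch; the second project name is illustrative and the DNS form assumes the default cluster.local domain:

# cross-namespace access to a service, by cluster IP or by cluster DNS name
oc new-project net-check-2
oc run execpod2 --image=busybox --restart=Never -- sleep 3600
oc exec -n net-check-2 execpod2 -- wget -T 30 -qO- "${SVC_IP}:8080"
oc exec -n net-check-2 execpod2 -- wget -T 30 -qO- websvc.net-check.svc:8080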
Apr 5 20:11:22.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:11:22.868: INFO: namespace: e2e-tests-net-services2-g5hgc, resource: bindings, ignored listing per whitelist Apr 5 20:11:22.888: INFO: namespace e2e-tests-net-services2-g5hgc deletion completed in 6.112248119s • [SLOW TEST:31.720 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 ------------------------------ SSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:11:22.888: INFO: >>> kubeConfig: /tmp/openshift/networking/subnet/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:11:22.906: INFO: configPath is now "/tmp/extended-test-router-headers-s57dj-f58hn-user.kubeconfig" Apr 5 20:11:22.906: INFO: The user is now "extended-test-router-headers-s57dj-f58hn-user" Apr 5 20:11:22.906: INFO: Creating project "extended-test-router-headers-s57dj-f58hn" Apr 5 20:11:22.971: INFO: Waiting on permissions in project "extended-test-router-headers-s57dj-f58hn" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:30 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:11:22.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-headers-s57dj-f58hn" for this suite. 
Apr 5 20:11:28.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:11:29.096: INFO: namespace: extended-test-router-headers-s57dj-f58hn, resource: bindings, ignored listing per whitelist Apr 5 20:11:29.104: INFO: namespace extended-test-router-headers-s57dj-f58hn deletion completed in 6.12095917s S [SKIPPING] in Spec Setup (BeforeEach) [6.216 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:41 should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:42 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:33 ------------------------------ SSSSSSSApr 5 20:11:29.104: INFO: Running AfterSuite actions on all node Apr 5 20:11:29.104: INFO: Running AfterSuite actions on node 1 Ran 15 of 440 Specs in 589.844 seconds SUCCESS! -- 15 Passed | 0 Failed | 0 Pending | 425 Skipped Apr 5 20:11:29.108: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/networking/artifacts/junit Apr 5 20:11:29.108: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec cluster/log-dump/log-dump.sh: no such file or directory --- PASS: TestExtended (589.94s) PASS [INFO] [20:11:29+0000] Saving container logs [INFO] [20:11:30+0000] Shutting down docker-in-docker cluster for the subnet plugin Stopping dind cluster 'nettest' [INFO] [20:11:39+0000] Targeting networkpolicy plugin: redhat/openshift-ovs-networkpolicy [INFO] [20:11:39+0000] Launching a docker-in-docker cluster for the networkpolicy plugin Stopping dind cluster 'nettest' cat: /tmp/openshift/networking/networkpolicy/dind-env: No such file or directory Starting dind cluster 'nettest' with plugin 'redhat/openshift-ovs-networkpolicy' and runtime 'dockershim' Waiting for ok ........................................................................... Done Waiting for 3 nodes to report readiness ...................... Done Before invoking the openshift cli, make sure to source the cluster's rc file to configure the bash environment: $ . dind-nettest.rc $ oc get nodes [INFO] [20:13:51+0000] Saving cluster configuration [INFO] [20:13:51+0000] Running networking e2e tests against the networkpolicy plugin === RUN TestExtended I0405 20:13:51.621837 6947 test.go:94] Extended test version v3.10.0-alpha.0+4253ab3-549 Running Suite: Extended ======================= Random Seed: 1522959231 - Will randomize all specs Will run 45 of 440 specs I0405 20:13:51.712862 6947 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run. 
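At this point the subnet-plugin run has finished (15 passed, 425 skipped) and the harness relaunches the docker-in-docker cluster with redhat/openshift-ovs-networkpolicy. As the log itself notes, the cluster's rc file has to be sourced before using the CLI against it; a quick way to confirm the active SDN plugin, assuming the ClusterNetwork API of this 3.10-era tree:

. dind-nettest.rc
oc get nodes
oc get clusternetwork default -o jsonpath='{.pluginName}{"\n"}'   # expected here: redhat/openshift-ovs-networkpolicy

The [Conformance][Area:Networking][Feature:Router] specs keep skipping with "no router installed on the cluster" because these dind clusters come up without a default router; on a 3.x cluster one could be added with oc adm router if those specs were wanted.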
Apr 5 20:13:51.712: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:13:51.715: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Apr 5 20:13:51.726: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 5 20:13:51.732: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 5 20:13:51.732: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 5 20:13:51.733: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Apr 5 20:13:51.733: INFO: Dumping network health container logs from all nodes... Apr 5 20:13:51.735: INFO: e2e test version: v1.9.1+a0ce1bc657 Apr 5 20:13:51.736: INFO: kube-apiserver version: v1.9.1+a0ce1bc657 SSSSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 I0405 20:13:51.736113 6947 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run. [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:13:51.951: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:13:52.014: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 STEP: Creating a webserver tied to a service. STEP: Creating a server pod server in namespace e2e-tests-network-policy-4h2qf Apr 5 20:13:52.181: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-4h2qf Apr 5 20:13:52.188: INFO: Created service svc-server Apr 5 20:13:52.188: INFO: Waiting for server to come up. STEP: Creating a network policy for the server which allows traffic from namespace-b. STEP: Creating client pod client-a that should not be able to connect to svc-server. Apr 5 20:15:42.211: INFO: Waiting for client-a to complete. Apr 5 20:15:42.211: INFO: Waiting up to 5m0s for pod "client-a" in namespace "e2e-tests-network-policy-4h2qf" to be "success or failure" Apr 5 20:15:42.213: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4436ms Apr 5 20:15:44.216: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005094172s Apr 5 20:15:46.218: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007716726s Apr 5 20:15:48.223: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012021437s Apr 5 20:15:50.225: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014309005s Apr 5 20:15:52.227: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016511807s Apr 5 20:15:54.230: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018788008s Apr 5 20:15:56.232: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020945458s Apr 5 20:15:58.234: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023190082s Apr 5 20:16:00.236: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025392311s Apr 5 20:16:02.238: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.027612463s Apr 5 20:16:04.241: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.029810378s Apr 5 20:16:06.243: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.032116856s Apr 5 20:16:08.245: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.034335992s Apr 5 20:16:10.247: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.036665094s Apr 5 20:16:12.250: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.038920414s Apr 5 20:16:14.252: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.041154592s Apr 5 20:16:16.254: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.043392629s Apr 5 20:16:18.257: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.045841563s Apr 5 20:16:20.259: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.048459285s Apr 5 20:16:22.263: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.051838846s Apr 5 20:16:24.265: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.054124149s Apr 5 20:16:26.267: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.0567047s Apr 5 20:16:28.270: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.058834181s Apr 5 20:16:30.272: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.061094789s Apr 5 20:16:32.275: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.064020608s Apr 5 20:16:34.277: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.066425152s Apr 5 20:16:36.280: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 54.068874082s Apr 5 20:16:38.282: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 56.071045455s Apr 5 20:16:40.284: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 58.073202165s Apr 5 20:16:42.286: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.075367525s Apr 5 20:16:44.289: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.077748645s Apr 5 20:16:46.291: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m4.079951964s Apr 5 20:16:48.293: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m6.082199483s Apr 5 20:16:50.295: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.084470678s Apr 5 20:16:52.298: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.086791199s Apr 5 20:16:54.300: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m12.088981493s Apr 5 20:16:56.302: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m14.091382407s Apr 5 20:16:58.305: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m16.093811434s Apr 5 20:17:00.307: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m18.096139189s Apr 5 20:17:02.309: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m20.098273054s Apr 5 20:17:04.311: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.100515551s Apr 5 20:17:06.314: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m24.102885817s Apr 5 20:17:08.316: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m26.10562721s Apr 5 20:17:10.319: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m28.107813519s Apr 5 20:17:12.321: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.110050382s Apr 5 20:17:14.323: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m32.112250013s Apr 5 20:17:16.325: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.114457578s Apr 5 20:17:18.327: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.116646405s Apr 5 20:17:20.330: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 1m38.118811404s Apr 5 20:17:22.332: INFO: Pod "client-a": Phase="Failed", Reason="", readiness=false. Elapsed: 1m40.121060343s STEP: Cleaning up the pod client-a STEP: Creating client pod client-b that should successfully connect to svc-server. Apr 5 20:17:22.346: INFO: Waiting for client-b to complete. Apr 5 20:17:28.350: INFO: Waiting for client-b to complete. Apr 5 20:17:28.350: INFO: Waiting up to 5m0s for pod "client-b" in namespace "e2e-tests-network-policy-b-8bpmt" to be "success or failure" Apr 5 20:17:28.352: INFO: Pod "client-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.466173ms STEP: Saw pod success Apr 5 20:17:28.352: INFO: Pod "client-b" satisfied condition "success or failure" STEP: Cleaning up the pod client-b STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:17:28.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-4h2qf" for this suite. Apr 5 20:17:34.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:17:34.477: INFO: namespace: e2e-tests-network-policy-4h2qf, resource: bindings, ignored listing per whitelist Apr 5 20:17:34.506: INFO: namespace e2e-tests-network-policy-4h2qf deletion completed in 6.12098528s STEP: Destroying namespace "e2e-tests-network-policy-b-8bpmt" for this suite. 
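This is the first spec that actually exercises NetworkPolicy under the new plugin: a server pod and svc-server are created, a policy admitting only traffic from namespace-b is applied, and the suite verifies that client-a cannot connect (the long Pending/Running poll ending in Phase="Failed" above is that expected outcome) while client-b, launched from namespace-b, succeeds. A minimal policy of the same shape, as a hedged sketch; the label keys/values and namespace names are illustrative, not the fixtures the test generates:

cat <<'EOF' | oc create -n server-ns -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace-b
spec:
  podSelector:
    matchLabels:
      app: server            # the policy guards the server pod
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: b            # only pods in namespaces labeled team=b are admitted
EOF
oc label namespace client-ns-b team=b   # mark the client namespace so its pods get through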
Apr 5 20:17:40.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:17:40.593: INFO: namespace: e2e-tests-network-policy-b-8bpmt, resource: bindings, ignored listing per whitelist Apr 5 20:17:40.620: INFO: namespace e2e-tests-network-policy-b-8bpmt deletion completed in 6.114407828s • [SLOW TEST:228.885 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 ------------------------------ SSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:17:40.621: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:17:40.646: INFO: configPath is now "/tmp/extended-test-router-metrics-r6dbc-7l24b-user.kubeconfig" Apr 5 20:17:40.646: INFO: The user is now "extended-test-router-metrics-r6dbc-7l24b-user" Apr 5 20:17:40.646: INFO: Creating project "extended-test-router-metrics-r6dbc-7l24b" Apr 5 20:17:40.705: INFO: Waiting on permissions in project "extended-test-router-metrics-r6dbc-7l24b" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:17:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-r6dbc-7l24b" for this suite. 
Apr 5 20:17:46.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:17:46.821: INFO: namespace: extended-test-router-metrics-r6dbc-7l24b, resource: bindings, ignored listing per whitelist Apr 5 20:17:46.840: INFO: namespace extended-test-router-metrics-r6dbc-7l24b deletion completed in 6.108200073s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.220 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ S ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:17:46.840: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:17:46.863: INFO: configPath is now "/tmp/extended-test-weighted-router-wljsm-bqsgx-user.kubeconfig" Apr 5 20:17:46.863: INFO: The user is now "extended-test-weighted-router-wljsm-bqsgx-user" Apr 5 20:17:46.863: INFO: Creating project "extended-test-weighted-router-wljsm-bqsgx" Apr 5 20:17:46.919: INFO: Waiting on permissions in project "extended-test-weighted-router-wljsm-bqsgx" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:29 Apr 5 20:17:46.927: INFO: Running 'oc new-app --config=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-weighted-router-wljsm-bqsgx -f /tmp/fixture-testdata-dir011284646/test/extended/testdata/weighted-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-weighted-router-wljsm-bqsgx/" for "/tmp/fixture-testdata-dir011284646/test/extended/testdata/weighted-router.yaml" to project extended-test-weighted-router-wljsm-bqsgx * With parameters: * IMAGE=openshift/origin-haproxy-router --> Creating resources ... pod "weighted-router" created rolebinding "system-router" created route "weightedroute" created route "zeroweightroute" created service "weightedendpoints1" created service "weightedendpoints2" created pod "endpoint-1" created pod "endpoint-2" created pod "endpoint-3" created --> Success Access your application via route 'weighted.example.com' Access your application via route 'zeroweight.example.com' Run 'oc status' to view your app. [It] should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 Apr 5 20:17:47.290: INFO: Creating new exec pod STEP: creating a weighted router from a config file "/tmp/fixture-testdata-dir011284646/test/extended/testdata/weighted-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:18:26.308: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-wljsm-bqsgx execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.18' "http://10.128.0.18:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:18:26.612: INFO: stderr: "" STEP: checking that 100 requests go through successfully Apr 5 20:18:26.612: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-wljsm-bqsgx execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.128.0.18" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:18:26.899: INFO: stderr: "" Apr 5 20:18:26.899: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-wljsm-bqsgx execpod -- /bin/sh -c set -e for i in $(seq 1 100); do code=$( curl -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.128.0.18" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -ne 200 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi done ' Apr 5 20:18:27.625: INFO: stderr: "" STEP: checking that there are three weighted backends in the router stats Apr 5 20:18:27.625: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-wljsm-bqsgx execpod -- /bin/sh -c curl -s -u admin:password --header 'Host: weighted.example.com' "http://10.128.0.18:1936/;csv"' Apr 5 20:18:27.916: INFO: stderr: "" STEP: checking that zero weights are also respected by the router Apr 5 20:18:27.917: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-wljsm-bqsgx execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: zeroweight.example.com' "http://10.128.0.18"' Apr 5 20:18:28.214: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:18:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-weighted-router-wljsm-bqsgx" for this suite. 
Apr 5 20:18:46.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:18:46.314: INFO: namespace: extended-test-weighted-router-wljsm-bqsgx, resource: bindings, ignored listing per whitelist Apr 5 20:18:46.336: INFO: namespace extended-test-weighted-router-wljsm-bqsgx deletion completed in 18.113861471s • [SLOW TEST:59.496 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:22 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:38 should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 ------------------------------ SSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:18:46.336: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:18:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:18:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 Apr 5 20:18:46.336: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:18:46.337: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 Apr 5 20:18:46.420: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:18:50.436: INFO: Target pod IP:port is 10.128.0.20:8080 Apr 5 20:18:50.449: INFO: Endpoint e2e-tests-net-services1-b6655/service-kfm8k is not ready yet Apr 5 20:18:55.452: INFO: Target service IP:port is 172.30.204.110:8080 Apr 5 20:18:55.452: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:18:55.452: INFO: Creating new exec pod Apr 5 20:18:59.464: INFO: Waiting up to 10s to wget 172.30.204.110:8080 Apr 5 20:18:59.464: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services1-b6655 execpod-sourceip-nettest-node-2n89qs -- /bin/sh -c wget -T 30 -qO- 172.30.204.110:8080' Apr 5 20:18:59.764: INFO: stderr: "" Apr 5 20:18:59.764: INFO: Cleaning up the exec pod [AfterEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:18:59.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-b6655" for this suite. 
Apr 5 20:19:05.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:19:05.857: INFO: namespace: e2e-tests-net-services1-b6655, resource: bindings, ignored listing per whitelist Apr 5 20:19:05.898: INFO: namespace e2e-tests-net-services1-b6655 deletion completed in 6.107307596s • [SLOW TEST:19.561 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 ------------------------------ SSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:19:05.898: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:05.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:05.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52 Apr 5 20:19:05.898: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:19:05.900: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:19:05.998: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 Apr 5 20:19:06.171: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:19:10.190: INFO: Target pod IP:port is 10.128.0.21:8080 Apr 5 20:19:10.190: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:19:10.190: INFO: Creating new exec pod Apr 5 20:19:16.203: INFO: Waiting up to 10s to wget 10.128.0.21:8080 Apr 5 20:19:16.203: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-l594n execpod-sourceip-nettest-node-2k7585 -- /bin/sh -c wget -T 30 -qO- 10.128.0.21:8080' Apr 5 20:19:16.485: INFO: stderr: "" Apr 5 20:19:16.485: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-xbbkh" for this suite. 
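Unlike the service checks, this isolation spec talks straight to the target pod's IP from an exec pod in a different namespace on a different node, so it measures the SDN's default isolation rather than service routing. By hand, with illustrative names:

POD_IP=$(oc get pod webserver -n ns-one -o jsonpath='{.status.podIP}')
oc exec -n ns-two execpod -- wget -T 30 -qO- "${POD_IP}:8080"
# with redhat/openshift-ovs-subnet or -networkpolicy (no isolation by default) this succeeds;
# under redhat/openshift-ovs-multitenant the same cross-namespace probe would be expected to fail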
Apr 5 20:19:22.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:19:22.548: INFO: namespace: e2e-tests-net-isolation1-xbbkh, resource: bindings, ignored listing per whitelist Apr 5 20:19:22.612: INFO: namespace e2e-tests-net-isolation1-xbbkh deletion completed in 6.110951103s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:22.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-l594n" for this suite. Apr 5 20:19:28.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:19:28.692: INFO: namespace: e2e-tests-net-isolation2-l594n, resource: bindings, ignored listing per whitelist Apr 5 20:19:28.722: INFO: namespace e2e-tests-net-isolation2-l594n deletion completed in 6.108980946s • [SLOW TEST:22.822 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 ------------------------------ SS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:19:28.723: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:19:28.742: INFO: configPath is now "/tmp/extended-test-scoped-router-7lkv6-zxnhp-user.kubeconfig" Apr 5 20:19:28.742: INFO: The user is now "extended-test-scoped-router-7lkv6-zxnhp-user" Apr 5 20:19:28.742: INFO: Creating project "extended-test-scoped-router-7lkv6-zxnhp" Apr 5 20:19:28.776: INFO: Waiting on permissions in project "extended-test-scoped-router-7lkv6-zxnhp" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:19:28.803: INFO: Running 'oc new-app --config=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-7lkv6-zxnhp -f /tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-7lkv6-zxnhp/" for "/tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-7lkv6-zxnhp * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 Apr 5 20:19:29.147: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:19:34.168: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-zxnhp execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.22' "http://10.128.0.22:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:19:34.663: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:19:34.663: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-zxnhp execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-extended-test-scoped-router-7lkv6-zxnhp.myapps.mycompany.com' "http://10.128.0.22/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:19:37.100: INFO: stderr: "" STEP: checking that the stored domain name does not match a route Apr 5 20:19:37.100: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-zxnhp execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://10.128.0.22/Letter"' Apr 5 20:19:37.462: INFO: stderr: "" STEP: checking that route-1-extended-test-scoped-router-7lkv6-zxnhp.myapps.mycompany.com matches a route Apr 5 20:19:37.462: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-zxnhp execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-extended-test-scoped-router-7lkv6-zxnhp.myapps.mycompany.com' "http://10.128.0.22/Letter"' Apr 5 20:19:37.803: INFO: stderr: "" STEP: checking that route-2-extended-test-scoped-router-7lkv6-zxnhp.myapps.mycompany.com matches a route Apr 5 20:19:37.803: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-zxnhp execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-extended-test-scoped-router-7lkv6-zxnhp.myapps.mycompany.com' "http://10.128.0.22/Letter"' Apr 5 20:19:38.158: INFO: stderr: "" STEP: checking that the router reported the correct ingress and override [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:38.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-7lkv6-zxnhp" for this suite. 
Apr 5 20:19:52.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:19:52.248: INFO: namespace: extended-test-scoped-router-7lkv6-zxnhp, resource: bindings, ignored listing per whitelist Apr 5 20:19:52.296: INFO: namespace extended-test-scoped-router-7lkv6-zxnhp deletion completed in 14.124562211s • [SLOW TEST:23.573 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 ------------------------------ SSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:19:52.296: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:19:52.317: INFO: configPath is now "/tmp/extended-test-unprivileged-router-nzfkd-7pnhv-user.kubeconfig" Apr 5 20:19:52.317: INFO: The user is now "extended-test-unprivileged-router-nzfkd-7pnhv-user" Apr 5 20:19:52.317: INFO: Creating project "extended-test-unprivileged-router-nzfkd-7pnhv" Apr 5 20:19:52.374: INFO: Waiting on permissions in project "extended-test-unprivileged-router-nzfkd-7pnhv" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:25 Apr 5 20:19:52.403: INFO: Running 'oc new-app --config=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-unprivileged-router-nzfkd-7pnhv -f /tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml -p=IMAGE=openshift/origin-haproxy-router -p=SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]' warning: --param no longer accepts comma-separated lists of values. "SCOPE=[\"--name=test-unprivileged\", \"--namespace=$(POD_NAMESPACE)\", \"--loglevel=4\", \"--labels=select=first\", \"--update-status=false\"]" will be treated as a single key-value pair. 
--> Deploying template "extended-test-unprivileged-router-nzfkd-7pnhv/" for "/tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml" to project extended-test-unprivileged-router-nzfkd-7pnhv * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:19:52.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-unprivileged-router-nzfkd-7pnhv" for this suite. Apr 5 20:20:20.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:20:20.893: INFO: namespace: extended-test-unprivileged-router-nzfkd-7pnhv, resource: bindings, ignored listing per whitelist Apr 5 20:20:20.927: INFO: namespace extended-test-unprivileged-router-nzfkd-7pnhv deletion completed in 28.120929309s S [SKIPPING] [28.631 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:18 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:37 should run even if it has no access to update status [Suite:openshift/conformance/parallel] [It] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 test temporarily disabled /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:44 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:20:20.927: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:20:20.947: INFO: configPath is now 
"/tmp/extended-test-router-stress-d5bsc-zqzzh-user.kubeconfig" Apr 5 20:20:20.947: INFO: The user is now "extended-test-router-stress-d5bsc-zqzzh-user" Apr 5 20:20:20.947: INFO: Creating project "extended-test-router-stress-d5bsc-zqzzh" Apr 5 20:20:21.002: INFO: Waiting on permissions in project "extended-test-router-stress-d5bsc-zqzzh" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:20:21.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-d5bsc-zqzzh" for this suite. Apr 5 20:20:27.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:20:27.100: INFO: namespace: extended-test-router-stress-d5bsc-zqzzh, resource: bindings, ignored listing per whitelist Apr 5 20:20:27.131: INFO: namespace extended-test-router-stress-d5bsc-zqzzh deletion completed in 6.113843295s S [SKIPPING] in Spec Setup (BeforeEach) [6.204 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:20:27.131: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:20:27.190: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] 
[Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r7xdk STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:20:27.225: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:20:55.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.25:8080/dial?request=hostName&protocol=http&host=10.128.0.24&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r7xdk PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:20:55.275: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:20:55.363: INFO: Waiting for endpoints: map[] Apr 5 20:20:55.365: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.25:8080/dial?request=hostName&protocol=http&host=10.129.0.49&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r7xdk PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:20:55.365: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:20:55.453: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:20:55.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-r7xdk" for this suite. 
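
The Granular Checks probe asks one test pod to dial another and report back the hostName it received. A hand-run sketch of the ExecWithOptions call above, assuming kubectl exec in place of the framework's exec API (IPs, namespace and pod names from this run):

    #!/bin/bash
    # Ask the test-container pod (10.128.0.25) to dial a peer pod over HTTP and echo the
    # hostName it sees; a non-empty answer means pod-to-pod traffic works.
    KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
    DIAL_FROM=10.128.0.25:8080   # test-container pod serving the /dial endpoint
    PEER=10.128.0.24             # netserver pod being dialed

    kubectl --kubeconfig="$KUBECONFIG" exec --namespace=e2e-tests-pod-network-test-r7xdk \
      host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://${DIAL_FROM}/dial?request=hostName&protocol=http&host=${PEER}&port=8080&tries=1'"
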
Apr 5 20:21:17.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:21:17.525: INFO: namespace: e2e-tests-pod-network-test-r7xdk, resource: bindings, ignored listing per whitelist Apr 5 20:21:17.564: INFO: namespace e2e-tests-pod-network-test-r7xdk deletion completed in 22.108672358s • [SLOW TEST:50.433 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:21:17.564: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:21:17.582: INFO: configPath is now "/tmp/extended-test-multicast-nm5rt-fcz7d-user.kubeconfig" Apr 5 20:21:17.582: INFO: The user is now "extended-test-multicast-nm5rt-fcz7d-user" Apr 5 20:21:17.582: INFO: Creating project "extended-test-multicast-nm5rt-fcz7d" Apr 5 20:21:17.649: INFO: Waiting on permissions in project "extended-test-multicast-nm5rt-fcz7d" ... 
STEP: Waiting for a default service account to be provisioned in namespace [It] should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 Apr 5 20:21:17.670: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:21:17.681: INFO: Waiting up to 5m0s for pod multicast-0 status to be running Apr 5 20:21:17.684: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-nm5rt-fcz7d' status to be 'running'(found phase: "Pending", readiness: false) (2.082992ms elapsed) Apr 5 20:21:22.686: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-nm5rt-fcz7d' status to be 'running'(found phase: "Pending", readiness: false) (5.004470421s elapsed) Apr 5 20:21:27.693: INFO: Waiting up to 5m0s for pod multicast-1 status to be running Apr 5 20:21:27.695: INFO: Waiting for pod multicast-1 in namespace 'extended-test-multicast-nm5rt-fcz7d' status to be 'running'(found phase: "Pending", readiness: false) (1.931863ms elapsed) Apr 5 20:21:32.701: INFO: Waiting up to 5m0s for pod multicast-2 status to be running Apr 5 20:21:32.703: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-nm5rt-fcz7d' status to be 'running'(found phase: "Pending", readiness: false) (1.982494ms elapsed) Apr 5 20:21:37.706: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-nm5rt-fcz7d' status to be 'running'(found phase: "Pending", readiness: false) (5.004723331s elapsed) Apr 5 20:21:42.709: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-fcz7d-user.kubeconfig --namespace=extended-test-multicast-nm5rt-fcz7d multicast-2 -- omping -c 1 -T 60 -q -q 10.128.0.26 10.128.0.27 10.129.0.50' Apr 5 20:21:42.709: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-fcz7d-user.kubeconfig --namespace=extended-test-multicast-nm5rt-fcz7d multicast-0 -- omping -c 1 -T 60 -q -q 10.128.0.26 10.128.0.27 10.129.0.50' Apr 5 20:21:42.709: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-fcz7d-user.kubeconfig --namespace=extended-test-multicast-nm5rt-fcz7d multicast-1 -- omping -c 1 -T 60 -q -q 10.128.0.26 10.128.0.27 10.129.0.50' [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:21:50.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-multicast-nm5rt-fcz7d" for this suite. 
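
The multicast spec launches omping from all three pods at roughly the same time, each instance given every pod IP. A sketch of the same three invocations run by hand (kubeconfig, namespace, pod names and IPs from this run; the parallel-and-wait wrapper is illustrative):

    #!/bin/bash
    # Each omping instance exchanges one multicast probe (-c 1) within a 60s window (-T 60);
    # -q -q keeps the output to the summary lines the test parses.
    CFG=/tmp/extended-test-multicast-nm5rt-fcz7d-user.kubeconfig
    NS=extended-test-multicast-nm5rt-fcz7d
    POD_IPS="10.128.0.26 10.128.0.27 10.129.0.50"   # multicast-0/1/2 pod IPs in this run

    for pod in multicast-0 multicast-1 multicast-2; do
      oc exec --config="$CFG" --namespace="$NS" "$pod" -- omping -c 1 -T 60 -q -q $POD_IPS &
    done
    wait   # the spec expects every instance to see replies from its peers
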
Apr 5 20:21:56.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:21:56.230: INFO: namespace: extended-test-multicast-nm5rt-fcz7d, resource: bindings, ignored listing per whitelist Apr 5 20:21:56.260: INFO: namespace extended-test-multicast-nm5rt-fcz7d deletion completed in 6.114023781s • [SLOW TEST:38.696 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:21:56.260: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:21:56.328: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wndfw STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:21:56.376: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:22:22.427: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.128.0.28 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wndfw PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:22:22.427: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:22:23.506: INFO: Found all expected endpoints: [netserver-0] Apr 5 20:22:23.509: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.129.0.51 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wndfw PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:22:23.509: INFO: >>> kubeConfig: 
/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:22:24.586: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:22:24.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-wndfw" for this suite. Apr 5 20:22:46.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:22:46.664: INFO: namespace: e2e-tests-pod-network-test-wndfw, resource: bindings, ignored listing per whitelist Apr 5 20:22:46.703: INFO: namespace e2e-tests-pod-network-test-wndfw deletion completed in 22.114347851s • [SLOW TEST:50.443 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:22:46.704: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:22:46.775: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 Apr 5 20:22:46.828: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:22:50.847: INFO: Target pod IP:port is 10.128.0.30:8080 Apr 5 20:22:50.862: INFO: Endpoint e2e-tests-net-services1-dg7h2/service-jmfq6 is not ready yet Apr 5 20:22:55.865: INFO: Target service IP:port is 172.30.116.222:8080 Apr 5 20:22:55.865: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:22:55.865: INFO: Creating new exec pod Apr 5 20:23:01.877: INFO: Waiting up to 10s to wget 172.30.116.222:8080 Apr 5 20:23:01.878: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-8tggt execpod-sourceip-nettest-node-22w7g6 -- /bin/sh -c wget -T 30 -qO- 172.30.116.222:8080' Apr 5 20:23:02.164: INFO: stderr: "" Apr 5 20:23:02.164: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:02.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-dg7h2" for this suite. Apr 5 20:23:08.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:23:08.248: INFO: namespace: e2e-tests-net-services1-dg7h2, resource: bindings, ignored listing per whitelist Apr 5 20:23:08.301: INFO: namespace e2e-tests-net-services1-dg7h2 deletion completed in 6.110497708s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:08.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-8tggt" for this suite. 
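
The service check follows the same wget-from-exec-pod pattern, but targets the service ClusterIP (172.30.116.222:8080 here) instead of a pod IP. A sketch, with an illustrative ClusterIP lookup added since the test reads the IP from the API object rather than the CLI:

    #!/bin/bash
    # Probe a service in one namespace from an exec pod in another namespace on a different node.
    KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig

    # Illustrative lookup of the ClusterIP (service-jmfq6 is the service created in this run).
    SVC_IP=$(kubectl --kubeconfig="$KUBECONFIG" -n e2e-tests-net-services1-dg7h2 \
               get svc service-jmfq6 -o jsonpath='{.spec.clusterIP}')

    kubectl --kubeconfig="$KUBECONFIG" exec --namespace=e2e-tests-net-services2-8tggt \
      execpod-sourceip-nettest-node-22w7g6 -- /bin/sh -c "wget -T 30 -qO- ${SVC_IP}:8080"
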
Apr 5 20:23:14.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:23:14.397: INFO: namespace: e2e-tests-net-services2-8tggt, resource: bindings, ignored listing per whitelist Apr 5 20:23:14.410: INFO: namespace e2e-tests-net-services2-8tggt deletion completed in 6.107670223s • [SLOW TEST:27.707 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 ------------------------------ SSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:23:14.410: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:23:14.429: INFO: configPath is now "/tmp/extended-test-router-headers-tb56n-2xz77-user.kubeconfig" Apr 5 20:23:14.429: INFO: The user is now "extended-test-router-headers-tb56n-2xz77-user" Apr 5 20:23:14.429: INFO: Creating project "extended-test-router-headers-tb56n-2xz77" Apr 5 20:23:14.481: INFO: Waiting on permissions in project "extended-test-router-headers-tb56n-2xz77" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:30 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:14.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-headers-tb56n-2xz77" for this suite. 
Apr 5 20:23:20.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:23:20.654: INFO: namespace: extended-test-router-headers-tb56n-2xz77, resource: bindings, ignored listing per whitelist Apr 5 20:23:20.679: INFO: namespace extended-test-router-headers-tb56n-2xz77 deletion completed in 6.108255s S [SKIPPING] in Spec Setup (BeforeEach) [6.268 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:41 should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:42 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:33 ------------------------------ SSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:23:20.679: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 STEP: Creating a simple server. STEP: Creating a server pod server in namespace e2e-tests-network-policy-2qwgl Apr 5 20:23:20.784: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-2qwgl Apr 5 20:23:20.790: INFO: Created service svc-server Apr 5 20:23:20.790: INFO: Waiting for Server to come up. STEP: Testing pods can connect to both ports when no policy is present. STEP: Creating client pod test-a that should successfully connect to svc-server. Apr 5 20:23:24.805: INFO: Waiting for test-a to complete. Apr 5 20:23:28.810: INFO: Waiting for test-a to complete. Apr 5 20:23:28.810: INFO: Waiting up to 5m0s for pod "test-a" in namespace "e2e-tests-network-policy-2qwgl" to be "success or failure" Apr 5 20:23:28.812: INFO: Pod "test-a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1.679004ms STEP: Saw pod success Apr 5 20:23:28.812: INFO: Pod "test-a" satisfied condition "success or failure" STEP: Cleaning up the pod test-a STEP: Creating client pod test-b that should successfully connect to svc-server. Apr 5 20:23:28.827: INFO: Waiting for test-b to complete. Apr 5 20:23:32.831: INFO: Waiting for test-b to complete. Apr 5 20:23:32.831: INFO: Waiting up to 5m0s for pod "test-b" in namespace "e2e-tests-network-policy-2qwgl" to be "success or failure" Apr 5 20:23:32.833: INFO: Pod "test-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.464213ms STEP: Saw pod success Apr 5 20:23:32.833: INFO: Pod "test-b" satisfied condition "success or failure" STEP: Cleaning up the pod test-b STEP: Creating a network policy which allows all traffic. STEP: Testing pods can connect to both ports when an 'allow-all' policy is present. STEP: Creating client pod client-a that should successfully connect to svc-server. Apr 5 20:23:32.853: INFO: Waiting for client-a to complete. Apr 5 20:23:36.862: INFO: Waiting for client-a to complete. Apr 5 20:23:36.862: INFO: Waiting up to 5m0s for pod "client-a" in namespace "e2e-tests-network-policy-2qwgl" to be "success or failure" Apr 5 20:23:36.864: INFO: Pod "client-a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.504668ms STEP: Saw pod success Apr 5 20:23:36.864: INFO: Pod "client-a" satisfied condition "success or failure" STEP: Cleaning up the pod client-a STEP: Creating client pod client-b that should successfully connect to svc-server. Apr 5 20:23:36.875: INFO: Waiting for client-b to complete. Apr 5 20:23:40.880: INFO: Waiting for client-b to complete. Apr 5 20:23:40.880: INFO: Waiting up to 5m0s for pod "client-b" in namespace "e2e-tests-network-policy-2qwgl" to be "success or failure" Apr 5 20:23:40.881: INFO: Pod "client-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.539937ms STEP: Saw pod success Apr 5 20:23:40.881: INFO: Pod "client-b" satisfied condition "success or failure" STEP: Cleaning up the pod client-b STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:40.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-2qwgl" for this suite. 
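
The log does not print the allow-all manifest the test creates; a representative policy of that shape, applied here with the admin kubeconfig used throughout this run, would look like:

    #!/bin/bash
    # Representative 'allow-all' NetworkPolicy: an empty podSelector selects every pod in
    # the namespace, and a single empty ingress rule admits traffic from any source.
    kubectl --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig \
      apply -n e2e-tests-network-policy-2qwgl -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all
    spec:
      podSelector: {}
      ingress:
      - {}
    EOF
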
Apr 5 20:23:46.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:23:46.995: INFO: namespace: e2e-tests-network-policy-2qwgl, resource: bindings, ignored listing per whitelist Apr 5 20:23:47.023: INFO: namespace e2e-tests-network-policy-2qwgl deletion completed in 6.111359239s • [SLOW TEST:26.344 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:23:47.024: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:47.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:23:47.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 Apr 5 20:23:47.024: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:23:47.025: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 STEP: Create a simple server. STEP: Creating a server pod server in namespace e2e-tests-network-policy-bwkzq Apr 5 20:23:48.098: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-bwkzq Apr 5 20:23:48.108: INFO: Created service svc-server Apr 5 20:23:48.108: INFO: Waiting for Server to come up. STEP: Creating client which will be able to contact the server since no policies are present. STEP: Creating client pod client-can-connect that should successfully connect to svc-server. Apr 5 20:23:52.121: INFO: Waiting for client-can-connect to complete. Apr 5 20:23:56.131: INFO: Waiting for client-can-connect to complete. Apr 5 20:23:56.131: INFO: Waiting up to 5m0s for pod "client-can-connect" in namespace "e2e-tests-network-policy-bwkzq" to be "success or failure" Apr 5 20:23:56.133: INFO: Pod "client-can-connect": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.568303ms STEP: Saw pod success Apr 5 20:23:56.133: INFO: Pod "client-can-connect" satisfied condition "success or failure" STEP: Cleaning up the pod client-can-connect STEP: Creating a network policy denying all traffic. STEP: Creating client pod client-cannot-connect that should not be able to connect to svc-server. Apr 5 20:23:56.150: INFO: Waiting for client-cannot-connect to complete. Apr 5 20:23:56.150: INFO: Waiting up to 5m0s for pod "client-cannot-connect" in namespace "e2e-tests-network-policy-bwkzq" to be "success or failure" Apr 5 20:23:56.153: INFO: Pod "client-cannot-connect": Phase="Pending", Reason="", readiness=false. Elapsed: 3.430592ms Apr 5 20:23:58.155: INFO: Pod "client-cannot-connect": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005660029s Apr 5 20:24:00.158: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.007864525s Apr 5 20:24:02.160: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 6.010080717s Apr 5 20:24:04.162: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 8.012346065s Apr 5 20:24:06.165: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 10.014725394s Apr 5 20:24:08.167: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 12.01694595s Apr 5 20:24:10.169: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 14.019248625s Apr 5 20:24:12.171: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 16.021549014s Apr 5 20:24:14.174: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 18.02382204s Apr 5 20:24:16.176: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 20.026108443s Apr 5 20:24:18.178: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 22.02838684s Apr 5 20:24:20.180: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 24.030597346s Apr 5 20:24:22.183: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 26.03283476s Apr 5 20:24:24.185: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 28.035095651s Apr 5 20:24:26.187: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 30.037283683s Apr 5 20:24:28.189: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 32.039453011s Apr 5 20:24:30.191: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 34.04165727s Apr 5 20:24:32.194: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 36.04394707s Apr 5 20:24:34.196: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 38.046136006s Apr 5 20:24:36.198: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 40.048442668s Apr 5 20:24:38.201: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 42.05077766s Apr 5 20:24:40.203: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 44.05309337s Apr 5 20:24:42.205: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 46.055585589s Apr 5 20:24:44.208: INFO: Pod "client-cannot-connect": Phase="Running", Reason="", readiness=true. Elapsed: 48.057813404s Apr 5 20:24:46.210: INFO: Pod "client-cannot-connect": Phase="Failed", Reason="", readiness=false. Elapsed: 50.060103804s STEP: Cleaning up the pod client-cannot-connect STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:24:46.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-bwkzq" for this suite. 
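
As above, the manifest itself is not shown in the log; a representative default-deny ingress policy of the kind this spec creates, which is what leaves client-cannot-connect running until its connection attempts fail:

    #!/bin/bash
    # Representative 'default-deny' ingress policy: selecting all pods while declaring no
    # ingress rules blocks all inbound traffic to the namespace.
    kubectl --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig \
      apply -n e2e-tests-network-policy-bwkzq -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    EOF
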
Apr 5 20:24:52.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:24:52.333: INFO: namespace: e2e-tests-network-policy-bwkzq, resource: bindings, ignored listing per whitelist Apr 5 20:24:52.359: INFO: namespace e2e-tests-network-policy-bwkzq deletion completed in 6.113214944s • [SLOW TEST:65.334 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:24:52.359: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 Apr 5 20:24:52.465: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:24:56.476: INFO: Target pod IP:port is 10.128.0.35:8080 Apr 5 20:24:56.490: INFO: Endpoint e2e-tests-net-services1-zvt4g/service-s6pzh is not ready yet Apr 5 20:25:01.494: INFO: Target service IP:port is 172.30.116.32:8080 Apr 5 20:25:01.494: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:25:01.494: INFO: Creating new exec pod Apr 5 20:25:05.504: INFO: Waiting up to 10s to wget 172.30.116.32:8080 Apr 5 20:25:05.504: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services1-zvt4g execpod-sourceip-nettest-node-1s92ww -- /bin/sh -c wget -T 30 -qO- 172.30.116.32:8080' Apr 5 20:25:05.795: INFO: stderr: "" Apr 5 20:25:05.795: INFO: Cleaning up the exec pod [AfterEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:25:05.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-zvt4g" for this suite. 
Apr 5 20:25:13.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:25:13.905: INFO: namespace: e2e-tests-net-services1-zvt4g, resource: bindings, ignored listing per whitelist Apr 5 20:25:13.933: INFO: namespace e2e-tests-net-services1-zvt4g deletion completed in 8.112443299s • [SLOW TEST:21.575 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:25:13.934: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:25:13.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:25:13.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 Apr 5 20:25:13.934: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:25:13.935: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:25:13.997: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kjr9b STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:25:14.031: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:25:40.076: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.129.0.57:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kjr9b PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:25:40.076: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:25:40.165: INFO: Found all expected endpoints: [netserver-0] Apr 5 20:25:40.167: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.128.0.37:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kjr9b PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:25:40.167: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:25:40.250: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:25:40.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-kjr9b" for this suite. 
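The `ExecWithOptions` entries above are the framework driving curl from the host-network test pod against each netserver pod's /hostName endpoint. A hand-run equivalent, with the pod/container names and IP from this run used as placeholders:

```bash
# Node-to-pod HTTP probe equivalent to the ExecWithOptions calls logged above.
# host-test-container-pod runs on the host network; the curl targets a
# netserver pod IP and must return that pod's hostname within the timeout.
KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
NAMESPACE=e2e-tests-pod-network-test-kjr9b
TARGET_IP=10.128.0.37        # netserver pod IP reported in this run

kubectl --kubeconfig="$KUBECONFIG" exec --namespace="$NAMESPACE" \
  host-test-container-pod -c hostexec -- \
  /bin/sh -c "timeout -t 15 curl -g -q -s --connect-timeout 1 http://$TARGET_IP:8080/hostName | grep -v '^\s*\$'"
```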
Apr 5 20:26:02.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:26:02.294: INFO: namespace: e2e-tests-pod-network-test-kjr9b, resource: bindings, ignored listing per whitelist Apr 5 20:26:02.363: INFO: namespace e2e-tests-pod-network-test-kjr9b deletion completed in 22.110500829s • [SLOW TEST:48.428 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:26:02.364: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:26:02.403: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 5 20:26:02.464: INFO: Waiting up to 5m0s for pod "pod-8f32e973-390f-11e8-b573-0e3b9f19c974" in namespace "e2e-tests-emptydir-ksg2f" to be "success or failure" Apr 5 20:26:02.465: INFO: Pod "pod-8f32e973-390f-11e8-b573-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 1.277327ms Apr 5 20:26:04.468: INFO: Pod "pod-8f32e973-390f-11e8-b573-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003701743s Apr 5 20:26:06.470: INFO: Pod "pod-8f32e973-390f-11e8-b573-0e3b9f19c974": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005937852s STEP: Saw pod success Apr 5 20:26:06.470: INFO: Pod "pod-8f32e973-390f-11e8-b573-0e3b9f19c974" satisfied condition "success or failure" Apr 5 20:26:06.472: INFO: Trying to get logs from node nettest-node-1 pod pod-8f32e973-390f-11e8-b573-0e3b9f19c974 container test-container: <nil> STEP: delete the pod Apr 5 20:26:06.493: INFO: Waiting for pod pod-8f32e973-390f-11e8-b573-0e3b9f19c974 to disappear Apr 5 20:26:06.495: INFO: Pod pod-8f32e973-390f-11e8-b573-0e3b9f19c974 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:06.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ksg2f" for this suite. Apr 5 20:26:12.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:26:12.542: INFO: namespace: e2e-tests-emptydir-ksg2f, resource: bindings, ignored listing per whitelist Apr 5 20:26:12.604: INFO: namespace e2e-tests-emptydir-ksg2f deletion completed in 6.107713464s • [SLOW TEST:10.241 seconds] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:26:12.605: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:12.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:12.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 Apr 5 20:26:12.605: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ S ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:26:12.606: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:26:12.624: INFO: configPath is now "/tmp/extended-test-router-stress-psp7x-hzt76-user.kubeconfig" Apr 5 20:26:12.624: INFO: The user is now "extended-test-router-stress-psp7x-hzt76-user" Apr 5 20:26:12.624: INFO: Creating project "extended-test-router-stress-psp7x-hzt76" Apr 5 20:26:12.673: INFO: Waiting on permissions in project "extended-test-router-stress-psp7x-hzt76" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:12.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-psp7x-hzt76" for this suite. Apr 5 20:26:18.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:26:18.793: INFO: namespace: extended-test-router-stress-psp7x-hzt76, resource: bindings, ignored listing per whitelist Apr 5 20:26:18.825: INFO: namespace extended-test-router-stress-psp7x-hzt76 deletion completed in 6.115149254s S [SKIPPING] in Spec Setup (BeforeEach) [6.219 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ SSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:26:18.826: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:26:18.844: INFO: configPath is now "/tmp/extended-test-router-metrics-r6dbc-dwhkt-user.kubeconfig" Apr 5 20:26:18.844: INFO: The user is now "extended-test-router-metrics-r6dbc-dwhkt-user" Apr 5 20:26:18.844: INFO: Creating project "extended-test-router-metrics-r6dbc-dwhkt" Apr 5 20:26:18.907: INFO: Waiting on permissions in project "extended-test-router-metrics-r6dbc-dwhkt" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:18.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-r6dbc-dwhkt" for this suite. Apr 5 20:26:24.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:26:24.990: INFO: namespace: extended-test-router-metrics-r6dbc-dwhkt, resource: bindings, ignored listing per whitelist Apr 5 20:26:25.031: INFO: namespace extended-test-router-metrics-r6dbc-dwhkt deletion completed in 6.111499362s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.206 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:26:25.032: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:25.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:25.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 Apr 5 20:26:25.032: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:26:25.033: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177 STEP: Creating a simple server. STEP: Creating a server pod server in namespace e2e-tests-network-policy-g7vsd Apr 5 20:26:25.105: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-g7vsd Apr 5 20:26:25.114: INFO: Created service svc-server Apr 5 20:26:25.114: INFO: Waiting for Server to come up. STEP: Testing pods can connect to both ports when no policy is present. 
STEP: Creating client pod test-a that should successfully connect to svc-server. Apr 5 20:26:29.131: INFO: Waiting for test-a to complete. Apr 5 20:26:33.135: INFO: Waiting for test-a to complete. Apr 5 20:26:33.135: INFO: Waiting up to 5m0s for pod "test-a" in namespace "e2e-tests-network-policy-g7vsd" to be "success or failure" Apr 5 20:26:33.136: INFO: Pod "test-a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.464973ms STEP: Saw pod success Apr 5 20:26:33.136: INFO: Pod "test-a" satisfied condition "success or failure" STEP: Cleaning up the pod test-a STEP: Creating client pod test-b that should successfully connect to svc-server. Apr 5 20:26:33.148: INFO: Waiting for test-b to complete. Apr 5 20:26:37.153: INFO: Waiting for test-b to complete. Apr 5 20:26:37.153: INFO: Waiting up to 5m0s for pod "test-b" in namespace "e2e-tests-network-policy-g7vsd" to be "success or failure" Apr 5 20:26:37.155: INFO: Pod "test-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.593265ms STEP: Saw pod success Apr 5 20:26:37.155: INFO: Pod "test-b" satisfied condition "success or failure" STEP: Cleaning up the pod test-b STEP: Creating a network policy for the Service which allows traffic only to one port. STEP: Creating a network policy for the Service which allows traffic only to another port. STEP: Testing pods can connect to both ports when both policies are present. STEP: Creating client pod client-a that should successfully connect to svc-server. Apr 5 20:26:37.175: INFO: Waiting for client-a to complete. Apr 5 20:26:41.179: INFO: Waiting for client-a to complete. Apr 5 20:26:41.179: INFO: Waiting up to 5m0s for pod "client-a" in namespace "e2e-tests-network-policy-g7vsd" to be "success or failure" Apr 5 20:26:41.181: INFO: Pod "client-a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.514865ms STEP: Saw pod success Apr 5 20:26:41.181: INFO: Pod "client-a" satisfied condition "success or failure" STEP: Cleaning up the pod client-a STEP: Creating client pod client-b that should successfully connect to svc-server. Apr 5 20:26:41.191: INFO: Waiting for client-b to complete. Apr 5 20:26:47.195: INFO: Waiting for client-b to complete. Apr 5 20:26:47.195: INFO: Waiting up to 5m0s for pod "client-b" in namespace "e2e-tests-network-policy-g7vsd" to be "success or failure" Apr 5 20:26:47.197: INFO: Pod "client-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.45676ms STEP: Saw pod success Apr 5 20:26:47.197: INFO: Pod "client-b" satisfied condition "success or failure" STEP: Cleaning up the pod client-b STEP: Cleaning up the policy. STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:26:47.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-g7vsd" for this suite. 
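For readers unfamiliar with the "stacked policies" shape being exercised here: the two STEP lines above create one policy per port, and because NetworkPolicy ingress rules are additive the server stays reachable on both ports. Below is a hypothetical pair of manifests with the same structure; the pod label and the 80/81 ports are assumptions made for the sketch, not values read from the test's own objects:

```bash
# Hypothetical stacked-policy pair: each NetworkPolicy opens a single port on
# the pods selected by the (assumed) pod-name=server label. Applying both
# yields the union of their ingress rules.
KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
NAMESPACE=e2e-tests-network-policy-g7vsd    # placeholder namespace

oc create --config="$KUBECONFIG" -n "$NAMESPACE" -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
spec:
  podSelector:
    matchLabels:
      pod-name: server
  ingress:
  - ports:
    - port: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-81
spec:
  podSelector:
    matchLabels:
      pod-name: server
  ingress:
  - ports:
    - port: 81
      protocol: TCP
EOF
```

The client-a and client-b probes above connecting successfully to both ports is the expected outcome: neither policy blocks what the other allows.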
Apr 5 20:26:53.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:26:53.332: INFO: namespace: e2e-tests-network-policy-g7vsd, resource: bindings, ignored listing per whitelist Apr 5 20:26:53.371: INFO: namespace e2e-tests-network-policy-g7vsd deletion completed in 6.121376957s • [SLOW TEST:28.338 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177 ------------------------------ SSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:26:53.371: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86 STEP: Creating a simple server. STEP: Creating a server pod server in namespace e2e-tests-network-policy-zx42m Apr 5 20:26:53.446: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-zx42m Apr 5 20:26:53.456: INFO: Created service svc-server Apr 5 20:26:53.456: INFO: Waiting for Server to come up. STEP: Creating a network policy for the server which allows traffic from the pod 'client-a'. STEP: Creating client-a which should be able to contact the server. STEP: Creating client pod client-a that should successfully connect to svc-server. Apr 5 20:26:57.472: INFO: Waiting for client-a to complete. Apr 5 20:27:03.478: INFO: Waiting for client-a to complete. Apr 5 20:27:03.478: INFO: Waiting up to 5m0s for pod "client-a" in namespace "e2e-tests-network-policy-zx42m" to be "success or failure" Apr 5 20:27:03.479: INFO: Pod "client-a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1.345707ms STEP: Saw pod success Apr 5 20:27:03.479: INFO: Pod "client-a" satisfied condition "success or failure" STEP: Cleaning up the pod client-a STEP: Creating client pod client-b that should not be able to connect to svc-server. Apr 5 20:27:03.491: INFO: Waiting for client-b to complete. Apr 5 20:27:03.491: INFO: Waiting up to 5m0s for pod "client-b" in namespace "e2e-tests-network-policy-zx42m" to be "success or failure" Apr 5 20:27:03.494: INFO: Pod "client-b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933336ms Apr 5 20:27:05.496: INFO: Pod "client-b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005185403s Apr 5 20:27:07.498: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 4.007429286s Apr 5 20:27:09.500: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 6.009668897s Apr 5 20:27:11.503: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 8.011943175s Apr 5 20:27:13.505: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 10.014177118s Apr 5 20:27:15.507: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 12.016253968s Apr 5 20:27:17.509: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 14.018434513s Apr 5 20:27:19.511: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 16.020671529s Apr 5 20:27:21.514: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 18.022906599s Apr 5 20:27:23.516: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 20.025078396s Apr 5 20:27:25.518: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 22.027367124s Apr 5 20:27:27.520: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 24.029559956s Apr 5 20:27:29.523: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 26.031935598s Apr 5 20:27:31.525: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 28.034179494s Apr 5 20:27:33.527: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 30.0365706s Apr 5 20:27:35.530: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 32.038853691s Apr 5 20:27:37.532: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 34.041139685s Apr 5 20:27:39.534: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 36.043317882s Apr 5 20:27:41.536: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 38.045475476s Apr 5 20:27:43.538: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 40.047680356s Apr 5 20:27:45.541: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 42.049992754s Apr 5 20:27:47.543: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 44.052279505s Apr 5 20:27:49.545: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 46.054501678s Apr 5 20:27:51.548: INFO: Pod "client-b": Phase="Running", Reason="", readiness=true. Elapsed: 48.056857238s Apr 5 20:27:53.550: INFO: Pod "client-b": Phase="Failed", Reason="", readiness=false. Elapsed: 50.059097315s STEP: Cleaning up the pod client-b STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. 
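The policy enforced in this test admits only the client-a pod, which is why client-b above stays in Running for roughly 50 seconds and then ends up Failed once its connection attempt times out. A hypothetical manifest with the same podSelector shape (the label names are assumptions for the illustration):

```bash
# Hypothetical PodSelector-based ingress rule: only pods labelled
# pod-name=client-a may reach the (assumed) pod-name=server pods; everything
# else, including client-b, is dropped by the plugin.
KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
NAMESPACE=e2e-tests-network-policy-zx42m    # placeholder namespace

oc create --config="$KUBECONFIG" -n "$NAMESPACE" -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-pod-selector
spec:
  podSelector:
    matchLabels:
      pod-name: server
  ingress:
  - from:
    - podSelector:
        matchLabels:
          pod-name: client-a
EOF
```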
[AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:27:53.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-zx42m" for this suite. Apr 5 20:27:59.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:27:59.692: INFO: namespace: e2e-tests-network-policy-zx42m, resource: bindings, ignored listing per whitelist Apr 5 20:27:59.700: INFO: namespace e2e-tests-network-policy-zx42m deletion completed in 6.112327407s • [SLOW TEST:66.329 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:27:59.700: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:27:59.749: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-p9mvd STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:27:59.792: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:28:19.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.129.0.64:8080/dial?request=hostName&protocol=udp&host=10.129.0.63&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-p9mvd PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:28:19.850: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:28:19.934: INFO: Waiting for endpoints: map[] Apr 5 20:28:19.936: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.129.0.64:8080/dial?request=hostName&protocol=udp&host=10.128.0.44&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-p9mvd PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:28:19.936: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig Apr 5 20:28:20.020: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:20.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-p9mvd" for this suite. Apr 5 20:28:42.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:28:42.079: INFO: namespace: e2e-tests-pod-network-test-p9mvd, resource: bindings, ignored listing per whitelist Apr 5 20:28:42.130: INFO: namespace e2e-tests-pod-network-test-p9mvd deletion completed in 22.107442535s • [SLOW TEST:42.430 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:28:42.130: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:28:42.149: INFO: configPath is now "/tmp/extended-test-router-reencrypt-5sshk-jm6wf-user.kubeconfig" Apr 5 20:28:42.149: INFO: The user is now "extended-test-router-reencrypt-5sshk-jm6wf-user" Apr 5 20:28:42.149: INFO: Creating project "extended-test-router-reencrypt-5sshk-jm6wf" Apr 5 20:28:42.193: INFO: Waiting on permissions in project "extended-test-router-reencrypt-5sshk-jm6wf" ... 
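The UDP leg of the Granular Checks above works through the netserver's HTTP /dial endpoint rather than a raw UDP client: one test pod is asked over HTTP (port 8080) to send a UDP hostName request to a peer pod on port 8081. A hand-run equivalent of the ExecWithOptions call, with this run's IPs used as placeholders:

```bash
# Intra-pod UDP probe equivalent to the ExecWithOptions call logged above.
KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
NAMESPACE=e2e-tests-pod-network-test-p9mvd
DIALER_IP=10.129.0.64        # pod exposing the /dial HTTP endpoint
TARGET_IP=10.128.0.44        # pod probed over UDP

kubectl --kubeconfig="$KUBECONFIG" exec --namespace="$NAMESPACE" \
  host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://$DIALER_IP:8080/dial?request=hostName&protocol=udp&host=$TARGET_IP&port=8081&tries=1'"
```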
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:41 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:29 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-reencrypt-5sshk-jm6wf" for this suite. Apr 5 20:28:48.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:28:48.278: INFO: namespace: extended-test-router-reencrypt-5sshk-jm6wf, resource: bindings, ignored listing per whitelist Apr 5 20:28:48.331: INFO: namespace extended-test-router-reencrypt-5sshk-jm6wf deletion completed in 6.113909056s S [SKIPPING] in Spec Setup (BeforeEach) [6.201 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:18 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:52 should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:44 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:28:48.332: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:48.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:48.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 Apr 5 20:28:48.332: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:28:48.333: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:28:48.351: INFO: configPath is now "/tmp/extended-test-router-stress-d5bsc-xhbj8-user.kubeconfig" Apr 5 20:28:48.351: INFO: The user is now "extended-test-router-stress-d5bsc-xhbj8-user" Apr 5 20:28:48.351: INFO: Creating project "extended-test-router-stress-d5bsc-xhbj8" Apr 5 20:28:48.398: INFO: Waiting on permissions in project "extended-test-router-stress-d5bsc-xhbj8" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:48.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-d5bsc-xhbj8" for this suite. Apr 5 20:28:54.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:28:54.504: INFO: namespace: extended-test-router-stress-d5bsc-xhbj8, resource: bindings, ignored listing per whitelist Apr 5 20:28:54.558: INFO: namespace extended-test-router-stress-d5bsc-xhbj8 deletion completed in 6.116098613s S [SKIPPING] in Spec Setup (BeforeEach) [6.224 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ SSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:28:54.558: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:54.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:28:54.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53 Apr 5 20:28:54.558: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:28:54.560: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:28:54.578: INFO: configPath is now "/tmp/extended-test-multicast-nm5rt-qcwb2-user.kubeconfig" Apr 5 20:28:54.578: INFO: The user is now "extended-test-multicast-nm5rt-qcwb2-user" Apr 5 20:28:54.578: INFO: Creating project "extended-test-multicast-nm5rt-qcwb2" Apr 5 20:28:54.722: INFO: Waiting on permissions in project "extended-test-multicast-nm5rt-qcwb2" ... 
STEP: Waiting for a default service account to be provisioned in namespace [It] should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 Apr 5 20:28:54.744: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:28:54.752: INFO: Waiting up to 5m0s for pod multicast-0 status to be running Apr 5 20:28:54.755: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-nm5rt-qcwb2' status to be 'running'(found phase: "Pending", readiness: false) (2.476237ms elapsed) Apr 5 20:28:59.762: INFO: Waiting up to 5m0s for pod multicast-1 status to be running Apr 5 20:28:59.763: INFO: Waiting for pod multicast-1 in namespace 'extended-test-multicast-nm5rt-qcwb2' status to be 'running'(found phase: "Pending", readiness: false) (1.718821ms elapsed) Apr 5 20:29:04.770: INFO: Waiting up to 5m0s for pod multicast-2 status to be running Apr 5 20:29:04.772: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-nm5rt-qcwb2' status to be 'running'(found phase: "Pending", readiness: false) (2.103194ms elapsed) Apr 5 20:29:09.774: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-qcwb2-user.kubeconfig --namespace=extended-test-multicast-nm5rt-qcwb2 multicast-2 -- omping -c 1 -T 60 -q -q 10.128.0.45 10.128.0.46 10.129.0.65' Apr 5 20:29:09.774: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-qcwb2-user.kubeconfig --namespace=extended-test-multicast-nm5rt-qcwb2 multicast-0 -- omping -c 1 -T 60 -q -q 10.128.0.45 10.128.0.46 10.129.0.65' Apr 5 20:29:09.774: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-nm5rt-qcwb2-user.kubeconfig --namespace=extended-test-multicast-nm5rt-qcwb2 multicast-1 -- omping -c 1 -T 60 -q -q 10.128.0.45 10.128.0.46 10.129.0.65' [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:29:17.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-multicast-nm5rt-qcwb2" for this suite. 
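The three `oc exec ... omping` invocations above run concurrently, one per multicast pod, and the test expects the multicast half of omping to fail because multicast is not enabled for this namespace. A condensed sketch of the same check, reusing this run's pod IPs and user kubeconfig as placeholders:

```bash
# Run omping from each test pod in parallel, exactly as the three commands
# logged above do. omping reports unicast and multicast loss separately, so a
# blocked multicast path shows up as multicast loss while unicast still works.
USERCONFIG=/tmp/extended-test-multicast-nm5rt-qcwb2-user.kubeconfig
NAMESPACE=extended-test-multicast-nm5rt-qcwb2
IPS="10.128.0.45 10.128.0.46 10.129.0.65"

for pod in multicast-0 multicast-1 multicast-2; do
  oc exec --config="$USERCONFIG" --namespace="$NAMESPACE" "$pod" -- \
    omping -c 1 -T 60 -q -q $IPS &
done
wait
```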
Apr 5 20:29:23.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:29:23.291: INFO: namespace: extended-test-multicast-nm5rt-qcwb2, resource: bindings, ignored listing per whitelist Apr 5 20:29:23.323: INFO: namespace extended-test-multicast-nm5rt-qcwb2 deletion completed in 6.11225184s • [SLOW TEST:28.763 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 ------------------------------ SSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:29:23.323: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132 STEP: Creating a simple server. STEP: Creating a server pod server in namespace e2e-tests-network-policy-qddtn Apr 5 20:29:23.396: INFO: Created pod server STEP: Creating a service svc-server for pod server in namespace e2e-tests-network-policy-qddtn Apr 5 20:29:23.405: INFO: Created service svc-server Apr 5 20:29:23.405: INFO: Waiting for Server to come up. STEP: Testing pods can connect to both ports when no policy is present. STEP: Creating client pod basecase-reachable-80 that should successfully connect to svc-server. Apr 5 20:29:27.417: INFO: Waiting for basecase-reachable-80 to complete. Apr 5 20:29:33.423: INFO: Waiting for basecase-reachable-80 to complete. Apr 5 20:29:33.423: INFO: Waiting up to 5m0s for pod "basecase-reachable-80" in namespace "e2e-tests-network-policy-qddtn" to be "success or failure" Apr 5 20:29:33.424: INFO: Pod "basecase-reachable-80": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1.496603ms STEP: Saw pod success Apr 5 20:29:33.424: INFO: Pod "basecase-reachable-80" satisfied condition "success or failure" STEP: Cleaning up the pod basecase-reachable-80 STEP: Creating client pod basecase-reachable-81 that should successfully connect to svc-server. Apr 5 20:29:33.435: INFO: Waiting for basecase-reachable-81 to complete. Apr 5 20:29:37.439: INFO: Waiting for basecase-reachable-81 to complete. Apr 5 20:29:37.439: INFO: Waiting up to 5m0s for pod "basecase-reachable-81" in namespace "e2e-tests-network-policy-qddtn" to be "success or failure" Apr 5 20:29:37.440: INFO: Pod "basecase-reachable-81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.470327ms STEP: Saw pod success Apr 5 20:29:37.440: INFO: Pod "basecase-reachable-81" satisfied condition "success or failure" STEP: Cleaning up the pod basecase-reachable-81 STEP: Creating a network policy for the Service which allows traffic only to one port. STEP: Testing pods can connect only to the port allowed by the policy. STEP: Creating client pod client-a that should not be able to connect to svc-server. Apr 5 20:29:37.457: INFO: Waiting for client-a to complete. Apr 5 20:29:37.457: INFO: Waiting up to 5m0s for pod "client-a" in namespace "e2e-tests-network-policy-qddtn" to be "success or failure" Apr 5 20:29:37.459: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096799ms Apr 5 20:29:39.461: INFO: Pod "client-a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004311035s Apr 5 20:29:41.463: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 4.006531538s Apr 5 20:29:43.466: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 6.008778279s Apr 5 20:29:45.468: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 8.011050691s Apr 5 20:29:47.470: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 10.013261831s Apr 5 20:29:49.472: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 12.015469223s Apr 5 20:29:51.475: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 14.017792937s Apr 5 20:29:53.477: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 16.020028287s Apr 5 20:29:55.479: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 18.022188464s Apr 5 20:29:57.481: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 20.024484735s Apr 5 20:29:59.484: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 22.026852704s Apr 5 20:30:01.486: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 24.029085028s Apr 5 20:30:03.488: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 26.031280913s Apr 5 20:30:05.490: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 28.033576247s Apr 5 20:30:07.493: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 30.035816254s Apr 5 20:30:09.495: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 32.03812208s Apr 5 20:30:11.497: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 34.040384545s Apr 5 20:30:13.499: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 36.042641858s Apr 5 20:30:15.502: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. 
Elapsed: 38.044942799s Apr 5 20:30:17.504: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 40.047219169s Apr 5 20:30:19.506: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 42.049367563s Apr 5 20:30:21.508: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 44.051526086s Apr 5 20:30:23.510: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 46.053662272s Apr 5 20:30:25.513: INFO: Pod "client-a": Phase="Running", Reason="", readiness=true. Elapsed: 48.055819676s Apr 5 20:30:27.515: INFO: Pod "client-a": Phase="Failed", Reason="", readiness=false. Elapsed: 50.058220653s STEP: Cleaning up the pod client-a STEP: Creating client pod client-b that should successfully connect to svc-server. Apr 5 20:30:27.530: INFO: Waiting for client-b to complete. Apr 5 20:30:31.536: INFO: Waiting for client-b to complete. Apr 5 20:30:31.536: INFO: Waiting up to 5m0s for pod "client-b" in namespace "e2e-tests-network-policy-qddtn" to be "success or failure" Apr 5 20:30:31.537: INFO: Pod "client-b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1.536574ms STEP: Saw pod success Apr 5 20:30:31.537: INFO: Pod "client-b" satisfied condition "success or failure" STEP: Cleaning up the pod client-b STEP: Cleaning up the policy. STEP: Cleaning up the server. STEP: Cleaning up the server's service. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:30:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-network-policy-qddtn" for this suite. Apr 5 20:30:37.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:30:37.680: INFO: namespace: e2e-tests-network-policy-qddtn, resource: bindings, ignored listing per whitelist Apr 5 20:30:37.685: INFO: namespace e2e-tests-network-policy-qddtn deletion completed in 6.109301207s • [SLOW TEST:74.362 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132 ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:30:37.685: INFO: >>> kubeConfig: 
/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:30:37.705: INFO: configPath is now "/tmp/extended-test-router-stress-psp7x-d7w7p-user.kubeconfig" Apr 5 20:30:37.705: INFO: The user is now "extended-test-router-stress-psp7x-d7w7p-user" Apr 5 20:30:37.705: INFO: Creating project "extended-test-router-stress-psp7x-d7w7p" Apr 5 20:30:37.775: INFO: Waiting on permissions in project "extended-test-router-stress-psp7x-d7w7p" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:30:37.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-psp7x-d7w7p" for this suite. Apr 5 20:30:43.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:30:43.836: INFO: namespace: extended-test-router-stress-psp7x-d7w7p, resource: bindings, ignored listing per whitelist Apr 5 20:30:43.898: INFO: namespace extended-test-router-stress-psp7x-d7w7p deletion completed in 6.111135857s S [SKIPPING] in Spec Setup (BeforeEach) [6.213 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ SSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:30:43.898: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:30:43.916: INFO: configPath is now 
"/tmp/openshift-extended-tests/extended-test-scoped-router-7lkv6-z2vq8-user.kubeconfig" Apr 5 20:30:43.916: INFO: The user is now "extended-test-scoped-router-7lkv6-z2vq8-user" Apr 5 20:30:43.916: INFO: Creating project "extended-test-scoped-router-7lkv6-z2vq8" Apr 5 20:30:43.966: INFO: Waiting on permissions in project "extended-test-scoped-router-7lkv6-z2vq8" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:30:44.000: INFO: Running 'oc new-app --config=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-7lkv6-z2vq8 -f /tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-7lkv6-z2vq8/" for "/tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-7lkv6-z2vq8 * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 Apr 5 20:30:44.360: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir011284646/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:30:49.374: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-z2vq8 execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.51' "http://10.128.0.51:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:30:49.666: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:30:49.666: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-z2vq8 execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: FIRST.example.com' "http://10.128.0.51/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:30:53.974: INFO: stderr: "" STEP: checking that second.example.com does not match a route Apr 5 20:30:53.974: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-z2vq8 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: second.example.com' "http://10.128.0.51/Letter"' Apr 5 20:30:54.257: INFO: stderr: "" STEP: checking that third.example.com does not match a route Apr 5 20:30:54.257: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-7lkv6-z2vq8 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: third.example.com' "http://10.128.0.51/Letter"' Apr 5 20:30:54.564: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:30:54.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-7lkv6-z2vq8" for this suite. Apr 5 20:31:22.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:31:22.669: INFO: namespace: extended-test-scoped-router-7lkv6-z2vq8, resource: bindings, ignored listing per whitelist Apr 5 20:31:22.680: INFO: namespace extended-test-scoped-router-7lkv6-z2vq8 deletion completed in 28.108875596s • [SLOW TEST:38.782 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-subnet' should block multicast traffic [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 Apr 5 20:31:22.680: INFO: Not using one of the specified plugins [AfterEach] when using one of the plugins 'redhat/openshift-ovs-subnet' 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:31:22.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-subnet' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should block multicast traffic [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31 Apr 5 20:31:22.680: Not using one of the specified plugins /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:31:22.682: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:31:22.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:31:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 Apr 5 20:31:22.682: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:31:22.683: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:31:22.722: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 Apr 5 20:31:22.918: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:31:26.940: INFO: Target pod IP:port is 10.128.0.52:8080 Apr 5 20:31:26.940: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:31:26.940: INFO: Creating new exec pod Apr 5 20:31:30.955: INFO: Waiting up to 10s to wget 10.128.0.52:8080 Apr 5 20:31:30.955: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-vqd52 execpod-sourceip-nettest-node-1g9z8n -- /bin/sh -c wget -T 30 -qO- 10.128.0.52:8080' Apr 5 20:31:31.249: INFO: stderr: "" Apr 5 20:31:31.249: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:31:31.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-6m4vm" for this suite. 
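Editor's note: the spec above verifies pod-to-pod reachability across namespaces by running wget from an exec pod against the target pod's IP. A minimal sketch of repeating that probe by hand, using the pod name, namespace, target IP and kubeconfig reported in the log above (these objects are deleted once the spec's teardown finishes):

# Sketch only: re-run the cross-namespace reachability probe the suite performed.
KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
kubectl --kubeconfig="$KUBECONFIG" --server=https://172.17.0.2:8443 \
  exec --namespace=e2e-tests-net-isolation2-vqd52 execpod-sourceip-nettest-node-1g9z8n \
  -- wget -T 30 -qO- 10.128.0.52:8080
# A non-empty HTTP response confirms that, on this plugin, pods in different namespaces can reach each other directly.
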
Apr 5 20:31:43.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:31:43.316: INFO: namespace: e2e-tests-net-isolation1-6m4vm, resource: bindings, ignored listing per whitelist Apr 5 20:31:43.371: INFO: namespace e2e-tests-net-isolation1-6m4vm deletion completed in 12.109114271s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:31:43.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-vqd52" for this suite. Apr 5 20:31:49.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:31:49.480: INFO: namespace: e2e-tests-net-isolation2-vqd52, resource: bindings, ignored listing per whitelist Apr 5 20:31:49.483: INFO: namespace e2e-tests-net-isolation2-vqd52 deletion completed in 6.109243707s • [SLOW TEST:26.800 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 ------------------------------ SSSSS ------------------------------ [Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:31:49.483: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:31:49.541: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace 
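Editor's note: for reference, the "should enforce policy based on Ports" spec earlier in this run creates a NetworkPolicy that admits ingress to the server on only one of its two ports, which is why client-a (blocked port) ends in Phase="Failed" while client-b succeeds. A minimal sketch of such a policy; the policy name, pod label, and allowed port below are illustrative assumptions, not the suite's exact fixture:

# Sketch only: a NetworkPolicy admitting ingress to the server pods on a single port.
oc create -n e2e-tests-network-policy-qddtn -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-on-port-81      # hypothetical name
spec:
  podSelector:
    matchLabels:
      pod-name: server                # hypothetical label on the server pod
  ingress:
  - ports:
    - protocol: TCP
      port: 81                        # traffic to any other port (e.g. 80) is dropped
EOF
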
[It] should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 Apr 5 20:31:49.718: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:31:53.732: INFO: Target pod IP:port is 10.128.0.54:8080 Apr 5 20:31:53.745: INFO: Endpoint e2e-tests-net-services1-fmvmh/service-ng6fb is not ready yet Apr 5 20:31:58.749: INFO: Target service IP:port is 172.30.175.206:8080 Apr 5 20:31:58.749: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:31:58.749: INFO: Creating new exec pod Apr 5 20:32:02.762: INFO: Waiting up to 10s to wget 172.30.175.206:8080 Apr 5 20:32:02.763: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-7tj6t execpod-sourceip-nettest-node-175dtm -- /bin/sh -c wget -T 30 -qO- 172.30.175.206:8080' Apr 5 20:32:03.072: INFO: stderr: "" Apr 5 20:32:03.072: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:03.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-fmvmh" for this suite. Apr 5 20:32:15.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:32:15.200: INFO: namespace: e2e-tests-net-services1-fmvmh, resource: bindings, ignored listing per whitelist Apr 5 20:32:15.212: INFO: namespace e2e-tests-net-services1-fmvmh deletion completed in 12.113506125s [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:15.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-7tj6t" for this suite. 
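Editor's note: the services spec above resolves the target service's cluster IP, waits for its endpoints to become ready, and then probes the service IP from an exec pod. A sketch of the same checks run by hand, using the service name, namespaces, cluster IP and kubeconfig reported in the log (all of which are torn down right after the spec):

# Sketch: inspect the service the spec created and repeat its probe.
ADMIN_KUBECONFIG=/tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig
oc --config="$ADMIN_KUBECONFIG" -n e2e-tests-net-services1-fmvmh get svc,endpoints service-ng6fb
# The cluster IP/port (172.30.175.206:8080 above) should list a ready endpoint before probing.
kubectl --kubeconfig="$ADMIN_KUBECONFIG" --server=https://172.17.0.2:8443 \
  exec --namespace=e2e-tests-net-services2-7tj6t execpod-sourceip-nettest-node-175dtm \
  -- wget -T 30 -qO- 172.30.175.206:8080
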
Apr 5 20:32:21.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:32:21.320: INFO: namespace: e2e-tests-net-services2-7tj6t, resource: bindings, ignored listing per whitelist Apr 5 20:32:21.330: INFO: namespace e2e-tests-net-services2-7tj6t deletion completed in 6.115716352s • [SLOW TEST:31.847 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27 ------------------------------ SS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:32:21.330: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:21.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:21.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 Apr 5 20:32:21.330: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:32:21.331: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:21.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:21.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28 Apr 5 20:32:21.331: This plugin does not isolate namespaces by default. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:32:21.333: INFO: >>> kubeConfig: /tmp/openshift/networking/networkpolicy/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:32:21.354: INFO: configPath is now "/tmp/extended-test-router-metrics-r6dbc-ckdxq-user.kubeconfig" Apr 5 20:32:21.354: INFO: The user is now "extended-test-router-metrics-r6dbc-ckdxq-user" Apr 5 20:32:21.354: INFO: Creating project "extended-test-router-metrics-r6dbc-ckdxq" Apr 5 20:32:21.411: INFO: Waiting on permissions in project "extended-test-router-metrics-r6dbc-ckdxq" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:21.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-r6dbc-ckdxq" for this suite. 
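Editor's note: several router specs in this run skip with "no router installed on the cluster"; the scoped-router spec above instead deploys its own router pod from scoped-router.yaml and polls its health port from an exec pod before checking routes. A standalone sketch of that probe loop, assuming the router pod IP 10.128.0.51 and stats port 1936 reported in the log:

# Sketch: poll the router's health endpoint until it reports 200 (503 is normal while HAProxy is still reloading).
ROUTER_IP=10.128.0.51
for i in $(seq 1 180); do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "http://${ROUTER_IP}:1936/healthz") || true
  if [ "$code" = "200" ]; then echo "router healthy"; break; fi
  sleep 1
done
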
Apr 5 20:32:27.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:32:27.494: INFO: namespace: extended-test-router-metrics-r6dbc-ckdxq, resource: bindings, ignored listing per whitelist Apr 5 20:32:27.539: INFO: namespace extended-test-router-metrics-r6dbc-ckdxq deletion completed in 6.111416784s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.207 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSSSSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should prevent communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:32 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 Apr 5 20:32:27.540: INFO: This plugin does not isolate namespaces by default. 
[AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:27.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:32:27.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:32 Apr 5 20:32:27.540: This plugin does not isolate namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ Apr 5 20:32:27.541: INFO: Running AfterSuite actions on all node Apr 5 20:32:27.541: INFO: Running AfterSuite actions on node 1 Ran 22 of 440 Specs in 1115.829 seconds SUCCESS! -- 22 Passed | 0 Failed | 0 Pending | 418 Skipped Apr 5 20:32:27.544: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/networking/artifacts/junit Apr 5 20:32:27.544: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec cluster/log-dump/log-dump.sh: no such file or directory --- PASS: TestExtended (1115.92s) PASS [INFO] [20:32:27+0000] Saving container logs [INFO] [20:32:30+0000] Shutting down docker-in-docker cluster for the networkpolicy plugin Stopping dind cluster 'nettest' [INFO] [20:32:39+0000] Targeting multitenant plugin: redhat/openshift-ovs-multitenant [INFO] [20:32:39+0000] Launching a docker-in-docker cluster for the multitenant plugin Stopping dind cluster 'nettest' cat: /tmp/openshift/networking/multitenant/dind-env: No such file or directory Starting dind cluster 'nettest' with plugin 'redhat/openshift-ovs-multitenant' and runtime 'dockershim' Waiting for ok .......................................................................... Done Waiting for 3 nodes to report readiness .............. Done Before invoking the openshift cli, make sure to source the cluster's rc file to configure the bash environment: $ . dind-nettest.rc $ oc get nodes [INFO] [20:34:40+0000] Saving cluster configuration [INFO] [20:34:40+0000] Running networking e2e tests against the multitenant plugin I0405 20:34:41.026946 23177 test.go:94] Extended test version v3.10.0-alpha.0+4253ab3-549 === RUN TestExtended Running Suite: Extended ======================= Random Seed: 1522960480 - Will randomize all specs Will run 45 of 440 specs Apr 5 20:34:41.118: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig I0405 20:34:41.118384 23177 e2e.go:56] The --provider flag is not set. 
Treating as a conformance test. Some tests may not be run. Apr 5 20:34:41.121: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Apr 5 20:34:41.135: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 5 20:34:41.140: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 5 20:34:41.140: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 5 20:34:41.142: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Apr 5 20:34:41.142: INFO: Dumping network health container logs from all nodes... Apr 5 20:34:41.143: INFO: e2e test version: v1.9.1+a0ce1bc657 Apr 5 20:34:41.144: INFO: kube-apiserver version: v1.9.1+a0ce1bc657 I0405 20:34:41.144555 23177 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run. [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:34:41.363: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:34:41.461: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:34:41.465: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 Apr 5 20:34:41.574: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:36:43.589: INFO: Target pod IP:port is 10.128.0.37:8080 Apr 5 20:36:43.607: INFO: Endpoint e2e-tests-net-services1-smf9h/service-nn227 is not ready yet Apr 5 20:36:48.610: INFO: Target service IP:port is 172.30.188.69:8080 Apr 5 20:36:48.610: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:36:48.610: INFO: Creating new exec pod Apr 5 20:36:58.623: INFO: Waiting up to 10s to wget 172.30.188.69:8080 Apr 5 20:36:58.623: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-mtvnb execpod-sourceip-nettest-node-2wnrpn -- /bin/sh -c wget -T 30 -qO- 172.30.188.69:8080' Apr 5 20:37:28.924: INFO: rc: 127 Apr 5 20:37:28.924: INFO: got err: error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-mtvnb execpod-sourceip-nettest-node-2wnrpn -- /bin/sh -c wget -T 30 -qO- 172.30.188.69:8080] [] <nil> wget: download timed out command terminated with exit code 1 [] <nil> 0xc4201b3140 exit status 1 <nil> <nil> true [0xc421796118 0xc421796130 0xc421796148] [0xc421796118 0xc421796130 0xc421796148] [0xc421796128 0xc421796140] [0x989690 0x989690] 0xc421424480 <nil>}: Command stdout: stderr: wget: download timed out command terminated with exit code 1 error: exit status 1 , retry until timeout Apr 5 20:37:28.924: INFO: Creating new exec pod Apr 5 20:38:08.938: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-mtvnb debugpod-sourceip-nettest-node-2n27cl -- /bin/sh -c ovs-ofctl -O OpenFlow13 dump-flows br0' Apr 5 20:38:09.225: INFO: stderr: "" Apr 5 20:38:09.225: INFO: DEBUG: OFPST_FLOW reply (OF1.3) (xid=0x2): cookie=0x0, duration=217.718s, table=0, n_packets=0, n_bytes=0, priority=250,ip,in_port=2,nw_dst=224.0.0.0/4 actions=drop cookie=0x0, duration=217.751s, table=0, n_packets=0, n_bytes=0, priority=200,arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.129.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=217.745s, table=0, n_packets=0, 
n_bytes=0, priority=200,ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=217.731s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_dst=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=217.701s, table=0, n_packets=1, n_bytes=42, priority=200,arp,in_port=2,arp_spa=10.129.0.1,arp_tpa=10.128.0.0/14 actions=goto_table:30 cookie=0x0, duration=217.696s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=2 actions=goto_table:30 cookie=0x0, duration=217.724s, table=0, n_packets=0, n_bytes=0, priority=150,in_port=1 actions=drop cookie=0x0, duration=217.685s, table=0, n_packets=8, n_bytes=648, priority=150,in_port=2 actions=drop cookie=0x0, duration=217.678s, table=0, n_packets=1, n_bytes=42, priority=100,arp actions=goto_table:20 cookie=0x0, duration=217.670s, table=0, n_packets=5, n_bytes=370, priority=100,ip actions=goto_table:20 cookie=0x0, duration=217.661s, table=0, n_packets=7, n_bytes=558, priority=0 actions=drop cookie=0x80c973ba, duration=217.325s, table=10, n_packets=0, n_bytes=0, priority=100,tun_src=172.17.0.3 actions=goto_table:30 cookie=0xfb5876b8, duration=217.300s, table=10, n_packets=0, n_bytes=0, priority=100,tun_src=172.17.0.2 actions=goto_table:30 cookie=0x0, duration=217.652s, table=10, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=73.401s, table=20, n_packets=1, n_bytes=42, priority=100,arp,in_port=3,arp_spa=10.129.0.2,arp_sha=00:00:0a:81:00:02/00:00:ff:ff:ff:ff actions=load:0x64c0eb->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=73.398s, table=20, n_packets=5, n_bytes=370, priority=100,ip,in_port=3,nw_src=10.129.0.2 actions=load:0x64c0eb->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=217.645s, table=20, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.637s, table=21, n_packets=6, n_bytes=412, priority=0 actions=goto_table:30 cookie=0x0, duration=217.629s, table=30, n_packets=1, n_bytes=42, priority=300,arp,arp_tpa=10.129.0.1 actions=output:2 cookie=0x0, duration=217.607s, table=30, n_packets=0, n_bytes=0, priority=300,ip,nw_dst=10.129.0.1 actions=output:2 cookie=0x0, duration=217.622s, table=30, n_packets=1, n_bytes=42, priority=200,arp,arp_tpa=10.129.0.0/23 actions=goto_table:40 cookie=0x0, duration=217.597s, table=30, n_packets=0, n_bytes=0, priority=200,ip,nw_dst=10.129.0.0/23 actions=goto_table:70 cookie=0x0, duration=217.615s, table=30, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.128.0.0/14 actions=goto_table:50 cookie=0x0, duration=217.590s, table=30, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.128.0.0/14 actions=goto_table:90 cookie=0x0, duration=217.602s, table=30, n_packets=5, n_bytes=370, priority=100,ip,nw_dst=172.30.0.0/16 actions=goto_table:60 cookie=0x0, duration=217.586s, table=30, n_packets=0, n_bytes=0, priority=50,ip,in_port=1,nw_dst=224.0.0.0/4 actions=goto_table:120 cookie=0x0, duration=217.577s, table=30, n_packets=0, n_bytes=0, priority=25,ip,nw_dst=224.0.0.0/4 actions=goto_table:110 cookie=0x0, duration=217.571s, table=30, n_packets=0, n_bytes=0, priority=0,ip actions=goto_table:100 cookie=0x0, duration=217.566s, table=30, n_packets=0, n_bytes=0, priority=0,arp actions=drop cookie=0x0, duration=73.395s, table=40, n_packets=1, n_bytes=42, priority=100,arp,arp_tpa=10.129.0.2 actions=output:3 cookie=0x0, duration=217.558s, table=40, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x80c973ba, duration=217.320s, table=50, n_packets=0, n_bytes=0, 
priority=100,arp,arp_tpa=10.128.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.3->tun_dst,output:1 cookie=0xfb5876b8, duration=217.295s, table=50, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.130.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1 cookie=0x0, duration=217.552s, table=50, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.545s, table=60, n_packets=0, n_bytes=0, priority=200,reg0=0 actions=output:2 cookie=0x0, duration=217.330s, table=60, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=172.30.0.1,nw_frag=later actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=85.604s, table=60, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=172.30.188.69,nw_frag=later actions=load:0x815ec7->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=217.325s, table=60, n_packets=0, n_bytes=0, priority=100,tcp,nw_dst=172.30.0.1,tp_dst=443 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=217.316s, table=60, n_packets=0, n_bytes=0, priority=100,udp,nw_dst=172.30.0.1,tp_dst=53 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=217.309s, table=60, n_packets=0, n_bytes=0, priority=100,tcp,nw_dst=172.30.0.1,tp_dst=53 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=85.594s, table=60, n_packets=5, n_bytes=370, priority=100,tcp,nw_dst=172.30.188.69,tp_dst=8080 actions=load:0x815ec7->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=217.540s, table=60, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=73.393s, table=70, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.129.0.2 actions=load:0x64c0eb->NXM_NX_REG1[],load:0x3->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=217.531s, table=70, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.525s, table=80, n_packets=0, n_bytes=0, priority=300,ip,nw_src=10.129.0.1 actions=output:NXM_NX_REG2[] cookie=0x0, duration=217.439s, table=80, n_packets=0, n_bytes=0, priority=200,reg0=0 actions=output:NXM_NX_REG2[] cookie=0x0, duration=217.433s, table=80, n_packets=0, n_bytes=0, priority=200,reg1=0 actions=output:NXM_NX_REG2[] cookie=0x0, duration=207.817s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x815ec7,reg1=0x815ec7 actions=output:NXM_NX_REG2[] cookie=0x0, duration=207.710s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x64c0eb,reg1=0x64c0eb actions=output:NXM_NX_REG2[] cookie=0x0, duration=217.516s, table=80, n_packets=5, n_bytes=370, priority=0 actions=drop cookie=0x80c973ba, duration=217.316s, table=90, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.128.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.3->tun_dst,output:1 cookie=0xfb5876b8, duration=217.283s, table=90, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.130.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1 cookie=0x0, duration=217.509s, table=90, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.503s, table=100, n_packets=0, n_bytes=0, priority=0 actions=goto_table:101 cookie=0x0, duration=217.498s, table=101, n_packets=0, n_bytes=0, priority=51,tcp,nw_dst=172.17.0.4,tp_dst=53 actions=output:2 cookie=0x0, duration=217.488s, table=101, n_packets=0, n_bytes=0, priority=51,udp,nw_dst=172.17.0.4,tp_dst=53 actions=output:2 cookie=0x0, duration=217.478s, 
table=101, n_packets=0, n_bytes=0, priority=0 actions=output:2 cookie=0x0, duration=217.474s, table=110, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.278s, table=111, n_packets=0, n_bytes=0, priority=100 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1,set_field:172.17.0.3->tun_dst,output:1,goto_table:120 cookie=0x0, duration=217.459s, table=120, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=217.448s, table=253, n_packets=0, n_bytes=0, actions=note:01.07.00.00.00.00 Apr 5 20:38:09.225: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-mtvnb debugpod-sourceip-nettest-node-2n27cl -- /bin/sh -c iptables-save' Apr 5 20:38:09.522: INFO: stderr: "" Apr 5 20:38:09.522: INFO: DEBUG: # Generated by iptables-save v1.4.21 on Thu Apr 5 20:38:09 2018 *nat :PREROUTING ACCEPT [4:240] :INPUT ACCEPT [4:240] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :DOCKER - [0:0] :KUBE-HOSTPORTS - [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-NODEPORT-CONTAINER - [0:0] :KUBE-NODEPORT-HOST - [0:0] :KUBE-NODEPORTS - [0:0] :KUBE-PORTALS-CONTAINER - [0:0] :KUBE-PORTALS-HOST - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-SEP-CKTKXEMIKRIIY55M - [0:0] :KUBE-SEP-EZ5ESXJRZ36JV4D4 - [0:0] :KUBE-SEP-PATXOTJBHFPU4CNS - [0:0] :KUBE-SEP-V33LWMNJZUDMD5TN - [0:0] :KUBE-SERVICES - [0:0] :KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0] :KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0] :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0] :KUBE-SVC-RUGAUB7C4343TUNN - [0:0] :OPENSHIFT-MASQUERADE - [0:0] -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER -A PREROUTING -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST -A OUTPUT -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS -A POSTROUTING -m comment --comment "rules for masquerading OpenShift traffic" -j OPENSHIFT-MASQUERADE -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING -A POSTROUTING -s 172.19.0.0/16 ! 
-o docker0 -j MASQUERADE -A POSTROUTING -s 127.0.0.0/8 -o tun0 -m comment --comment "SNAT for localhost access to hostports" -j MASQUERADE -A DOCKER -i docker0 -j RETURN -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x1/0x1 -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x1/0x1 -j MASQUERADE -A KUBE-SEP-CKTKXEMIKRIIY55M -s 172.17.0.2/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ -A KUBE-SEP-CKTKXEMIKRIIY55M -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-CKTKXEMIKRIIY55M --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 172.17.0.2:8053 -A KUBE-SEP-EZ5ESXJRZ36JV4D4 -s 172.17.0.2/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ -A KUBE-SEP-EZ5ESXJRZ36JV4D4 -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-EZ5ESXJRZ36JV4D4 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.17.0.2:8053 -A KUBE-SEP-PATXOTJBHFPU4CNS -s 172.17.0.2/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ -A KUBE-SEP-PATXOTJBHFPU4CNS -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-PATXOTJBHFPU4CNS --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.17.0.2:8443 -A KUBE-SEP-V33LWMNJZUDMD5TN -s 10.128.0.37/32 -m comment --comment "e2e-tests-net-services1-smf9h/service-nn227:" -j KUBE-MARK-MASQ -A KUBE-SEP-V33LWMNJZUDMD5TN -p tcp -m comment --comment "e2e-tests-net-services1-smf9h/service-nn227:" -m tcp -j DNAT --to-destination 10.128.0.37:8080 -A KUBE-SERVICES -d 172.30.0.1/32 -p udp -m comment --comment "default/kubernetes:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4 -A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56 -A KUBE-SERVICES -d 172.30.188.69/32 -p tcp -m comment --comment "e2e-tests-net-services1-smf9h/service-nn227: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-RUGAUB7C4343TUNN -A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-CKTKXEMIKRIIY55M --mask 255.255.255.255 --rsource -j KUBE-SEP-CKTKXEMIKRIIY55M -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -j KUBE-SEP-CKTKXEMIKRIIY55M -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-EZ5ESXJRZ36JV4D4 --mask 255.255.255.255 --rsource -j KUBE-SEP-EZ5ESXJRZ36JV4D4 -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-SEP-EZ5ESXJRZ36JV4D4 -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-PATXOTJBHFPU4CNS --mask 255.255.255.255 --rsource -j KUBE-SEP-PATXOTJBHFPU4CNS -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-PATXOTJBHFPU4CNS -A KUBE-SVC-RUGAUB7C4343TUNN -m comment --comment "e2e-tests-net-services1-smf9h/service-nn227:" -j KUBE-SEP-V33LWMNJZUDMD5TN -A OPENSHIFT-MASQUERADE -s 
10.128.0.0/14 -m comment --comment "masquerade pod-to-service and pod-to-external traffic" -j MASQUERADE COMMIT # Completed on Thu Apr 5 20:38:09 2018 # Generated by iptables-save v1.4.21 on Thu Apr 5 20:38:09 2018 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [169:33270] :DOCKER - [0:0] :DOCKER-ISOLATION - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-FORWARD - [0:0] :KUBE-NODEPORT-NON-LOCAL - [0:0] :KUBE-SERVICES - [0:0] :OPENSHIFT-ADMIN-OUTPUT-RULES - [0:0] :OPENSHIFT-FIREWALL-ALLOW - [0:0] :OPENSHIFT-FIREWALL-FORWARD - [0:0] -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -m comment --comment "Ensure that non-local NodePort traffic can flow" -j KUBE-NODEPORT-NON-LOCAL -A INPUT -m comment --comment "firewall overrides" -j OPENSHIFT-FIREWALL-ALLOW -A INPUT -j KUBE-FIREWALL -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 1936 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -i tun0 ! -o tun0 -m comment --comment "administrator overrides" -j OPENSHIFT-ADMIN-OUTPUT-RULES -A FORWARD -m comment --comment "firewall overrides" -j OPENSHIFT-FIREWALL-FORWARD -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -j DOCKER -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x1/0x1 -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -p udp -m udp --dport 4789 -m comment --comment "VXLAN incoming" -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -i tun0 -m comment --comment "from SDN to localhost" -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -i docker0 -m comment --comment "from docker to localhost" -j ACCEPT -A OPENSHIFT-FIREWALL-FORWARD -s 10.128.0.0/14 -m comment --comment "attempted resend after connection close" -m conntrack --ctstate INVALID -j DROP -A OPENSHIFT-FIREWALL-FORWARD -d 10.128.0.0/14 -m comment --comment "forward traffic from SDN" -j ACCEPT -A OPENSHIFT-FIREWALL-FORWARD -s 10.128.0.0/14 -m comment --comment "forward traffic to SDN" -j ACCEPT COMMIT # Completed on Thu Apr 5 20:38:09 2018 Apr 5 20:38:09.522: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-mtvnb debugpod-sourceip-nettest-node-2n27cl -- /bin/sh -c ss -ant' Apr 5 20:38:09.816: INFO: stderr: "" Apr 5 20:38:09.816: INFO: DEBUG: State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* TIME-WAIT 0 0 172.17.0.4:34440 52.216.228.248:443 TIME-WAIT 0 0 172.17.0.4:34430 52.216.228.248:443 ESTAB 0 0 172.17.0.4:46188 172.17.0.2:8443 TIME-WAIT 0 0 172.17.0.4:34436 52.216.228.248:443 TIME-WAIT 0 0 172.17.0.4:34432 52.216.228.248:443 TIME-WAIT 0 0 172.17.0.4:37792 52.216.228.136:443 LISTEN 0 128 :::10250 :::* LISTEN 0 128 :::10256 :::* LISTEN 0 128 :::22 :::* ESTAB 0 0 ::ffff:172.17.0.4:10250 ::ffff:172.17.0.2:41518 Apr 5 20:38:09.821: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:09.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-smf9h" for this suite. Apr 5 20:38:15.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:38:15.920: INFO: namespace: e2e-tests-net-services1-smf9h, resource: bindings, ignored listing per whitelist Apr 5 20:38:15.970: INFO: namespace e2e-tests-net-services1-smf9h deletion completed in 6.12310069s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:15.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-mtvnb" for this suite. 
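For reference, the iptables and socket dumps above were produced by exec'ing ordinary debug commands inside the suite's debug pod. A minimal sketch of the same check, reusing the pod name, namespace, API server address and kubeconfig path seen in this run (all of these are generated per run and will differ elsewhere):

  # Dump NAT/filter rules and open sockets from inside the debug pod,
  # exactly as the suite does above (values taken from this log).
  KUBECONFIG=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig
  kubectl --server=https://172.17.0.2:8443 --kubeconfig="$KUBECONFIG" \
    exec --namespace=e2e-tests-net-services2-mtvnb debugpod-sourceip-nettest-node-2n27cl -- \
    /bin/sh -c 'iptables-save'
  kubectl --server=https://172.17.0.2:8443 --kubeconfig="$KUBECONFIG" \
    exec --namespace=e2e-tests-net-services2-mtvnb debugpod-sourceip-nettest-node-2n27cl -- \
    /bin/sh -c 'ss -ant'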
Apr 5 20:38:21.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:38:22.078: INFO: namespace: e2e-tests-net-services2-mtvnb, resource: bindings, ignored listing per whitelist Apr 5 20:38:22.084: INFO: namespace e2e-tests-net-services2-mtvnb deletion completed in 6.112531453s • [SLOW TEST:220.940 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 ------------------------------ SS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:38:22.085: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:38:22.157: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 Apr 5 20:38:22.295: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:38:28.316: INFO: Target pod IP:port is 10.128.0.38:8080 Apr 5 20:38:28.316: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:38:28.316: INFO: Creating new exec pod Apr 5 20:38:32.328: INFO: Waiting up to 10s to wget 10.128.0.38:8080 Apr 5 20:38:32.328: INFO: Running 
'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-h9rbf execpod-sourceip-nettest-node-2vd6t6 -- /bin/sh -c wget -T 30 -qO- 10.128.0.38:8080' Apr 5 20:38:32.609: INFO: stderr: "" Apr 5 20:38:32.609: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:32.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-v4t6z" for this suite. Apr 5 20:38:38.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:38:38.725: INFO: namespace: e2e-tests-net-isolation1-v4t6z, resource: bindings, ignored listing per whitelist Apr 5 20:38:38.732: INFO: namespace e2e-tests-net-isolation1-v4t6z deletion completed in 6.108671777s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:38.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-h9rbf" for this suite. Apr 5 20:38:44.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:38:44.772: INFO: namespace: e2e-tests-net-isolation2-h9rbf, resource: bindings, ignored listing per whitelist Apr 5 20:38:44.849: INFO: namespace e2e-tests-net-isolation2-h9rbf deletion completed in 6.11570635s • [SLOW TEST:22.765 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 ------------------------------ SSSSSS ------------------------------ [Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 Apr 5 20:38:44.849: INFO: This plugin isolates namespaces by default. 
[AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:44.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:38:44.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31 Apr 5 20:38:44.849: This plugin isolates namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:38:44.851: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:38:44.947: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 Apr 5 20:38:45.065: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:38:49.085: INFO: Target pod IP:port is 10.128.0.39:8080 Apr 5 20:38:49.099: INFO: Endpoint e2e-tests-net-services1-42nmw/service-w4klx is not ready yet Apr 5 20:38:54.103: INFO: Target service IP:port is 172.30.78.149:8080 Apr 5 20:38:54.103: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:38:54.103: INFO: Creating new exec pod Apr 5 20:39:00.114: INFO: Waiting up to 10s to wget 172.30.78.149:8080 Apr 5 20:39:00.114: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-4gwhx execpod-sourceip-nettest-node-17fp8j -- /bin/sh -c wget -T 30 -qO- 172.30.78.149:8080' Apr 5 20:39:00.404: INFO: stderr: "" Apr 5 20:39:00.404: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:39:00.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-42nmw" for this suite. Apr 5 20:39:10.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:39:10.530: INFO: namespace: e2e-tests-net-services1-42nmw, resource: bindings, ignored listing per whitelist Apr 5 20:39:10.549: INFO: namespace e2e-tests-net-services1-42nmw deletion completed in 10.12008546s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:39:10.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-4gwhx" for this suite. 
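The positive connectivity check in the test above reduces to a single wget from the exec pod to the target service's cluster IP, run through kubectl exec. A sketch of that probe with the addresses from this run (exec pod name, namespace and service IP are per-run values):

  # Quiet fetch to stdout with a 30-second timeout, as the suite runs it above;
  # success means the service IP in the other namespace answered.
  kubectl --server=https://172.17.0.2:8443 \
    --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig \
    exec --namespace=e2e-tests-net-services2-4gwhx execpod-sourceip-nettest-node-17fp8j -- \
    /bin/sh -c 'wget -T 30 -qO- 172.30.78.149:8080'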
Apr 5 20:39:16.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:39:16.605: INFO: namespace: e2e-tests-net-services2-4gwhx, resource: bindings, ignored listing per whitelist Apr 5 20:39:16.660: INFO: namespace e2e-tests-net-services2-4gwhx deletion completed in 6.108387441s • [SLOW TEST:31.809 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:39:16.660: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:39:16.686: INFO: configPath is now "/tmp/extended-test-router-reencrypt-6vdnr-599kq-user.kubeconfig" Apr 5 20:39:16.686: INFO: The user is now "extended-test-router-reencrypt-6vdnr-599kq-user" Apr 5 20:39:16.686: INFO: Creating project "extended-test-router-reencrypt-6vdnr-599kq" Apr 5 20:39:16.740: INFO: Waiting on permissions in project "extended-test-router-reencrypt-6vdnr-599kq" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:41 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:29 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:39:16.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-reencrypt-6vdnr-599kq" for this suite. 
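The reencrypt router test above ends up being skipped (the skip reason appears just below: no router installed on the cluster). The weighted and scoped router tests that follow instead deploy a throwaway router from a template fixture via oc new-app; their invocation, as captured below with this run's generated namespace and fixture directory, is roughly:

  # Instantiate the weighted-router fixture into the test project; the
  # namespace and fixture path are generated per run (values from this log).
  oc new-app \
    --config=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig \
    --namespace=extended-test-weighted-router-csm5h-hxttv \
    -f /tmp/fixture-testdata-dir426060294/test/extended/testdata/weighted-router.yaml \
    -p IMAGE=openshift/origin-haproxy-router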
Apr 5 20:39:22.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:39:22.838: INFO: namespace: extended-test-router-reencrypt-6vdnr-599kq, resource: bindings, ignored listing per whitelist Apr 5 20:39:22.908: INFO: namespace extended-test-router-reencrypt-6vdnr-599kq deletion completed in 6.115117259s S [SKIPPING] in Spec Setup (BeforeEach) [6.248 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:18 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:52 should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:44 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:39:22.908: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:39:22.928: INFO: configPath is now "/tmp/extended-test-weighted-router-csm5h-hxttv-user.kubeconfig" Apr 5 20:39:22.928: INFO: The user is now "extended-test-weighted-router-csm5h-hxttv-user" Apr 5 20:39:22.928: INFO: Creating project "extended-test-weighted-router-csm5h-hxttv" Apr 5 20:39:23.093: INFO: Waiting on permissions in project "extended-test-weighted-router-csm5h-hxttv" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:29 Apr 5 20:39:23.097: INFO: Running 'oc new-app --config=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-weighted-router-csm5h-hxttv -f /tmp/fixture-testdata-dir426060294/test/extended/testdata/weighted-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-weighted-router-csm5h-hxttv/" for "/tmp/fixture-testdata-dir426060294/test/extended/testdata/weighted-router.yaml" to project extended-test-weighted-router-csm5h-hxttv * With parameters: * IMAGE=openshift/origin-haproxy-router --> Creating resources ... 
pod "weighted-router" created rolebinding "system-router" created route "weightedroute" created route "zeroweightroute" created service "weightedendpoints1" created service "weightedendpoints2" created pod "endpoint-1" created pod "endpoint-2" created pod "endpoint-3" created --> Success Access your application via route 'weighted.example.com' Access your application via route 'zeroweight.example.com' Run 'oc status' to view your app. [It] should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 Apr 5 20:39:23.453: INFO: Creating new exec pod STEP: creating a weighted router from a config file "/tmp/fixture-testdata-dir426060294/test/extended/testdata/weighted-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:39:29.480: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-csm5h-hxttv execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.129.0.4' "http://10.129.0.4:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:39:29.780: INFO: stderr: "" STEP: checking that 100 requests go through successfully Apr 5 20:39:29.780: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-csm5h-hxttv execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.129.0.4" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:39:30.072: INFO: stderr: "" Apr 5 20:39:30.072: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-csm5h-hxttv execpod -- /bin/sh -c set -e for i in $(seq 1 100); do code=$( curl -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.129.0.4" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -ne 200 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi done ' Apr 5 20:39:30.799: INFO: stderr: "" STEP: checking that there are three weighted backends in the router stats Apr 5 20:39:30.799: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-csm5h-hxttv execpod -- /bin/sh -c curl -s -u admin:password --header 'Host: weighted.example.com' "http://10.129.0.4:1936/;csv"' Apr 5 20:39:31.107: INFO: stderr: "" STEP: checking that zero weights are also respected by the router Apr 5 20:39:31.107: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-csm5h-hxttv execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: zeroweight.example.com' "http://10.129.0.4"' Apr 5 20:39:31.409: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:39:31.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-weighted-router-csm5h-hxttv" for this suite. Apr 5 20:39:39.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:39:39.519: INFO: namespace: extended-test-weighted-router-csm5h-hxttv, resource: bindings, ignored listing per whitelist Apr 5 20:39:39.529: INFO: namespace extended-test-weighted-router-csm5h-hxttv deletion completed in 8.111919973s • [SLOW TEST:16.621 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:22 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:38 should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39 ------------------------------ SSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:39:39.529: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:39:39.548: INFO: 
configPath is now "/tmp/extended-test-scoped-router-t54ld-9tdmn-user.kubeconfig" Apr 5 20:39:39.548: INFO: The user is now "extended-test-scoped-router-t54ld-9tdmn-user" Apr 5 20:39:39.548: INFO: Creating project "extended-test-scoped-router-t54ld-9tdmn" Apr 5 20:39:39.618: INFO: Waiting on permissions in project "extended-test-scoped-router-t54ld-9tdmn" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:39:39.634: INFO: Running 'oc new-app --config=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-t54ld-9tdmn -f /tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-t54ld-9tdmn/" for "/tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-t54ld-9tdmn * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 Apr 5 20:39:39.974: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:40:17.001: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-9tdmn execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.128.0.43' "http://10.128.0.43:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:40:17.307: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:40:17.307: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-9tdmn execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: FIRST.example.com' "http://10.128.0.43/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:40:17.591: INFO: stderr: "" STEP: checking that second.example.com does not match a route Apr 5 20:40:17.591: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-9tdmn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: second.example.com' "http://10.128.0.43/Letter"' Apr 5 20:40:17.877: INFO: stderr: "" STEP: checking that third.example.com does not match a route Apr 5 20:40:17.877: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-9tdmn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: third.example.com' "http://10.128.0.43/Letter"' Apr 5 20:40:18.168: INFO: stderr: "" [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:40:18.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-t54ld-9tdmn" for this suite. Apr 5 20:40:32.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:40:32.233: INFO: namespace: extended-test-scoped-router-t54ld-9tdmn, resource: bindings, ignored listing per whitelist Apr 5 20:40:32.288: INFO: namespace extended-test-scoped-router-t54ld-9tdmn deletion completed in 14.113071565s • [SLOW TEST:52.759 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:42 ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:40:32.289: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:40:32.372: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 Apr 5 20:40:32.575: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:40:36.591: INFO: Target pod IP:port is 10.128.0.45:8080 Apr 5 20:40:36.591: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:40:36.591: INFO: Creating new exec pod Apr 5 20:40:42.604: INFO: Waiting up to 10s to wget 10.128.0.45:8080 Apr 5 20:40:42.604: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-vrndl execpod-sourceip-nettest-node-158nhj -- /bin/sh -c wget -T 30 -qO- 10.128.0.45:8080' Apr 5 20:40:42.890: INFO: stderr: "" Apr 5 20:40:42.890: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:40:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-gpxvm" for this suite. Apr 5 20:40:48.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:40:48.978: INFO: namespace: e2e-tests-net-isolation1-gpxvm, resource: bindings, ignored listing per whitelist Apr 5 20:40:49.018: INFO: namespace e2e-tests-net-isolation1-gpxvm deletion completed in 6.114545497s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:40:49.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-vrndl" for this suite. 
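The weighted and scoped router tests earlier in this log wait for HAProxy by polling its stats/health endpoint with an inline shell loop that is hard to read once flattened into a single log line. Re-indented, with the router pod IP and stats port (1936) taken from this run, the probe is:

  # Poll the router's healthz endpoint until it returns 200, tolerating
  # 503 while HAProxy is still starting; give up after 180 attempts.
  set -e
  for i in $(seq 1 180); do
    code=$( curl -k -s -o /dev/null -w '%{http_code}\n' \
            --header 'Host: 10.129.0.4' "http://10.129.0.4:1936/healthz" ) || rc=$?
    if [[ "${rc:-0}" -eq 0 ]]; then
      echo $code
      if [[ $code -eq 200 ]]; then exit 0; fi
      if [[ $code -ne 503 ]]; then exit 1; fi
    else
      echo "error ${rc}" 1>&2
    fi
    sleep 1
  done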
Apr 5 20:40:55.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:40:55.088: INFO: namespace: e2e-tests-net-isolation2-vrndl, resource: bindings, ignored listing per whitelist Apr 5 20:40:55.130: INFO: namespace e2e-tests-net-isolation2-vrndl deletion completed in 6.1099159s • [SLOW TEST:22.841 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41 ------------------------------ SS ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:40:55.130: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 Apr 5 20:40:55.246: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:40:59.260: INFO: Target pod IP:port is 10.128.0.47:8080 Apr 5 20:40:59.273: INFO: Endpoint e2e-tests-net-services1-hfr66/service-xzsdw is not ready yet Apr 5 20:41:04.277: INFO: Target service IP:port is 172.30.44.56:8080 Apr 5 20:41:04.277: INFO: Creating an exec pod on node nettest-node-2 Apr 5 20:41:04.277: INFO: Creating new exec pod Apr 5 20:41:08.290: INFO: Waiting up to 10s to wget 172.30.44.56:8080 Apr 5 20:41:08.290: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services1-hfr66 execpod-sourceip-nettest-node-2j24b8 -- /bin/sh -c wget -T 30 -qO- 172.30.44.56:8080' Apr 5 20:41:08.581: INFO: stderr: "" Apr 5 20:41:08.581: INFO: Cleaning up the exec pod [AfterEach] basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:41:08.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-net-services1-hfr66" for this suite. Apr 5 20:41:14.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:41:14.687: INFO: namespace: e2e-tests-net-services1-hfr66, resource: bindings, ignored listing per whitelist Apr 5 20:41:14.727: INFO: namespace e2e-tests-net-services1-hfr66 deletion completed in 6.117953463s • [SLOW TEST:19.598 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 ------------------------------ SSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:41:14.728: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:41:14.808: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 Apr 5 20:41:14.945: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:41:18.963: INFO: Target pod IP:port is 10.128.0.48:8080 Apr 5 20:41:18.975: INFO: Endpoint e2e-tests-net-services1-7gnmx/service-t54sb is not ready yet Apr 5 20:41:23.979: INFO: Target service IP:port is 172.30.69.91:8080 Apr 5 20:41:23.979: INFO: Creating an exec pod on node 
nettest-node-2 Apr 5 20:41:23.979: INFO: Creating new exec pod Apr 5 20:41:29.992: INFO: Waiting up to 10s to wget 172.30.69.91:8080 Apr 5 20:41:29.992: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-hc7vq execpod-sourceip-nettest-node-28p5fw -- /bin/sh -c wget -T 30 -qO- 172.30.69.91:8080' Apr 5 20:41:30.279: INFO: stderr: "" Apr 5 20:41:30.279: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:41:30.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-7gnmx" for this suite. Apr 5 20:41:36.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:41:36.414: INFO: namespace: e2e-tests-net-services1-7gnmx, resource: bindings, ignored listing per whitelist Apr 5 20:41:36.429: INFO: namespace e2e-tests-net-services1-7gnmx deletion completed in 6.117880913s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:41:36.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-hc7vq" for this suite. Apr 5 20:41:42.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:41:42.493: INFO: namespace: e2e-tests-net-services2-hc7vq, resource: bindings, ignored listing per whitelist Apr 5 20:41:42.540: INFO: namespace e2e-tests-net-services2-hc7vq deletion completed in 6.110033325s • [SLOW TEST:27.813 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 ------------------------------ SSSSSS ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that 
isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:41:42.541: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:41:42.627: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 Apr 5 20:41:42.750: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:41:46.771: INFO: Target pod IP:port is 10.128.0.49:8080 Apr 5 20:41:46.788: INFO: Endpoint e2e-tests-net-services1-zh6n5/service-jj4bs is not ready yet Apr 5 20:41:51.791: INFO: Target service IP:port is 172.30.218.183:8080 Apr 5 20:41:51.791: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:41:51.791: INFO: Creating new exec pod Apr 5 20:41:55.802: INFO: Waiting up to 10s to wget 172.30.218.183:8080 Apr 5 20:41:55.802: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-vc2l9 execpod-sourceip-nettest-node-1rpj5k -- /bin/sh -c wget -T 30 -qO- 172.30.218.183:8080' Apr 5 20:41:56.097: INFO: stderr: "" Apr 5 20:41:56.097: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:41:56.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-zh6n5" for this suite. Apr 5 20:42:02.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:42:02.229: INFO: namespace: e2e-tests-net-services1-zh6n5, resource: bindings, ignored listing per whitelist Apr 5 20:42:02.248: INFO: namespace e2e-tests-net-services1-zh6n5 deletion completed in 6.120021281s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:42:02.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-vc2l9" for this suite. 
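The upstream "Granular Checks: Pods" test that follows verifies intra-pod HTTP connectivity by asking one test pod's /dial endpoint to reach another test pod. The suite issues that request through its in-process exec helper; expressed as a roughly equivalent kubectl exec (a hypothetical but functionally similar form, using the pod IPs, pod name and container name captured later in this log), it amounts to:

  # From the hostexec container, ask the pod serving /dial at 10.128.0.52 to
  # dial 10.129.0.9:8080 over HTTP and report the hostname that answered.
  kubectl --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig \
    exec --namespace=e2e-tests-pod-network-test-d22gk host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s 'http://10.128.0.52:8080/dial?request=hostName&protocol=http&host=10.129.0.9&port=8080&tries=1'"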
Apr 5 20:42:08.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:42:08.328: INFO: namespace: e2e-tests-net-services2-vc2l9, resource: bindings, ignored listing per whitelist Apr 5 20:42:08.360: INFO: namespace e2e-tests-net-services2-vc2l9 deletion completed in 6.110403508s • [SLOW TEST:25.820 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:42:08.360: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:42:08.448: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-d22gk STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:42:08.483: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:42:34.535: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.52:8080/dial?request=hostName&protocol=http&host=10.129.0.9&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-d22gk PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:42:34.535: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig Apr 5 20:42:34.627: INFO: Waiting for endpoints: map[] Apr 5 20:42:34.629: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.128.0.52:8080/dial?request=hostName&protocol=http&host=10.128.0.51&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-d22gk PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 
20:42:34.629: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig Apr 5 20:42:34.711: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:42:34.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-d22gk" for this suite. Apr 5 20:42:56.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:42:56.812: INFO: namespace: e2e-tests-pod-network-test-d22gk, resource: bindings, ignored listing per whitelist Apr 5 20:42:56.826: INFO: namespace e2e-tests-pod-network-test-d22gk deletion completed in 22.112122492s • [SLOW TEST:48.465 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:42:56.826: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:42:56.911: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 5 20:42:56.958: INFO: Waiting up to 5m0s for pod "pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974" in namespace "e2e-tests-emptydir-8dxlm" to be "success or failure" Apr 5 20:42:56.960: INFO: Pod "pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084554ms Apr 5 20:42:58.962: INFO: Pod "pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004425867s Apr 5 20:43:00.964: INFO: Pod "pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006755054s STEP: Saw pod success Apr 5 20:43:00.964: INFO: Pod "pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974" satisfied condition "success or failure" Apr 5 20:43:00.966: INFO: Trying to get logs from node nettest-node-1 pod pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974 container test-container: <nil> STEP: delete the pod Apr 5 20:43:00.987: INFO: Waiting for pod pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974 to disappear Apr 5 20:43:00.992: INFO: Pod pod-ebe26a8c-3911-11e8-a5ec-0e3b9f19c974 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:00.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8dxlm" for this suite. Apr 5 20:43:07.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:43:07.083: INFO: namespace: e2e-tests-emptydir-8dxlm, resource: bindings, ignored listing per whitelist Apr 5 20:43:07.111: INFO: namespace e2e-tests-emptydir-8dxlm deletion completed in 6.117590734s • [SLOW TEST:10.285 seconds] [sig-storage] EmptyDir volumes /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ S ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 Apr 5 20:43:07.111: INFO: This plugin isolates namespaces by default. 
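The [sig-storage] EmptyDir case above, (root,0644,tmpfs), creates a pod that writes a 0644 file onto a memory-backed emptyDir and waits for the pod to succeed. A rough stand-alone equivalent, assuming kubectl against a running cluster and using busybox as an illustrative substitute for the test's own image:

# Pod that mounts a tmpfs-backed emptyDir, writes a file with mode 0644, and prints the mode.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # tmpfs-backed volume
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && stat -c '%a' /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
EOF
kubectl logs -f emptydir-0644-tmpfs   # expect "644" once the pod completes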
[AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:07.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:07.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15 Apr 5 20:43:07.111: This plugin isolates namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SS ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:43:07.113: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:43:07.200: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 Apr 5 20:43:07.322: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:43:11.337: INFO: Target pod IP:port is 10.128.0.54:8080 Apr 5 20:43:11.337: INFO: Creating an exec pod on node nettest-node-1 Apr 5 20:43:11.337: INFO: Creating new exec pod Apr 5 20:43:15.349: INFO: Waiting up to 10s to wget 10.128.0.54:8080 Apr 5 20:43:15.349: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-xwt8b execpod-sourceip-nettest-node-1kgpr4 -- /bin/sh -c wget -T 30 -qO- 10.128.0.54:8080' Apr 5 20:43:15.642: INFO: stderr: "" Apr 5 20:43:15.642: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:15.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation1-js957" for this suite. Apr 5 20:43:21.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:43:21.742: INFO: namespace: e2e-tests-net-isolation1-js957, resource: bindings, ignored listing per whitelist Apr 5 20:43:21.765: INFO: namespace e2e-tests-net-isolation1-js957 deletion completed in 6.111471825s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:21.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-isolation2-xwt8b" for this suite. 
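The isolation case above passes because, under the multitenant plugin, the default project is assigned the global VNID 0, so pods in other namespaces can reach it even though ordinary namespaces are isolated from one another. On a 3.x cluster the per-namespace VNIDs and the join/global operations can be inspected with oc adm pod-network; a hedged sketch, with project names as placeholders:

# Inspect multitenant VNIDs and (optionally) relax isolation between projects.
# Assumes cluster-admin credentials on an OpenShift 3.x cluster running openshift-ovs-multitenant.
oc get netnamespaces                                         # NETID 0 on "default" marks the global VNID
oc adm pod-network join-projects --to=project-a project-b    # give both projects the same VNID
oc adm pod-network make-projects-global shared-services      # make a project reachable from all VNIDs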
Apr 5 20:43:27.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:43:27.862: INFO: namespace: e2e-tests-net-isolation2-xwt8b, resource: bindings, ignored listing per whitelist Apr 5 20:43:27.876: INFO: namespace e2e-tests-net-isolation2-xwt8b deletion completed in 6.108872241s • [SLOW TEST:20.763 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:43:27.876: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:43:27.940: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r78bm STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 5 20:43:27.993: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 5 20:43:52.046: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.129.0.10 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r78bm PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:43:52.046: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig Apr 5 20:43:53.125: INFO: Found all expected endpoints: [netserver-0] Apr 5 20:43:53.127: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.128.0.56 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r78bm PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 5 20:43:53.127: INFO: >>> kubeConfig: 
/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig Apr 5 20:43:54.207: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:43:54.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-r78bm" for this suite. Apr 5 20:44:16.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:44:16.260: INFO: namespace: e2e-tests-pod-network-test-r78bm, resource: bindings, ignored listing per whitelist Apr 5 20:44:16.326: INFO: namespace e2e-tests-pod-network-test-r78bm deletion completed in 22.115135387s • [SLOW TEST:48.450 seconds] [sig-network] Networking /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648 ------------------------------ SSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:44:16.326: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:44:16.346: INFO: configPath is now "/tmp/extended-test-unprivileged-router-z926g-n6lfh-user.kubeconfig" Apr 5 20:44:16.346: INFO: The user is now "extended-test-unprivileged-router-z926g-n6lfh-user" Apr 5 20:44:16.346: INFO: Creating project "extended-test-unprivileged-router-z926g-n6lfh" Apr 5 20:44:16.413: INFO: Waiting on permissions in project "extended-test-unprivileged-router-z926g-n6lfh" ... 
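The node-pod UDP check above sends the string hostName to each netserver pod over UDP from the host-network test pod and expects the pod's hostname back. The same probe can be issued by hand; the namespace, pod names, and IP below are the ones from this run and are placeholders elsewhere (the timeout -t syntax is busybox's, as used by the hostexec image):

# Send a UDP probe to a netserver pod from the host-network helper pod.
kubectl --kubeconfig=/path/to/admin.kubeconfig exec --namespace=e2e-tests-pod-network-test-r78bm \
  host-test-container-pod -c hostexec -- \
  /bin/sh -c 'echo hostName | timeout -t 2 nc -w 1 -u 10.129.0.10 8081 | grep -v "^\s*$"'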
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:25 Apr 5 20:44:16.439: INFO: Running 'oc new-app --config=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-unprivileged-router-z926g-n6lfh -f /tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml -p=IMAGE=openshift/origin-haproxy-router -p=SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]' warning: --param no longer accepts comma-separated lists of values. "SCOPE=[\"--name=test-unprivileged\", \"--namespace=$(POD_NAMESPACE)\", \"--loglevel=4\", \"--labels=select=first\", \"--update-status=false\"]" will be treated as a single key-value pair. --> Deploying template "extended-test-unprivileged-router-z926g-n6lfh/" for "/tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml" to project extended-test-unprivileged-router-z926g-n6lfh * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should run even if it has no access to update status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:44:16.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-unprivileged-router-z926g-n6lfh" for this suite. 
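The unprivileged-router setup above instantiates the scoped-router.yaml template with oc new-app, passing SCOPE as a single JSON-array-valued parameter; the warning in the log is benign and only notes that each -p now carries exactly one key=value pair. A trimmed-down form of that invocation, with the kubeconfig path and namespace as placeholders:

# Instantiate the scoped-router template; each -p supplies one parameter, even if its value contains commas.
oc new-app --config=/path/to/admin.kubeconfig \
  --namespace=my-router-test \
  -f test/extended/testdata/scoped-router.yaml \
  -p IMAGE=openshift/origin-haproxy-router \
  -p 'SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--update-status=false"]'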
Apr 5 20:44:34.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:44:34.939: INFO: namespace: extended-test-unprivileged-router-z926g-n6lfh, resource: bindings, ignored listing per whitelist Apr 5 20:44:34.946: INFO: namespace extended-test-unprivileged-router-z926g-n6lfh deletion completed in 18.112803125s S [SKIPPING] [18.620 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:18 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:37 should run even if it has no access to update status [Suite:openshift/conformance/parallel] [It] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:38 test temporarily disabled /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:44 ------------------------------ SSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:44:34.946: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:44:34.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 Apr 5 20:44:34.946: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:44:34.948: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:44:34.967: INFO: configPath is now "/tmp/extended-test-router-metrics-f9vcs-h9qt4-user.kubeconfig" Apr 5 20:44:34.967: INFO: The user is now "extended-test-router-metrics-f9vcs-h9qt4-user" Apr 5 20:44:34.967: INFO: Creating project "extended-test-router-metrics-f9vcs-h9qt4" Apr 5 20:44:35.040: INFO: Waiting on permissions in project "extended-test-router-metrics-f9vcs-h9qt4" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:44:35.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-f9vcs-h9qt4" for this suite. 
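The NamespaceSelector policy spec above is skipped because the multitenant plugin does not implement NetworkPolicy. On a cluster whose plugin does (for example redhat/openshift-ovs-networkpolicy), the kind of policy that test exercises looks roughly like this; the namespace and label names are illustrative:

# Allow ingress only from pods in namespaces labeled team=frontend.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend-namespaces
  namespace: my-app
spec:
  podSelector: {}                  # applies to every pod in my-app
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: frontend
EOF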
Apr 5 20:44:41.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:44:41.137: INFO: namespace: extended-test-router-metrics-f9vcs-h9qt4, resource: bindings, ignored listing per whitelist Apr 5 20:44:41.193: INFO: namespace extended-test-router-metrics-f9vcs-h9qt4 deletion completed in 6.112655749s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.245 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose prometheus metrics for a route [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSSS ------------------------------ [Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:404 Apr 5 20:44:41.193: INFO: This plugin isolates namespaces by default. 
[AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:44:41.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:44:41.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] network isolation /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:403 should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 Apr 5 20:44:41.193: This plugin isolates namespaces by default. /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSS ------------------------------ [Area:Networking] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:441 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:44:41.195: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:44:41.218: INFO: configPath is now "/tmp/extended-test-multicast-bfjdr-th6nm-user.kubeconfig" Apr 5 20:44:41.218: INFO: The user is now "extended-test-multicast-bfjdr-th6nm-user" Apr 5 20:44:41.218: INFO: Creating project "extended-test-multicast-bfjdr-th6nm" Apr 5 20:44:41.294: INFO: Waiting on permissions in project "extended-test-multicast-bfjdr-th6nm" ... 
STEP: Waiting for a default service account to be provisioned in namespace [It] should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 Apr 5 20:44:41.424: INFO: Using nettest-node-1 and nettest-node-2 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:44:41.435: INFO: Waiting up to 5m0s for pod multicast-0 status to be running Apr 5 20:44:41.438: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (2.867005ms elapsed) Apr 5 20:44:46.440: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (5.005393223s elapsed) Apr 5 20:44:51.443: INFO: Waiting for pod multicast-0 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (10.0080353s elapsed) Apr 5 20:44:56.450: INFO: Waiting up to 5m0s for pod multicast-1 status to be running Apr 5 20:44:56.452: INFO: Waiting for pod multicast-1 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (1.862185ms elapsed) Apr 5 20:45:01.459: INFO: Waiting up to 5m0s for pod multicast-2 status to be running Apr 5 20:45:01.461: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (1.738028ms elapsed) Apr 5 20:45:06.465: INFO: Waiting for pod multicast-2 in namespace 'extended-test-multicast-bfjdr-th6nm' status to be 'running'(found phase: "Pending", readiness: false) (5.005958938s elapsed) Apr 5 20:45:11.467: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-bfjdr-th6nm-user.kubeconfig --namespace=extended-test-multicast-bfjdr-th6nm multicast-2 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14' Apr 5 20:45:11.467: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-bfjdr-th6nm-user.kubeconfig --namespace=extended-test-multicast-bfjdr-th6nm multicast-0 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14' Apr 5 20:45:11.467: INFO: Running 'oc exec --config=/tmp/extended-test-multicast-bfjdr-th6nm-user.kubeconfig --namespace=extended-test-multicast-bfjdr-th6nm multicast-1 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14' [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:18.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-multicast-bfjdr-th6nm" for this suite. 
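The multicast case above starts three pods, enables multicast for the namespace, and has each pod run omping against the other two, expecting no multicast loss. On the 3.x SDN multicast stays off until the namespace's NetNamespace is annotated; a hedged sketch of the manual equivalent, with the project, pod names, and pod IPs as placeholders from this run:

# Enable multicast for a namespace on the OpenShift 3.x SDN, then verify with omping between pods.
# Assumes cluster-admin access and three running pods (mc-0, mc-1, mc-2) whose image includes omping.
oc annotate netnamespace my-project netnamespace.network.openshift.io/multicast-enabled=true
oc exec -n my-project mc-0 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14 &
oc exec -n my-project mc-1 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14 &
oc exec -n my-project mc-2 -- omping -c 1 -T 60 -q -q 10.128.0.58 10.128.0.59 10.129.0.14 &
wait   # each omping instance should report 0% multicast loss from its peers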
Apr 5 20:45:24.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:45:24.965: INFO: namespace: extended-test-multicast-bfjdr-th6nm, resource: bindings, ignored listing per whitelist Apr 5 20:45:25.005: INFO: namespace extended-test-multicast-bfjdr-th6nm deletion completed in 6.118261561s • [SLOW TEST:43.811 seconds] [Area:Networking] multicast /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:439 should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:45:25.006: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:25.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245 Apr 5 20:45:25.006: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:45:25.008: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:45:25.027: INFO: configPath is now "/tmp/extended-test-router-metrics-f9vcs-6rzpn-user.kubeconfig" Apr 5 20:45:25.027: INFO: The user is now "extended-test-router-metrics-f9vcs-6rzpn-user" Apr 5 20:45:25.027: INFO: Creating project "extended-test-router-metrics-f9vcs-6rzpn" Apr 5 20:45:25.078: INFO: Waiting on permissions in project "extended-test-router-metrics-f9vcs-6rzpn" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:25.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-f9vcs-6rzpn" for this suite. 
Apr 5 20:45:31.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:45:31.201: INFO: namespace: extended-test-router-metrics-f9vcs-6rzpn, resource: bindings, ignored listing per whitelist Apr 5 20:45:31.239: INFO: namespace extended-test-router-metrics-f9vcs-6rzpn deletion completed in 6.117296323s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.231 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose the profiling endpoints [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:45:31.239: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:45:31.257: INFO: configPath is now "/tmp/openshift-extended-tests/extended-test-scoped-router-t54ld-k7vk7-user.kubeconfig" Apr 5 20:45:31.257: INFO: The user is now "extended-test-scoped-router-t54ld-k7vk7-user" Apr 5 20:45:31.257: INFO: Creating project "extended-test-scoped-router-t54ld-k7vk7" Apr 5 20:45:31.323: INFO: Waiting on permissions in project "extended-test-scoped-router-t54ld-k7vk7" ... 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:32 Apr 5 20:45:31.341: INFO: Running 'oc new-app --config=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-t54ld-k7vk7 -f /tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "extended-test-scoped-router-t54ld-k7vk7/" for "/tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-t54ld-k7vk7 * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... pod "scoped-router" created pod "router-override" created rolebinding "system-router" created route "route-1" created route "route-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Run 'oc status' to view your app. [It] should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 Apr 5 20:45:31.682: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir426060294/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Apr 5 20:45:34.701: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-k7vk7 execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.129.0.15' "http://10.129.0.15:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:45:35.128: INFO: stderr: "" STEP: waiting for the valid route to respond Apr 5 20:45:35.129: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-k7vk7 execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-extended-test-scoped-router-t54ld-k7vk7.myapps.mycompany.com' "http://10.129.0.15/Letter" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Apr 5 20:45:39.496: INFO: stderr: "" STEP: checking that the stored domain name does not match a route Apr 5 20:45:39.496: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-k7vk7 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://10.129.0.15/Letter"' Apr 5 20:45:39.788: INFO: stderr: "" STEP: checking that route-1-extended-test-scoped-router-t54ld-k7vk7.myapps.mycompany.com matches a route Apr 5 20:45:39.788: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-k7vk7 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-extended-test-scoped-router-t54ld-k7vk7.myapps.mycompany.com' "http://10.129.0.15/Letter"' Apr 5 20:45:40.078: INFO: stderr: "" STEP: checking that route-2-extended-test-scoped-router-t54ld-k7vk7.myapps.mycompany.com matches a route Apr 5 20:45:40.078: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-t54ld-k7vk7 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-extended-test-scoped-router-t54ld-k7vk7.myapps.mycompany.com' "http://10.129.0.15/Letter"' Apr 5 20:45:40.363: INFO: stderr: "" STEP: checking that the router reported the correct ingress and override [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:40.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-scoped-router-t54ld-k7vk7" for this suite. 
Apr 5 20:45:50.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:45:50.469: INFO: namespace: extended-test-scoped-router-t54ld-k7vk7, resource: bindings, ignored listing per whitelist Apr 5 20:45:50.490: INFO: namespace extended-test-scoped-router-t54ld-k7vk7 deletion completed in 10.113087458s • [SLOW TEST:19.252 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:25 The HAProxy router /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:41 should override the route host with a custom value [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:91 ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:45:50.491: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:45:50.508: INFO: configPath is now "/tmp/extended-test-router-stress-6kw5d-42jmd-user.kubeconfig" Apr 5 20:45:50.508: INFO: The user is now "extended-test-router-stress-6kw5d-42jmd-user" Apr 5 20:45:50.508: INFO: Creating project "extended-test-router-stress-6kw5d-42jmd" Apr 5 20:45:50.579: INFO: Waiting on permissions in project "extended-test-router-stress-6kw5d-42jmd" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:50.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-6kw5d-42jmd" for this suite. 
Apr 5 20:45:56.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:45:56.701: INFO: namespace: extended-test-router-stress-6kw5d-42jmd, resource: bindings, ignored listing per whitelist Apr 5 20:45:56.725: INFO: namespace extended-test-router-stress-6kw5d-42jmd deletion completed in 6.120710167s S [SKIPPING] in Spec Setup (BeforeEach) [6.234 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ SSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:45:56.725: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:45:56.743: INFO: configPath is now "/tmp/extended-test-router-stress-hgqpr-pzk8w-user.kubeconfig" Apr 5 20:45:56.743: INFO: The user is now "extended-test-router-stress-hgqpr-pzk8w-user" Apr 5 20:45:56.743: INFO: Creating project "extended-test-router-stress-hgqpr-pzk8w" Apr 5 20:45:56.779: INFO: Waiting on permissions in project "extended-test-router-stress-hgqpr-pzk8w" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:45:56.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-hgqpr-pzk8w" for this suite. 
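The "routes that were created from an ingress" spec above relies on OpenShift translating Kubernetes Ingress objects into Routes before the router can serve them; it is skipped here because no router is installed. For reference, a minimal Ingress of that era (extensions/v1beta1, which this Kubernetes level still serves) would look roughly like this; the object, service, and host names are illustrative:

# Minimal Ingress that the ingress-to-route translation can turn into a Route.
oc apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: 8080
EOF
oc get routes    # a generated route for hello.example.com should appear once translation runs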
Apr 5 20:46:02.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:46:02.964: INFO: namespace: extended-test-router-stress-hgqpr-pzk8w, resource: bindings, ignored listing per whitelist Apr 5 20:46:03.023: INFO: namespace extended-test-router-stress-hgqpr-pzk8w deletion completed in 6.109703897s S [SKIPPING] in Spec Setup (BeforeEach) [6.298 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ SSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:46:03.023: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:46:03.041: INFO: configPath is now "/tmp/extended-test-router-metrics-f9vcs-tzp8r-user.kubeconfig" Apr 5 20:46:03.041: INFO: The user is now "extended-test-router-metrics-f9vcs-tzp8r-user" Apr 5 20:46:03.041: INFO: Creating project "extended-test-router-metrics-f9vcs-tzp8r" Apr 5 20:46:03.135: INFO: Waiting on permissions in project "extended-test-router-metrics-f9vcs-tzp8r" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:46:03.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-metrics-f9vcs-tzp8r" for this suite. 
Apr 5 20:46:09.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:46:09.210: INFO: namespace: extended-test-router-metrics-f9vcs-tzp8r, resource: bindings, ignored listing per whitelist Apr 5 20:46:09.265: INFO: namespace extended-test-router-metrics-f9vcs-tzp8r deletion completed in 6.111071469s [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76 S [SKIPPING] in Spec Setup (BeforeEach) [6.242 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82 should expose a health check on the metrics port [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39 ------------------------------ SSSSS ------------------------------ NetworkPolicy when using a plugin that implements NetworkPolicy should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:428 Apr 5 20:46:09.265: INFO: This plugin does not implement NetworkPolicy. [AfterEach] when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:46:09.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:427 should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 Apr 5 20:46:09.265: This plugin does not implement NetworkPolicy. 
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:289 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:46:09.266: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:46:09.284: INFO: configPath is now "/tmp/extended-test-router-stress-6kw5d-jd9bk-user.kubeconfig" Apr 5 20:46:09.284: INFO: The user is now "extended-test-router-stress-6kw5d-jd9bk-user" Apr 5 20:46:09.284: INFO: Creating project "extended-test-router-stress-6kw5d-jd9bk" Apr 5 20:46:09.346: INFO: Waiting on permissions in project "extended-test-router-stress-6kw5d-jd9bk" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:46:09.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-6kw5d-jd9bk" for this suite. 
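The NetworkPolicy spec above is skipped for a different reason: this suite runs against the multitenant SDN plugin (the kubeconfig lives under /tmp/openshift/networking/multitenant/), and that plugin does not implement NetworkPolicy, as util.go:428 reports. A minimal sketch of checking which plugin a 3.x cluster is configured with, assuming the openshift-sdn ClusterNetwork resource exposes it as pluginName (field name and example values are assumptions, not taken from this log):

  # Conventional values: redhat/openshift-ovs-subnet, redhat/openshift-ovs-multitenant,
  # redhat/openshift-ovs-networkpolicy; only the last implements NetworkPolicy.
  oc get clusternetwork default -o jsonpath='{.pluginName}{"\n"}'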
Apr 5 20:46:15.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:46:15.460: INFO: namespace: extended-test-router-stress-6kw5d-jd9bk, resource: bindings, ignored listing per whitelist Apr 5 20:46:15.491: INFO: namespace extended-test-router-stress-6kw5d-jd9bk deletion completed in 6.113885866s S [SKIPPING] in Spec Setup (BeforeEach) [6.225 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ [Conformance][Area:Networking][Feature:Router] The HAProxy router should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:46:15.491: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Apr 5 20:46:15.510: INFO: configPath is now "/tmp/extended-test-router-stress-hgqpr-cffjx-user.kubeconfig" Apr 5 20:46:15.510: INFO: The user is now "extended-test-router-stress-hgqpr-cffjx-user" Apr 5 20:46:15.510: INFO: Creating project "extended-test-router-stress-hgqpr-cffjx" Apr 5 20:46:15.596: INFO: Waiting on permissions in project "extended-test-router-stress-hgqpr-cffjx" ... STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32 [AfterEach] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:46:15.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-stress-hgqpr-cffjx" for this suite. 
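Each spec follows the same fixture pattern visible above: the extended test framework writes a throwaway kubeconfig, switches to a generated user, creates a dedicated project, waits for the default service account, runs or skips the spec, and then destroys the namespace (the roughly six-second "deletion completed" lines). A rough manual equivalent, using a placeholder project name that does not appear in this log:

  # Approximate the per-spec project lifecycle used by the extended tests.
  oc new-project demo-spec-project              # placeholder name
  oc get sa default -n demo-spec-project        # provisioned automatically after project creation
  # ... exercise the workload under test here ...
  oc delete project demo-spec-project           # mirrors the "Destroying namespace" step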
Apr 5 20:46:21.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:46:21.723: INFO: namespace: extended-test-router-stress-hgqpr-cffjx, resource: bindings, ignored listing per whitelist Apr 5 20:46:21.734: INFO: namespace extended-test-router-stress-hgqpr-cffjx deletion completed in 6.116206297s S [SKIPPING] in Spec Setup (BeforeEach) [6.243 seconds] [Conformance][Area:Networking][Feature:Router] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21 The HAProxy router [BeforeEach] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68 should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69 no router installed on the cluster /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48 ------------------------------ [Area:Networking] services when using a plugin that isolates namespaces by default should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:46:21.734: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:46:21.853: INFO: >>> kubeConfig: /tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [It] should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 Apr 5 20:46:22.047: INFO: Using nettest-node-1 for test ([nettest-node-1 nettest-node-2] out of [nettest-node-1 nettest-node-2]) Apr 5 20:46:26.060: INFO: Target pod IP:port is 10.128.0.62:8080 Apr 5 20:46:26.074: INFO: Endpoint e2e-tests-net-services1-pmlk5/service-mtrjj is not ready yet Apr 5 20:46:31.078: INFO: Target service IP:port is 172.30.7.22:8080 Apr 5 20:46:31.078: INFO: 
Creating an exec pod on node nettest-node-1 Apr 5 20:46:31.078: INFO: Creating new exec pod Apr 5 20:46:35.094: INFO: Waiting up to 10s to wget 172.30.7.22:8080 Apr 5 20:46:35.094: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-gzdwh execpod-sourceip-nettest-node-1946td -- /bin/sh -c wget -T 30 -qO- 172.30.7.22:8080' Apr 5 20:47:05.367: INFO: rc: 127 Apr 5 20:47:05.368: INFO: got err: error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-gzdwh execpod-sourceip-nettest-node-1946td -- /bin/sh -c wget -T 30 -qO- 172.30.7.22:8080] [] <nil> wget: download timed out command terminated with exit code 1 [] <nil> 0xc420d9cc30 exit status 1 <nil> <nil> true [0xc421796130 0xc421796148 0xc421796160] [0xc421796130 0xc421796148 0xc421796160] [0xc421796140 0xc421796158] [0x989690 0x989690] 0xc421c326c0 <nil>}: Command stdout: stderr: wget: download timed out command terminated with exit code 1 error: exit status 1 , retry until timeout Apr 5 20:47:05.368: INFO: Creating new exec pod Apr 5 20:47:13.381: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-gzdwh debugpod-sourceip-nettest-node-16qbrg -- /bin/sh -c ovs-ofctl -O OpenFlow13 dump-flows br0' Apr 5 20:47:13.684: INFO: stderr: "" Apr 5 20:47:13.684: INFO: DEBUG: OFPST_FLOW reply (OF1.3) (xid=0x2): cookie=0x0, duration=762.133s, table=0, n_packets=0, n_bytes=0, priority=250,ip,in_port=2,nw_dst=224.0.0.0/4 actions=drop cookie=0x0, duration=762.150s, table=0, n_packets=16, n_bytes=672, priority=200,arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.128.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=762.146s, table=0, n_packets=582, n_bytes=67551, priority=200,ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=762.142s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_dst=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10 cookie=0x0, duration=762.127s, table=0, n_packets=18, n_bytes=756, priority=200,arp,in_port=2,arp_spa=10.128.0.1,arp_tpa=10.128.0.0/14 actions=goto_table:30 cookie=0x0, duration=762.121s, table=0, n_packets=186, n_bytes=52052, priority=200,ip,in_port=2 actions=goto_table:30 cookie=0x0, duration=762.137s, table=0, n_packets=0, n_bytes=0, priority=150,in_port=1 actions=drop cookie=0x0, duration=762.109s, table=0, n_packets=8, n_bytes=648, priority=150,in_port=2 actions=drop cookie=0x0, duration=762.100s, table=0, n_packets=40, n_bytes=1680, priority=100,arp actions=goto_table:20 cookie=0x0, duration=762.094s, table=0, n_packets=613, n_bytes=77973, priority=100,ip actions=goto_table:20 cookie=0x0, duration=762.087s, table=0, n_packets=344, n_bytes=28128, priority=0 actions=drop cookie=0xfaa865db, duration=761.720s, table=10, n_packets=598, n_bytes=68223, priority=100,tun_src=172.17.0.4 actions=goto_table:30 cookie=0xfb5876b8, duration=761.700s, table=10, 
n_packets=0, n_bytes=0, priority=100,tun_src=172.17.0.2 actions=goto_table:30 cookie=0x0, duration=762.079s, table=10, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=50.448s, table=20, n_packets=0, n_bytes=0, priority=100,arp,in_port=63,arp_spa=10.128.0.62,arp_sha=00:00:0a:80:00:3e/00:00:ff:ff:ff:ff actions=load:0x29803e->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=42.063s, table=20, n_packets=1, n_bytes=42, priority=100,arp,in_port=64,arp_spa=10.128.0.63,arp_sha=00:00:0a:80:00:3f/00:00:ff:ff:ff:ff actions=load:0x96439b->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=50.445s, table=20, n_packets=0, n_bytes=0, priority=100,ip,in_port=63,nw_src=10.128.0.62 actions=load:0x29803e->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=42.058s, table=20, n_packets=5, n_bytes=370, priority=100,ip,in_port=64,nw_src=10.128.0.63 actions=load:0x96439b->NXM_NX_REG0[],goto_table:21 cookie=0x0, duration=762.071s, table=20, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=762.058s, table=21, n_packets=653, n_bytes=79653, priority=0 actions=goto_table:30 cookie=0x0, duration=762.049s, table=30, n_packets=12, n_bytes=504, priority=300,arp,arp_tpa=10.128.0.1 actions=output:2 cookie=0x0, duration=762.023s, table=30, n_packets=47, n_bytes=4369, priority=300,ip,nw_dst=10.128.0.1 actions=output:2 cookie=0x0, duration=762.040s, table=30, n_packets=46, n_bytes=1932, priority=200,arp,arp_tpa=10.128.0.0/23 actions=goto_table:40 cookie=0x0, duration=762.008s, table=30, n_packets=809, n_bytes=123811, priority=200,ip,nw_dst=10.128.0.0/23 actions=goto_table:70 cookie=0x0, duration=762.027s, table=30, n_packets=16, n_bytes=672, priority=100,arp,arp_tpa=10.128.0.0/14 actions=goto_table:50 cookie=0x0, duration=762.001s, table=30, n_packets=364, n_bytes=46550, priority=100,ip,nw_dst=10.128.0.0/14 actions=goto_table:90 cookie=0x0, duration=762.014s, table=30, n_packets=146, n_bytes=21694, priority=100,ip,nw_dst=172.30.0.0/16 actions=goto_table:60 cookie=0x0, duration=761.993s, table=30, n_packets=5, n_bytes=384, priority=50,ip,in_port=1,nw_dst=224.0.0.0/4 actions=goto_table:120 cookie=0x0, duration=761.985s, table=30, n_packets=10, n_bytes=768, priority=25,ip,nw_dst=224.0.0.0/4 actions=goto_table:110 cookie=0x0, duration=761.979s, table=30, n_packets=0, n_bytes=0, priority=0,ip actions=goto_table:100 cookie=0x0, duration=761.975s, table=30, n_packets=0, n_bytes=0, priority=0,arp actions=drop cookie=0x0, duration=50.441s, table=40, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.128.0.62 actions=output:63 cookie=0x0, duration=42.054s, table=40, n_packets=1, n_bytes=42, priority=100,arp,arp_tpa=10.128.0.63 actions=output:64 cookie=0x0, duration=761.970s, table=40, n_packets=6, n_bytes=252, priority=0 actions=drop cookie=0xfaa865db, duration=761.715s, table=50, n_packets=16, n_bytes=672, priority=100,arp,arp_tpa=10.129.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.4->tun_dst,output:1 cookie=0xfb5876b8, duration=761.696s, table=50, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.130.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1 cookie=0x0, duration=761.959s, table=50, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=761.951s, table=60, n_packets=5, n_bytes=419, priority=200,reg0=0 actions=output:2 cookie=0x0, duration=761.722s, table=60, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=172.30.0.1,nw_frag=later actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 
cookie=0x0, duration=47.597s, table=60, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=172.30.7.22,nw_frag=later actions=load:0x29803e->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=761.718s, table=60, n_packets=131, n_bytes=20485, priority=100,tcp,nw_dst=172.30.0.1,tp_dst=443 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=761.713s, table=60, n_packets=0, n_bytes=0, priority=100,udp,nw_dst=172.30.0.1,tp_dst=53 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=761.705s, table=60, n_packets=0, n_bytes=0, priority=100,tcp,nw_dst=172.30.0.1,tp_dst=53 actions=load:0->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=47.587s, table=60, n_packets=5, n_bytes=370, priority=100,tcp,nw_dst=172.30.7.22,tp_dst=8080 actions=load:0x29803e->NXM_NX_REG1[],load:0x2->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=761.940s, table=60, n_packets=4, n_bytes=346, priority=0 actions=drop cookie=0x0, duration=50.437s, table=70, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.128.0.62 actions=load:0x29803e->NXM_NX_REG1[],load:0x3f->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=42.051s, table=70, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.128.0.63 actions=load:0x96439b->NXM_NX_REG1[],load:0x40->NXM_NX_REG2[],goto_table:80 cookie=0x0, duration=761.934s, table=70, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=761.930s, table=80, n_packets=48, n_bytes=4207, priority=300,ip,nw_src=10.128.0.1 actions=output:NXM_NX_REG2[] cookie=0x0, duration=761.858s, table=80, n_packets=197, n_bytes=52941, priority=200,reg0=0 actions=output:NXM_NX_REG2[] cookie=0x0, duration=761.853s, table=80, n_packets=142, n_bytes=21452, priority=200,reg1=0 actions=output:NXM_NX_REG2[] cookie=0x0, duration=752.274s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x815ec7,reg1=0x815ec7 actions=output:NXM_NX_REG2[] cookie=0x0, duration=752.173s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x64c0eb,reg1=0x64c0eb actions=output:NXM_NX_REG2[] cookie=0x0, duration=531.567s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x672905,reg1=0x672905 actions=output:NXM_NX_REG2[] cookie=0x0, duration=531.464s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x7c1b53,reg1=0x7c1b53 actions=output:NXM_NX_REG2[] cookie=0x0, duration=508.790s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x99b8bd,reg1=0x99b8bd actions=output:NXM_NX_REG2[] cookie=0x0, duration=508.690s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x53f6d9,reg1=0x53f6d9 actions=output:NXM_NX_REG2[] cookie=0x0, duration=476.947s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x3fcd53,reg1=0x3fcd53 actions=output:NXM_NX_REG2[] cookie=0x0, duration=470.721s, table=80, n_packets=497, n_bytes=59170, priority=100,reg0=0x74e5be,reg1=0x74e5be actions=output:NXM_NX_REG2[] cookie=0x0, duration=454.088s, table=80, n_packets=8, n_bytes=939, priority=100,reg0=0x576047,reg1=0x576047 actions=output:NXM_NX_REG2[] cookie=0x0, duration=401.359s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xceaaf,reg1=0xceaaf actions=output:NXM_NX_REG2[] cookie=0x0, duration=401.272s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x608c4b,reg1=0x608c4b actions=output:NXM_NX_REG2[] cookie=0x0, duration=378.519s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xa6a87b,reg1=0xa6a87b actions=output:NXM_NX_REG2[] cookie=0x0, duration=358.910s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x1a123d,reg1=0x1a123d 
actions=output:NXM_NX_REG2[] cookie=0x0, duration=358.834s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xb56ceb,reg1=0xb56ceb actions=output:NXM_NX_REG2[] cookie=0x0, duration=331.102s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x50ae27,reg1=0x50ae27 actions=output:NXM_NX_REG2[] cookie=0x0, duration=331.006s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x6eb4eb,reg1=0x6eb4eb actions=output:NXM_NX_REG2[] cookie=0x0, duration=305.290s, table=80, n_packets=14, n_bytes=1309, priority=100,reg0=0x6d89c7,reg1=0x6d89c7 actions=output:NXM_NX_REG2[] cookie=0x0, duration=256.818s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xa0447d,reg1=0xa0447d actions=output:NXM_NX_REG2[] cookie=0x0, duration=246.525s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xf1bdb3,reg1=0xf1bdb3 actions=output:NXM_NX_REG2[] cookie=0x0, duration=246.432s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xee6f88,reg1=0xee6f88 actions=output:NXM_NX_REG2[] cookie=0x0, duration=225.769s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xd10031,reg1=0xd10031 actions=output:NXM_NX_REG2[] cookie=0x0, duration=177.292s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xc2138e,reg1=0xc2138e actions=output:NXM_NX_REG2[] cookie=0x0, duration=158.675s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x1ca9c5,reg1=0x1ca9c5 actions=output:NXM_NX_REG2[] cookie=0x0, duration=152.414s, table=80, n_packets=20, n_bytes=2096, priority=100,reg0=0x6b4fd3,reg1=0x6b4fd3 actions=output:NXM_NX_REG2[] cookie=0x0, duration=108.604s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0xfc881e,reg1=0xfc881e actions=output:NXM_NX_REG2[] cookie=0x0, duration=102.383s, table=80, n_packets=15, n_bytes=2256, priority=100,reg0=0x17488a,reg1=0x17488a actions=output:NXM_NX_REG2[] cookie=0x0, duration=83.137s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x5aa8fe,reg1=0x5aa8fe actions=output:NXM_NX_REG2[] cookie=0x0, duration=76.894s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x6648c9,reg1=0x6648c9 actions=output:NXM_NX_REG2[] cookie=0x0, duration=70.604s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x515a0c,reg1=0x515a0c actions=output:NXM_NX_REG2[] cookie=0x0, duration=64.348s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x33855f,reg1=0x33855f actions=output:NXM_NX_REG2[] cookie=0x0, duration=58.132s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x7dd785,reg1=0x7dd785 actions=output:NXM_NX_REG2[] cookie=0x0, duration=51.914s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x29803e,reg1=0x29803e actions=output:NXM_NX_REG2[] cookie=0x0, duration=51.762s, table=80, n_packets=0, n_bytes=0, priority=100,reg0=0x96439b,reg1=0x96439b actions=output:NXM_NX_REG2[] cookie=0x0, duration=761.926s, table=80, n_packets=5, n_bytes=370, priority=0 actions=drop cookie=0xfaa865db, duration=761.709s, table=90, n_packets=364, n_bytes=46550, priority=100,ip,nw_dst=10.129.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.4->tun_dst,output:1 cookie=0xfb5876b8, duration=761.693s, table=90, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.130.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1 cookie=0x0, duration=761.923s, table=90, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=761.919s, table=100, n_packets=0, n_bytes=0, priority=0 actions=goto_table:101 cookie=0x0, duration=761.910s, table=101, n_packets=0, n_bytes=0, priority=51,tcp,nw_dst=172.17.0.3,tp_dst=53 actions=output:2 cookie=0x0, 
duration=761.902s, table=101, n_packets=0, n_bytes=0, priority=51,udp,nw_dst=172.17.0.3,tp_dst=53 actions=output:2 cookie=0x0, duration=761.895s, table=101, n_packets=0, n_bytes=0, priority=0 actions=output:2 cookie=0x0, duration=761.888s, table=110, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=761.689s, table=111, n_packets=10, n_bytes=768, priority=100 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.17.0.2->tun_dst,output:1,set_field:172.17.0.4->tun_dst,output:1,goto_table:120 cookie=0x0, duration=761.878s, table=120, n_packets=0, n_bytes=0, priority=0 actions=drop cookie=0x0, duration=761.873s, table=253, n_packets=0, n_bytes=0, actions=note:01.07.00.00.00.00 Apr 5 20:47:13.684: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-gzdwh debugpod-sourceip-nettest-node-16qbrg -- /bin/sh -c iptables-save' Apr 5 20:47:13.981: INFO: stderr: "" Apr 5 20:47:13.981: INFO: DEBUG: # Generated by iptables-save v1.4.21 on Thu Apr 5 20:47:13 2018 *nat :PREROUTING ACCEPT [4:240] :INPUT ACCEPT [4:240] :OUTPUT ACCEPT [28:1824] :POSTROUTING ACCEPT [28:1824] :DOCKER - [0:0] :KUBE-HOSTPORTS - [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-NODEPORT-CONTAINER - [0:0] :KUBE-NODEPORT-HOST - [0:0] :KUBE-NODEPORTS - [0:0] :KUBE-PORTALS-CONTAINER - [0:0] :KUBE-PORTALS-HOST - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-SEP-77DLEKBM3D5CRC3D - [0:0] :KUBE-SEP-CKTKXEMIKRIIY55M - [0:0] :KUBE-SEP-EZ5ESXJRZ36JV4D4 - [0:0] :KUBE-SEP-PATXOTJBHFPU4CNS - [0:0] :KUBE-SERVICES - [0:0] :KUBE-SVC-2SDL4S5W77TQOHLU - [0:0] :KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0] :KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0] :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0] :OPENSHIFT-MASQUERADE - [0:0] -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER -A PREROUTING -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST -A OUTPUT -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS -A POSTROUTING -m comment --comment "rules for masquerading OpenShift traffic" -j OPENSHIFT-MASQUERADE -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING -A POSTROUTING -s 172.19.0.0/16 ! 
-o docker0 -j MASQUERADE -A POSTROUTING -s 127.0.0.0/8 -o tun0 -m comment --comment "SNAT for localhost access to hostports" -j MASQUERADE -A DOCKER -i docker0 -j RETURN -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x1/0x1 -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x1/0x1 -j MASQUERADE -A KUBE-SEP-77DLEKBM3D5CRC3D -s 10.128.0.62/32 -m comment --comment "e2e-tests-net-services1-pmlk5/service-mtrjj:" -j KUBE-MARK-MASQ -A KUBE-SEP-77DLEKBM3D5CRC3D -p tcp -m comment --comment "e2e-tests-net-services1-pmlk5/service-mtrjj:" -m tcp -j DNAT --to-destination 10.128.0.62:8080 -A KUBE-SEP-CKTKXEMIKRIIY55M -s 172.17.0.2/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ -A KUBE-SEP-CKTKXEMIKRIIY55M -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-CKTKXEMIKRIIY55M --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 172.17.0.2:8053 -A KUBE-SEP-EZ5ESXJRZ36JV4D4 -s 172.17.0.2/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ -A KUBE-SEP-EZ5ESXJRZ36JV4D4 -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-EZ5ESXJRZ36JV4D4 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.17.0.2:8053 -A KUBE-SEP-PATXOTJBHFPU4CNS -s 172.17.0.2/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ -A KUBE-SEP-PATXOTJBHFPU4CNS -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-PATXOTJBHFPU4CNS --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.17.0.2:8443 -A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y -A KUBE-SERVICES -d 172.30.0.1/32 -p udp -m comment --comment "default/kubernetes:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4 -A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56 -A KUBE-SERVICES -d 172.30.7.22/32 -p tcp -m comment --comment "e2e-tests-net-services1-pmlk5/service-mtrjj: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-2SDL4S5W77TQOHLU -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS -A KUBE-SVC-2SDL4S5W77TQOHLU -m comment --comment "e2e-tests-net-services1-pmlk5/service-mtrjj:" -j KUBE-SEP-77DLEKBM3D5CRC3D -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-CKTKXEMIKRIIY55M --mask 255.255.255.255 --rsource -j KUBE-SEP-CKTKXEMIKRIIY55M -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -j KUBE-SEP-CKTKXEMIKRIIY55M -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-EZ5ESXJRZ36JV4D4 --mask 255.255.255.255 --rsource -j KUBE-SEP-EZ5ESXJRZ36JV4D4 -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-SEP-EZ5ESXJRZ36JV4D4 -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-PATXOTJBHFPU4CNS --mask 255.255.255.255 --rsource -j KUBE-SEP-PATXOTJBHFPU4CNS -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-PATXOTJBHFPU4CNS -A OPENSHIFT-MASQUERADE -s 
10.128.0.0/14 -m comment --comment "masquerade pod-to-service and pod-to-external traffic" -j MASQUERADE COMMIT # Completed on Thu Apr 5 20:47:13 2018 # Generated by iptables-save v1.4.21 on Thu Apr 5 20:47:13 2018 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [5225:267934] :DOCKER - [0:0] :DOCKER-ISOLATION - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-FORWARD - [0:0] :KUBE-NODEPORT-NON-LOCAL - [0:0] :KUBE-SERVICES - [0:0] :OPENSHIFT-ADMIN-OUTPUT-RULES - [0:0] :OPENSHIFT-FIREWALL-ALLOW - [0:0] :OPENSHIFT-FIREWALL-FORWARD - [0:0] -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -m comment --comment "Ensure that non-local NodePort traffic can flow" -j KUBE-NODEPORT-NON-LOCAL -A INPUT -m comment --comment "firewall overrides" -j OPENSHIFT-FIREWALL-ALLOW -A INPUT -j KUBE-FIREWALL -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 1936 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -i tun0 ! -o tun0 -m comment --comment "administrator overrides" -j OPENSHIFT-ADMIN-OUTPUT-RULES -A FORWARD -m comment --comment "firewall overrides" -j OPENSHIFT-FIREWALL-FORWARD -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -j DOCKER -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x1/0x1 -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -p udp -m udp --dport 4789 -m comment --comment "VXLAN incoming" -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -i tun0 -m comment --comment "from SDN to localhost" -j ACCEPT -A OPENSHIFT-FIREWALL-ALLOW -i docker0 -m comment --comment "from docker to localhost" -j ACCEPT -A OPENSHIFT-FIREWALL-FORWARD -s 10.128.0.0/14 -m comment --comment "attempted resend after connection close" -m conntrack --ctstate INVALID -j DROP -A OPENSHIFT-FIREWALL-FORWARD -d 10.128.0.0/14 -m comment --comment "forward traffic from SDN" -j ACCEPT -A OPENSHIFT-FIREWALL-FORWARD -s 10.128.0.0/14 -m comment --comment "forward traffic to SDN" -j ACCEPT COMMIT # Completed on Thu Apr 5 20:47:13 2018 Apr 5 20:47:13.981: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.17.0.2:8443 --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-net-services2-gzdwh debugpod-sourceip-nettest-node-16qbrg -- /bin/sh -c ss -ant' Apr 5 20:47:14.288: INFO: stderr: "" Apr 5 20:47:14.288: INFO: DEBUG: State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* TIME-WAIT 0 0 172.17.0.3:39682 52.216.229.83:443 ESTAB 0 0 172.17.0.3:51576 172.17.0.2:8443 TIME-WAIT 0 0 172.17.0.3:39680 52.216.229.83:443 LISTEN 0 128 :::10256 :::* LISTEN 0 128 :::22 :::* LISTEN 0 128 :::10250 :::* ESTAB 0 0 ::ffff:172.17.0.3:10250 ::ffff:172.17.0.2:52316 ESTAB 0 0 ::ffff:172.17.0.3:10250 ::ffff:172.17.0.2:52686 Apr 5 20:47:14.293: INFO: Cleaning up the exec pod [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:47:14.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-pmlk5" for this suite. Apr 5 20:47:20.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:47:20.415: INFO: namespace: e2e-tests-net-services1-pmlk5, resource: bindings, ignored listing per whitelist Apr 5 20:47:20.439: INFO: namespace e2e-tests-net-services1-pmlk5 deletion completed in 6.115284974s [AfterEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135 Apr 5 20:47:20.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services2-gzdwh" for this suite. 
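The services isolation spec that just finished passes precisely because the wget timed out: under the multitenant plugin, the table=80 rules in the OVS dump forward traffic only when reg0 (source namespace VNID) matches reg1 (destination VNID), aside from the VNID 0 exceptions, and the table=80 priority=0 drop counter (n_packets=5, n_bytes=370) matches the five packets that hit the 172.30.7.22:8080 service rule in table=60, so the cross-namespace traffic was dropped as intended. The probe and the flow inspection below simply reuse the commands already shown in the log; the generated pod and namespace names are the ones from this run and are torn down at the end of the spec, so a live rerun would produce different names:

  # Cross-namespace connectivity probe; a timeout is the expected (passing) result.
  kubectl --server=https://172.17.0.2:8443 \
    --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig \
    exec --namespace=e2e-tests-net-services2-gzdwh execpod-sourceip-nettest-node-1946td -- \
    /bin/sh -c 'wget -T 30 -qO- 172.30.7.22:8080'
  # Isolation rules that enforce the VNID match (table 80, reg0==reg1 entries).
  kubectl --server=https://172.17.0.2:8443 \
    --kubeconfig=/tmp/openshift/networking/multitenant/openshift.local.config/master/admin.kubeconfig \
    exec --namespace=e2e-tests-net-services2-gzdwh debugpod-sourceip-nettest-node-16qbrg -- \
    /bin/sh -c 'ovs-ofctl -O OpenFlow13 dump-flows br0 table=80'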
Apr 5 20:47:26.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 5 20:47:26.534: INFO: namespace: e2e-tests-net-services2-gzdwh, resource: bindings, ignored listing per whitelist Apr 5 20:47:26.552: INFO: namespace e2e-tests-net-services2-gzdwh deletion completed in 6.111500027s • [SLOW TEST:64.818 seconds] [Area:Networking] services /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:415 should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40 ------------------------------ S ------------------------------ [Area:Networking] network isolation when using a plugin that isolates namespaces by default should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28 [BeforeEach] [Top Level] /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:53 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:416 [BeforeEach] when using a plugin that isolates namespaces by default /tmp/openshift/build-rpms/rpm/BUILD/origin-3.10.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134 STEP: Creating a kubernetes client Apr 5 20:47:26.552: INFO: >>> kubeConfig: /tmp/openshift/networking/multite