Skipping 11,516 KB..
st/e2e_federation/service.go:315
    Federated Service
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/service.go:169
      should recreate service shard in underlying clusters when service shard is deleted
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/service.go:168

      skipping tests not in the Origin conformance suite

      /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should implement legacy replacement when the update strategy is OnDelete
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:661
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] StatefulSet [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should implement legacy replacement when the update strategy is OnDelete
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:661

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
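Every "S [SKIPPING]" record in this log has the same shape: the spec's full description, then "skipping tests not in the Origin conformance suite" raised from test/extended/util/test.go:377 inside the [BeforeEach] [Top Level] hook registered at test.go:52. Because the skip fires in a shared top-level BeforeEach, Ginkgo reports it as happening "in Spec Setup (BeforeEach)" before the spec body ever runs. A minimal Ginkgo v1 sketch of that filtering pattern follows; the whitelist map and its sample entry are illustrative assumptions, not Origin's actual filter logic:

package conformance // hypothetical package, not part of the Origin tree

import g "github.com/onsi/ginkgo"

// originConformanceSpecs stands in for a hand-maintained whitelist keyed by
// full spec text; how Origin really decides membership is not shown here.
var originConformanceSpecs = map[string]bool{
	"[k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node": true,
	// ...
}

// A top-level BeforeEach runs before every spec's own setup, so specs
// rejected here show up as "S [SKIPPING] in Spec Setup (BeforeEach)".
var _ = g.BeforeEach(func() {
	if !originConformanceSpecs[g.CurrentGinkgoTestDescription().FullTestText] {
		g.Skip("skipping tests not in the Origin conformance suite")
	}
})

Skipping in one shared hook keeps the filter in a single place, at the cost that every excluded spec still appears in the report, which is why skip records dominate this log.
------------------------------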
[k8s.io] GKE local SSD [Feature:GKELocalSSD] 
  should write and read from node local SSD [Feature:GKELocalSSD]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] GKE local SSD [Feature:GKELocalSSD] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should write and read from node local SSD [Feature:GKELocalSSD]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Load capacity 
  [Feature:ManualPerformance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 2 daemons
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/perf/load.go:287
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Load capacity [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [Feature:ManualPerformance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 2 daemons
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/perf/load.go:287

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] SchedulerPriorities [Serial] 
  Pod should be scheduled to a node that satisfies the PodAffinity
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:190
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] SchedulerPriorities [Serial] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  Pod should be scheduled to a node that satisfies the PodAffinity
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:190

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  should provide container's memory limit [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:935
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should provide container's memory limit [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:935

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] completed builds should have digest of the image in their status S2I build started with normal log level 
  should save the image digest when finished
  /go/src/github.com/openshift/origin/test/extended/builds/digest.go:68
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] completed builds should have digest of the image in their status [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/digest.go:50
  S2I build
  /go/src/github.com/openshift/origin/test/extended/builds/digest.go:39
    started with normal log level
    /go/src/github.com/openshift/origin/test/extended/builds/digest.go:34
      should save the image digest when finished
      /go/src/github.com/openshift/origin/test/extended/builds/digest.go:68

      skipping tests not in the Origin conformance suite

      /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Reboot [Disruptive] [Feature:Reboot] 
  each node by triggering kernel panic and ensure they function upon restart
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/reboot.go:108
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[k8s.io] Reboot [Disruptive] [Feature:Reboot] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  each node by triggering kernel panic and ensure they function upon restart
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/reboot.go:108

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] update failure status Build status push image to registry failure 
  should contain the image push to registry failure reason and message
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:154
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] update failure status [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:213
  Build status push image to registry failure
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:155
    should contain the image push to registry failure reason and message
    /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:154

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:795
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:795

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  should be consumable from pods in volume with mappings [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:409
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be consumable from pods in volume with mappings [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:409

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Networking [k8s.io] Granular Checks: Pods 
  should function for intra-pod communication: http [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:38
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Networking [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Granular Checks: Pods
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should function for intra-pod communication: http [Conformance]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:38

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] Capabilities should be dropped for s2i builders s2i build with a rootable builder 
  should not be able to switch to root with an assemble script
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_dropcaps.go:44
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] Capabilities should be dropped for s2i builders [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/s2i_dropcaps.go:47
  s2i build with a rootable builder
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_dropcaps.go:45
    should not be able to switch to root with an assemble script
    /go/src/github.com/openshift/origin/test/extended/builds/s2i_dropcaps.go:44

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:90
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] EmptyDir volumes [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should support (non-root,0666,tmpfs) [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:90

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[job][Conformance] openshift can execute jobs controller 
  should create and run a job in user project
  /go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:53
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[job][Conformance] openshift can execute jobs [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:55
  controller
  /go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:54
    should create and run a job in user project
    /go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:53

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
deploymentconfigs with revision history limits [Conformance] 
  should never persist more old deployments than acceptable after being observed by the controller
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:891
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
deploymentconfigs [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1121
  with revision history limits [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:892
    should never persist more old deployments than acceptable after being observed by the controller
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:891

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:66
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Proxy [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  version v1
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:66

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
deploymentconfigs with custom deployments [Conformance] 
  should run the custom deployment steps
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:518
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
deploymentconfigs [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1121
  with custom deployments [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:519
    should run the custom deployment steps
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:518

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[templates] templateinstance cross-namespace test 
  should create and delete objects across namespaces
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:138
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[templates] templateinstance cross-namespace test [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:139
  should create and delete objects across namespaces
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:138

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses [Slow] 
  should not be deleted from underlying clusters when OrphanDependents is nil
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:223
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ingresses [Feature:Federation] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  Federated Ingresses [Slow]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:277
    should not be deleted from underlying clusters when OrphanDependents is nil
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:223

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[image_ecosystem][Slow] openshift images should be SCL enabled using the SCL in s2i images 
  "openshift/nodejs-010-centos7" should be SCL enabled
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  using the SCL in s2i images
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
    "openshift/nodejs-010-centos7" should be SCL enabled
    /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
NetworkPolicy when using a plugin that implements NetworkPolicy 
  should enforce policy based on Ports [Feature:NetworkPolicy]
  /go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:212
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:499
  when using a plugin that implements NetworkPolicy
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:286
    should enforce policy based on Ports [Feature:NetworkPolicy]
    /go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:212

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking 
  resource tracking for 100 pods per node
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[k8s.io] Kubelet [Serial] [Slow] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] regular resource usage tracking
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    resource tracking for 100 pods per node
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:58
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ResourceQuota [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should create a ResourceQuota and ensure its status is promptly calculated.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:58

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] SchedulerPriorities [Serial] 
  Pod should prefer to be scheduled to nodes the pod can tolerate
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:362
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] SchedulerPriorities [Serial] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  Pod should prefer to be scheduled to nodes the pod can tolerate
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:362

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Density 
  [Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/perf/density.go:771
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[k8s.io] Density [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/perf/density.go:771

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] 
  should update the taint on a node
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1550
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [k8s.io] Kubectl client
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  3 23:57:57.897: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Aug  3 23:57:57.973: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:291
[It] should update the taint on a node
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1550
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: adding the taint kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule to a node
Aug  3 23:58:02.584: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg353.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-primg353-ig-n-h1xq kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule'
Aug  3 23:58:02.958: INFO: stderr: ""
Aug  3 23:58:02.958: INFO: stdout: "node \"ci-primg353-ig-n-h1xq\" tainted\n"
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule
Aug  3 23:58:02.958: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg353.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig describe node ci-primg353-ig-n-h1xq'
Aug  3 23:58:03.396: INFO: stderr: ""
Aug  3 23:58:03.396: INFO: stdout: "Name:\t\t\tci-primg353-ig-n-h1xq\nRole:\t\t\t\nLabels:\t\t\tbeta.kubernetes.io/arch=amd64\n\t\t\tbeta.kubernetes.io/instance-type=n1-standard-2\n\t\t\tbeta.kubernetes.io/os=linux\n\t\t\tfailure-domain.beta.kubernetes.io/region=us-central1\n\t\t\tfailure-domain.beta.kubernetes.io/zone=us-central1-a\n\t\t\tkubernetes.io/hostname=ci-primg353-ig-n-h1xq\n\t\t\trole=app\nAnnotations:\t\tvolumes.kubernetes.io/controller-managed-attach-detach=true\nTaints:\t\t\tkubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule\nCreationTimestamp:\tThu, 03 Aug 2017 23:19:16 -0400\nConditions:\n  Type\t\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  NetworkUnavailable \tFalse \tMon, 01 Jan 0001 00:00:00 +0000 \tThu, 03 Aug 2017 23:19:16 -0400 \tRouteCreated \t\t\topenshift-sdn cleared kubelet-set NoRouteCreated\n  OutOfDisk \t\tFalse \tThu, 03 Aug 2017 23:57:53 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  MemoryPressure \tFalse \tThu, 03 Aug 2017 23:57:53 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasSufficientMemory \tkubelet has sufficient memory available\n  DiskPressure \t\tFalse \tThu, 03 Aug 2017 23:57:53 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasNoDiskPressure \tkubelet has no disk pressure\n  Ready \t\tTrue \tThu, 03 Aug 2017 23:57:53 -0400 \tThu, 03 Aug 2017 23:24:07 -0400 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\n  InternalIP:\t10.128.0.5\n  ExternalIP:\t35.192.237.230\n  Hostname:\tci-primg353-ig-n-h1xq\nCapacity:\n cpu:\t\t2\n memory:\t7494724Ki\n pods:\t\t20\nAllocatable:\n cpu:\t\t2\n memory:\t7392324Ki\n pods:\t\t20\nSystem Info:\n Machine ID:\t\t\t02f1ddb1415c4feba9880b2b8c4c5925\n System UUID:\t\t\t5E20F61F-5A3F-88A8-51F9-B04D2A959B45\n Boot ID:\t\t\t13ef8a71-48b8-4bf0-8de7-ae27c34bd081\n Kernel Version:\t\t3.10.0-514.6.1.el7.x86_64\n OS Image:\t\t\tRed Hat Enterprise Linux Server 7.3 (Maipo)\n Operating System:\t\tlinux\n Architecture:\t\t\tamd64\n Container Runtime Version:\tdocker://1.12.5\n Kubelet Version:\t\tv1.7.0+695f48a16f\n Kube-Proxy Version:\t\tv1.7.0+695f48a16f\nExternalID:\t\t\t8245047590822763150\nNon-terminated Pods:\t\t(0 in total)\n  Namespace\t\t\tName\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t----\t\t------------\t----------\t---------------\t-------------\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  0 (0%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeReady\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeNotReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeNotReady\n  33m\t\t33m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeReady\n"
STEP: removing the taint kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule of a node
Aug  3 23:58:03.396: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg353.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-primg353-ig-n-h1xq kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a:NoSchedule-'
Aug  3 23:58:03.771: INFO: stderr: ""
Aug  3 23:58:03.771: INFO: stdout: "node \"ci-primg353-ig-n-h1xq\" untainted\n"
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a
Aug  3 23:58:03.771: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg353.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig describe node ci-primg353-ig-n-h1xq'
Aug  3 23:58:04.206: INFO: stderr: ""
Aug  3 23:58:04.206: INFO: stdout: "Name:\t\t\tci-primg353-ig-n-h1xq\nRole:\t\t\t\nLabels:\t\t\tbeta.kubernetes.io/arch=amd64\n\t\t\tbeta.kubernetes.io/instance-type=n1-standard-2\n\t\t\tbeta.kubernetes.io/os=linux\n\t\t\tfailure-domain.beta.kubernetes.io/region=us-central1\n\t\t\tfailure-domain.beta.kubernetes.io/zone=us-central1-a\n\t\t\tkubernetes.io/hostname=ci-primg353-ig-n-h1xq\n\t\t\trole=app\nAnnotations:\t\tvolumes.kubernetes.io/controller-managed-attach-detach=true\nTaints:\t\t\t<none>\nCreationTimestamp:\tThu, 03 Aug 2017 23:19:16 -0400\nConditions:\n  Type\t\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  NetworkUnavailable \tFalse \tMon, 01 Jan 0001 00:00:00 +0000 \tThu, 03 Aug 2017 23:19:16 -0400 \tRouteCreated \t\t\topenshift-sdn cleared kubelet-set NoRouteCreated\n  OutOfDisk \t\tFalse \tThu, 03 Aug 2017 23:58:03 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  MemoryPressure \tFalse \tThu, 03 Aug 2017 23:58:03 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasSufficientMemory \tkubelet has sufficient memory available\n  DiskPressure \t\tFalse \tThu, 03 Aug 2017 23:58:03 -0400 \tThu, 03 Aug 2017 23:19:16 -0400 \tKubeletHasNoDiskPressure \tkubelet has no disk pressure\n  Ready \t\tTrue \tThu, 03 Aug 2017 23:58:03 -0400 \tThu, 03 Aug 2017 23:24:07 -0400 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\n  InternalIP:\t10.128.0.5\n  ExternalIP:\t35.192.237.230\n  Hostname:\tci-primg353-ig-n-h1xq\nCapacity:\n cpu:\t\t2\n memory:\t7494724Ki\n pods:\t\t20\nAllocatable:\n cpu:\t\t2\n memory:\t7392324Ki\n pods:\t\t20\nSystem Info:\n Machine ID:\t\t\t02f1ddb1415c4feba9880b2b8c4c5925\n System UUID:\t\t\t5E20F61F-5A3F-88A8-51F9-B04D2A959B45\n Boot ID:\t\t\t13ef8a71-48b8-4bf0-8de7-ae27c34bd081\n Kernel Version:\t\t3.10.0-514.6.1.el7.x86_64\n OS Image:\t\t\tRed Hat Enterprise Linux Server 7.3 (Maipo)\n Operating System:\t\tlinux\n Architecture:\t\t\tamd64\n Container Runtime Version:\tdocker://1.12.5\n Kubelet Version:\t\tv1.7.0+695f48a16f\n Kube-Proxy Version:\t\tv1.7.0+695f48a16f\nExternalID:\t\t\t8245047590822763150\nNon-terminated Pods:\t\t(0 in total)\n  Namespace\t\t\tName\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t----\t\t------------\t----------\t---------------\t-------------\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  0 (0%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  38m\t\t38m\t\t2\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  38m\t\t38m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeReady\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeAllocatableEnforced\tUpdated Node Allocatable limit across pods\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientDisk\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientDisk\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasSufficientMemory\tNode ci-primg353-ig-n-h1xq status is now: NodeHasSufficientMemory\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode ci-primg353-ig-n-h1xq status is now: NodeHasNoDiskPressure\n  34m\t\t34m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeNotReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeNotReady\n  33m\t\t33m\t\t1\tkubelet, ci-primg353-ig-n-h1xq\t\t\tNormal\t\tNodeReady\t\tNode ci-primg353-ig-n-h1xq status is now: NodeReady\n"
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-001-1a58e000-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule
[AfterEach] [k8s.io] Kubectl client
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Aug  3 23:58:04.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pp4nm" for this suite.
Aug  3 23:58:12.743: INFO: namespace: e2e-tests-kubectl-pp4nm, resource: bindings, ignored listing per whitelist
Aug  3 23:58:13.069: INFO: namespace e2e-tests-kubectl-pp4nm deletion completed in 8.744787851s

• [SLOW TEST:15.172 seconds]
[k8s.io] Kubectl client
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Kubectl taint [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should update the taint on a node
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1550
------------------------------
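The passing taint test above drives the whole round trip through the kubectl binary rather than the REST client: "taint nodes <node> key=value:NoSchedule" to add, "describe node" to verify, and the trailing "-" form to remove. A standalone Go sketch of the same flow with os/exec follows, assuming kubectl is on PATH; the node name is copied from the log, and the taint key is a simplified placeholder rather than the suite's generated e2e key:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl shells out the way the e2e framework does, failing fast on error.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %s: %v\n%s", strings.Join(args, " "), err, out)
	}
	return string(out)
}

func main() {
	node := "ci-primg353-ig-n-h1xq"    // node name taken from the log above
	key := "example.com/e2e-taint-key" // simplified placeholder taint key
	taint := key + "=testing-taint-value:NoSchedule"

	kubectl("taint", "nodes", node, taint)
	if !strings.Contains(kubectl("describe", "node", node), key) {
		log.Fatal("taint was not applied")
	}

	// A trailing "-" on key:effect removes the taint again.
	kubectl("taint", "nodes", node, key+":NoSchedule-")
	if strings.Contains(kubectl("describe", "node", node), key) {
		log.Fatal("taint was not removed")
	}
	fmt.Println("taint add/verify/remove round trip OK")
}

The test's verification step greps the describe output in much the same way, which is why the full node description is dumped twice in the record above.
------------------------------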
[image_ecosystem][Slow] openshift images should be SCL enabled using the SCL in s2i images 
  "openshift/perl-520-centos7" should be SCL enabled
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  using the SCL in s2i images
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
    "openshift/perl-520-centos7" should be SCL enabled
    /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/limit_range.go:103
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] LimitRange [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/limit_range.go:103

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] testing build configuration hooks testing postCommit hook 
  failing postCommit script
  /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:59
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] testing build configuration hooks [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/hooks.go:79
  testing postCommit hook
  /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:77
    failing postCommit script
    /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:59

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  should be consumable from pods in volume with mappings [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:56
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be consumable from pods in volume with mappings [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:56

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[templates] templateinstance impersonation tests 
  should pass impersonation deletion tests
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:316
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[templates] templateinstance impersonation tests [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:317
  should pass impersonation deletion tests
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:316

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
S
------------------------------
[builds][Slow] the s2i build should support proxies start build with broken proxy and a no_proxy override 
  should start a docker build and wait for the build to succeed
  /go/src/github.com/openshift/origin/test/extended/builds/proxy.go:78
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] the s2i build should support proxies [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/proxy.go:81
  start build with broken proxy and a no_proxy override
  /go/src/github.com/openshift/origin/test/extended/builds/proxy.go:79
    should start a docker build and wait for the build to succeed
    /go/src/github.com/openshift/origin/test/extended/builds/proxy.go:78

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  optional updates should be reflected in volume [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:382
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  optional updates should be reflected in volume [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:382

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] 
  should be able to add nodes
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resize_nodes.go:200
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Nodes [Disruptive] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Resize [Slow]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should be able to add nodes
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resize_nodes.go:200

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request 
  should support a client that connects, sends NO DATA, and disconnects
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:501
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Port forwarding [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] With a server listening on localhost
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    [k8s.io] that expects a client request
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:501

      skipping tests not in the Origin conformance suite

      /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request 
  should support a client that connects, sends NO DATA, and disconnects
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:479
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Port forwarding [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] With a server listening on 0.0.0.0
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    [k8s.io] that expects a client request
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:479

      skipping tests not in the Origin conformance suite

      /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[templates] templateinstance object kinds test 
  should create objects from varying API groups
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:63
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[templates] templateinstance object kinds test [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:64
  should create objects from varying API groups
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:63

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:96
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Probing container [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  with readiness probe that fails should never be ready and never restart [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:96

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Daemon set [Serial] 
  Should update pod when spec was updated and update strategy is RollingUpdate
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:358
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [k8s.io] Daemon set [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  3 23:58:13.077: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Aug  3 23:58:13.160: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Daemon set [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:111
[It] Should update pod when spec was updated and update strategy is RollingUpdate
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:358
Aug  3 23:58:13.768: INFO: Creating simple daemon set daemon-set with templateGeneration 999
STEP: Check that daemon pods launch on every node of the cluster.
Aug  3 23:58:13.872: INFO: Number of nodes with available pods: 0
Aug  3 23:58:13.872: INFO: Node ci-primg353-ig-m-l9ds is running more than one daemon pod
Aug  3 23:58:14.965: INFO: Number of nodes with available pods: 0
Aug  3 23:58:14.965: INFO: Node ci-primg353-ig-m-l9ds is running more than one daemon pod
Aug  3 23:58:15.964: INFO: Number of nodes with available pods: 1
Aug  3 23:58:15.964: INFO: Node ci-primg353-ig-n-98dm is running more than one daemon pod
Aug  3 23:58:16.962: INFO: Number of nodes with available pods: 4
Aug  3 23:58:16.962: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Make sure all daemon pods have correct template generation 999
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug  3 23:58:17.146: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:17.146: INFO: Wrong image for pod: daemon-set-mf5bq. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:17.146: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:17.146: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:18.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:18.177: INFO: Wrong image for pod: daemon-set-mf5bq. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:18.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:18.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:19.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:19.178: INFO: Wrong image for pod: daemon-set-mf5bq. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:19.178: INFO: Pod daemon-set-mf5bq is not available
Aug  3 23:58:19.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:19.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:20.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:20.177: INFO: Pod daemon-set-8gq9q is not available
Aug  3 23:58:20.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:20.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:21.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:21.177: INFO: Pod daemon-set-8gq9q is not available
Aug  3 23:58:21.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:21.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:22.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:22.177: INFO: Pod daemon-set-8gq9q is not available
Aug  3 23:58:22.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:22.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:23.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:23.177: INFO: Pod daemon-set-8gq9q is not available
Aug  3 23:58:23.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:23.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:24.178: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:24.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:24.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:25.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:25.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:25.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:25.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:26.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:26.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:26.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:26.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:27.178: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:27.178: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:27.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:27.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:28.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:28.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:28.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:28.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:29.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:29.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:29.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:29.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:30.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:30.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:30.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:30.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:31.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:31.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:31.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:31.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:32.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:32.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:32.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:32.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:33.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:33.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:33.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:33.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:34.178: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:34.178: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:34.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:34.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:35.178: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:35.178: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:35.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:35.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:36.177: INFO: Wrong image for pod: daemon-set-82fk7. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:36.177: INFO: Pod daemon-set-82fk7 is not available
Aug  3 23:58:36.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:36.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:37.177: INFO: Pod daemon-set-dbn6x is not available
Aug  3 23:58:37.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:37.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:38.177: INFO: Pod daemon-set-dbn6x is not available
Aug  3 23:58:38.177: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:38.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:39.179: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:39.179: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:40.178: INFO: Wrong image for pod: daemon-set-s6d60. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:40.178: INFO: Pod daemon-set-s6d60 is not available
Aug  3 23:58:40.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:41.180: INFO: Pod daemon-set-pgflv is not available
Aug  3 23:58:41.180: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:42.179: INFO: Pod daemon-set-pgflv is not available
Aug  3 23:58:42.179: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:43.178: INFO: Pod daemon-set-pgflv is not available
Aug  3 23:58:43.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:44.179: INFO: Pod daemon-set-pgflv is not available
Aug  3 23:58:44.179: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:45.177: INFO: Pod daemon-set-pgflv is not available
Aug  3 23:58:45.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:46.177: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:47.178: INFO: Wrong image for pod: daemon-set-sps7h. Expected: gcr.io/k8s-testimages/redis:e2e, got: gcr.io/google_containers/serve_hostname:v1.4.
Aug  3 23:58:47.178: INFO: Pod daemon-set-sps7h is not available
Aug  3 23:58:48.177: INFO: Pod daemon-set-ggfvl is not available
STEP: Make sure all daemon pods have correct template generation 1000
STEP: Check that daemon pods are still running on every node of the cluster.
Aug  3 23:58:48.297: INFO: Number of nodes with available pods: 3
Aug  3 23:58:48.297: INFO: Node ci-primg353-ig-n-m1r7 is running more than one daemon pod
Aug  3 23:58:49.387: INFO: Number of nodes with available pods: 3
Aug  3 23:58:49.387: INFO: Node ci-primg353-ig-n-m1r7 is running more than one daemon pod
Aug  3 23:58:50.387: INFO: Number of nodes with available pods: 4
Aug  3 23:58:50.387: INFO: Number of running nodes: 4, number of available pods: 4
[AfterEach] [k8s.io] Daemon set [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set" with reaper
Aug  3 23:58:51.664: INFO: Number of nodes with available pods: 0
Aug  3 23:58:51.664: INFO: Number of running nodes: 0, number of available pods: 0
Aug  3 23:58:51.696: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"extensions/v1beta1","metadata":{"selfLink":"/apis/extensions/v1beta1/namespaces/e2e-tests-daemonsets-l8s9p/daemonsets","resourceVersion":"33457"},"items":null}

Aug  3 23:58:51.728: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l8s9p/pods","resourceVersion":"33458"},"items":[{"metadata":{"name":"daemon-set-8gq9q","generateName":"daemon-set-","namespace":"e2e-tests-daemonsets-l8s9p","selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l8s9p/pods/daemon-set-8gq9q","uid":"270a09bd-78c9-11e7-8b3e-42010a800002","resourceVersion":"33451","creationTimestamp":"2017-08-04T03:58:19Z","deletionTimestamp":"2017-08-04T03:59:20Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"2538072950","daemonset-name":"daemon-set","pod-template-generation":"1000"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"DaemonSet\",\"namespace\":\"e2e-tests-daemonsets-l8s9p\",\"name\":\"daemon-set\",\"uid\":\"2385d485-78c9-11e7-8b3e-42010a800002\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"33348\"}}\n","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","name":"daemon-set","uid":"2385d485-78c9-11e7-8b3e-42010a800002","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-42lf8","secret":{"secretName":"default-token-42lf8","defaultMode":420}}],"containers":[{"name":"app","image":"gcr.io/k8s-testimages/redis:e2e","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-42lf8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD","SYS_CHROOT"]},"privileged":false,"seLinuxOptions":{"level":"s0:c59,c44"}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ci-primg353-ig-n-h1xq","securityContext":{"seLinuxOptions":{"level":"s0:c59,c44"}},"imagePullSecrets":[{"name":"default-dockercfg-87s9c"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.alpha.kubernetes.io/notReady","operator":"Exists","effect":"NoExecute"},{"key":"node.alpha.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:19Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:51Z","reason":"ContainersNotReady","message":"containers with unready status: [app]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:50Z"}],"hostIP":"10.128.0.5","startTime":"2017-08-04T03:58:19Z","containerStatuses":[{"name":"app","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-08-04T03:58:22Z","finishedAt":"2017-08-04T03:58:50Z","containerID":"docker://d3f5ed049b8cb2449a38f1a5dc40341b86eee304b81c0c4af1f9fe856a59db19"}},"lastState":{},"ready":false,"restartCount":0,"image":"gcr.io/k8s-testimages/redis:e2e","imageID":"docker-pullable://gcr.io/k8s-testimages/redis@sha256:2c3f1112c32a23ad0f14a0a6ff8ac8006e23b5c27de89a4b5287467eb4844dad","containerID":"docker://d3f5ed049b8cb2449a38f1a5dc40341b86eee304b81c0c4af1f9fe856a59db19"}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-dbn6x","generateName":"daemon-set-","namespace":"e2e-tests-daemonsets-l8s9p","selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l8s9p/pods/daemon-set-dbn6x","uid":"312d3fa9-78c9-11e7-8b3e-42010a800002","resourceVersion":"33452","creationTimestamp":"2017-08-04T03:58:36Z","deletionTimestamp":"2017-08-04T03:59:20Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"2538072950","daemonset-name":"daemon-set","pod-template-generation":"1000"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"DaemonSet\",\"namespace\":\"e2e-tests-daemonsets-l8s9p\",\"name\":\"daemon-set\",\"uid\":\"2385d485-78c9-11e7-8b3e-42010a800002\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"33370\"}}\n","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","name":"daemon-set","uid":"2385d485-78c9-11e7-8b3e-42010a800002","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-42lf8","secret":{"secretName":"default-token-42lf8","defaultMode":420}}],"containers":[{"name":"app","image":"gcr.io/k8s-testimages/redis:e2e","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-42lf8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD","SYS_CHROOT"]},"privileged":false,"seLinuxOptions":{"level":"s0:c59,c44"}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ci-primg353-ig-n-98dm","securityContext":{"seLinuxOptions":{"level":"s0:c59,c44"}},"imagePullSecrets":[{"name":"default-dockercfg-87s9c"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.alpha.kubernetes.io/notReady","operator":"Exists","effect":"NoExecute"},{"key":"node.alpha.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:36Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:51Z","reason":"ContainersNotReady","message":"containers with unready status: [app]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:39Z"}],"hostIP":"10.128.0.4","startTime":"2017-08-04T03:58:36Z","containerStatuses":[{"name":"app","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-08-04T03:58:38Z","finishedAt":"2017-08-04T03:58:50Z","containerID":"docker://778a3198c57f7329e06ce5e5599d826a614d4269047e3d3e7a7413d7e821b62d"}},"lastState":{},"ready":false,"restartCount":0,"image":"gcr.io/k8s-testimages/redis:e2e","imageID":"docker-pullable://gcr.io/k8s-testimages/redis@sha256:2c3f1112c32a23ad0f14a0a6ff8ac8006e23b5c27de89a4b5287467eb4844dad","containerID":"docker://778a3198c57f7329e06ce5e5599d826a614d4269047e3d3e7a7413d7e821b62d"}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-ggfvl","generateName":"daemon-set-","namespace":"e2e-tests-daemonsets-l8s9p","selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l8s9p/pods/daemon-set-ggfvl","uid":"37ab2d14-78c9-11e7-8b3e-42010a800002","resourceVersion":"33458","creationTimestamp":"2017-08-04T03:58:47Z","deletionTimestamp":"2017-08-04T03:59:20Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"2538072950","daemonset-name":"daemon-set","pod-template-generation":"1000"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"DaemonSet\",\"namespace\":\"e2e-tests-daemonsets-l8s9p\",\"name\":\"daemon-set\",\"uid\":\"2385d485-78c9-11e7-8b3e-42010a800002\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"33414\"}}\n","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","name":"daemon-set","uid":"2385d485-78c9-11e7-8b3e-42010a800002","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-42lf8","secret":{"secretName":"default-token-42lf8","defaultMode":420}}],"containers":[{"name":"app","image":"gcr.io/k8s-testimages/redis:e2e","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-42lf8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD","SYS_CHROOT"]},"privileged":false,"seLinuxOptions":{"level":"s0:c59,c44"}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ci-primg353-ig-n-m1r7","securityContext":{"seLinuxOptions":{"level":"s0:c59,c44"}},"imagePullSecrets":[{"name":"default-dockercfg-87s9c"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.alpha.kubernetes.io/notReady","operator":"Exists","effect":"NoExecute"},{"key":"node.alpha.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:47Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:49Z"}],"hostIP":"10.128.0.3","podIP":"172.16.4.109","startTime":"2017-08-04T03:58:47Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2017-08-04T03:58:49Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/k8s-testimages/redis:e2e","imageID":"docker-pullable://gcr.io/k8s-testimages/redis@sha256:2c3f1112c32a23ad0f14a0a6ff8ac8006e23b5c27de89a4b5287467eb4844dad","containerID":"docker://b908d5223f7771567e32195ebcc8243f89819c8a00147e31c341af4a9a43b646"}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-pgflv","generateName":"daemon-set-","namespace":"e2e-tests-daemonsets-l8s9p","selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l8s9p/pods/daemon-set-pgflv","uid":"33d24ea8-78c9-11e7-8b3e-42010a800002","resourceVersion":"33453","creationTimestamp":"2017-08-04T03:58:41Z","deletionTimestamp":"2017-08-04T03:59:20Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"2538072950","daemonset-name":"daemon-set","pod-template-generation":"1000"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"DaemonSet\",\"namespace\":\"e2e-tests-daemonsets-l8s9p\",\"name\":\"daemon-set\",\"uid\":\"2385d485-78c9-11e7-8b3e-42010a800002\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"33395\"}}\n","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","name":"daemon-set","uid":"2385d485-78c9-11e7-8b3e-42010a800002","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-42lf8","secret":{"secretName":"default-token-42lf8","defaultMode":420}}],"containers":[{"name":"app","image":"gcr.io/k8s-testimages/redis:e2e","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-42lf8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD","SYS_CHROOT"]},"privileged":false,"seLinuxOptions":{"level":"s0:c59,c44"}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ci-primg353-ig-m-l9ds","securityContext":{"seLinuxOptions":{"level":"s0:c59,c44"}},"imagePullSecrets":[{"name":"default-dockercfg-87s9c"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.alpha.kubernetes.io/notReady","operator":"Exists","effect":"NoExecute"},{"key":"node.alpha.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:41Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:51Z","reason":"ContainersNotReady","message":"containers with unready status: [app]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-08-04T03:58:45Z"}],"hostIP":"10.128.0.2","startTime":"2017-08-04T03:58:41Z","containerStatuses":[{"name":"app","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-08-04T03:58:44Z","finishedAt":"2017-08-04T03:58:50Z","containerID":"docker://8ade187bfda80e6f20af8f3573a4d32bc76a7db91c9a600b1696f6783235d55a"}},"lastState":{},"ready":false,"restartCount":0,"image":"gcr.io/k8s-testimages/redis:e2e","imageID":"docker-pullable://gcr.io/k8s-testimages/redis@sha256:2c3f1112c32a23ad0f14a0a6ff8ac8006e23b5c27de89a4b5287467eb4844dad","containerID":"docker://8ade187bfda80e6f20af8f3573a4d32bc76a7db91c9a600b1696f6783235d55a"}],"qosClass":"BestEffort"}}]}

[AfterEach] [k8s.io] Daemon set [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Aug  3 23:58:51.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-l8s9p" for this suite.
Aug  3 23:59:17.666: INFO: namespace: e2e-tests-daemonsets-l8s9p, resource: bindings, ignored listing per whitelist
Aug  3 23:59:18.675: INFO: namespace e2e-tests-daemonsets-l8s9p deletion completed in 26.735691541s

• [SLOW TEST:65.599 seconds]
[k8s.io] Daemon set [Serial]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  Should update pod when spec was updated and update strategy is RollingUpdate
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:358
------------------------------
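For context, the RollingUpdate test above works by patching the DaemonSet's pod template image (serve_hostname:v1.4 to redis:e2e) and polling until every node runs the new image, which is exactly what the "Wrong image for pod" lines record. A minimal client-go sketch of that trigger, assuming a reachable cluster; the namespace and kubeconfig path are placeholders lifted from this log, and the calls use current client-go signatures rather than the extensions/v1beta1, context-free API in the 2017 vendored tree:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder paths/names taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/cluster-admin.kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "e2e-tests-daemonsets-l8s9p"

        ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Changing the pod template is what triggers the controller to
        // replace pods node by node under the RollingUpdate strategy.
        ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/k8s-testimages/redis:e2e"
        if _, err := cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("daemon-set image updated; pods will be replaced per node")
    }

------------------------------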
[builds][pruning] prune builds based on settings in the buildconfig 
  should prune completed builds based on the successfulBuildsHistoryLimit setting
  /go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:69
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][pruning] prune builds based on settings in the buildconfig [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:185
  should prune completed builds based on the successfulBuildsHistoryLimit setting
  /go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:69

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
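The skipped pruning test above keys on the successfulBuildsHistoryLimit (and its sibling failedBuildsHistoryLimit) field of a BuildConfig. A rough sketch of just those fields, assuming the current github.com/openshift/api/build/v1 types (the vendored tree in this log predates that repository); the name and limits are placeholders:

    package sketch

    import (
        buildv1 "github.com/openshift/api/build/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // pruningLimits returns a skeletal BuildConfig showing the two history
    // limits the pruning tests exercise; all values here are placeholders.
    func pruningLimits() *buildv1.BuildConfig {
        successful, failed := int32(2), int32(1)
        return &buildv1.BuildConfig{
            ObjectMeta: metav1.ObjectMeta{Name: "example-bc"},
            Spec: buildv1.BuildConfigSpec{
                // Keep at most 2 completed and 1 failed build; older
                // builds beyond these counts are pruned automatically.
                SuccessfulBuildsHistoryLimit: &successful,
                FailedBuildsHistoryLimit:     &failed,
            },
        }
    }

------------------------------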
[k8s.io] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:268
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [k8s.io] Namespaces [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  3 23:59:18.677: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Aug  3 23:59:18.761: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:268
STEP: Creating a test namespace
Aug  3 23:59:19.245: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
Aug  3 23:59:25.776: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Verifying there is no service in the namespace
[AfterEach] [k8s.io] Namespaces [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Aug  3 23:59:26.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-5mds5" for this suite.
Aug  3 23:59:34.723: INFO: namespace: e2e-tests-namespaces-5mds5, resource: bindings, ignored listing per whitelist
Aug  3 23:59:34.931: INFO: namespace e2e-tests-namespaces-5mds5 deletion completed in 8.734329231s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rz9wt" for this suite.
Aug  3 23:59:34.961: INFO: Namespace e2e-tests-nsdeletetest-rz9wt was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-cs2cn" for this suite.
Aug  3 23:59:43.088: INFO: namespace: e2e-tests-nsdeletetest-cs2cn, resource: bindings, ignored listing per whitelist
Aug  3 23:59:43.743: INFO: namespace e2e-tests-nsdeletetest-cs2cn deletion completed in 8.781321108s

• [SLOW TEST:25.066 seconds]
[k8s.io] Namespaces [Serial]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should ensure that all services are removed when a namespace is deleted.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:268
------------------------------
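The Namespaces test above creates a service in a throwaway namespace, deletes the namespace, waits for it to disappear, recreates it, and then checks that the recreated namespace came back empty. A minimal sketch of that final verification step, assuming a clientset built as in the DaemonSet sketch earlier; the namespace name is a placeholder:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNoServices mirrors the test's last step: after the namespace is
    // deleted and recreated, listing its services must come back empty.
    func verifyNoServices(cs *kubernetes.Clientset, ns string) error {
        svcs, err := cs.CoreV1().Services(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        if n := len(svcs.Items); n != 0 {
            return fmt.Errorf("expected 0 services in recreated namespace %q, found %d", ns, n)
        }
        return nil
    }

------------------------------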
[builds][Slow] update failure status Build status fetch builder image failure 
  should contain the fetch builder image failure reason and message
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:135
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] update failure status [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:213
  Build status fetch builder image failure
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:136
    should contain the fetch builder image failure reason and message
    /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:135

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[Feature:ImageQuota] Image limit range 
  should deny a push of built image exceeding limit on openshift.io/images resource
  /go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:123
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Feature:ImageQuota] Image limit range [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:225
  should deny a push of built image exceeding limit on openshift.io/images resource
  /go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:123

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/expansion.go:132
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Variable Expansion [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should allow substituting values in a container's args [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/expansion.go:132

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] ConfigMap 
  updates should be reflected in volume [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:155
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ConfigMap [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  updates should be reflected in volume [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:155

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[networking] network isolation when using a plugin that isolates namespaces by default 
  should allow communication from non-default to default namespace on the same node
  /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:52
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] network isolation [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:58
  when using a plugin that isolates namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:274
    should allow communication from non-default to default namespace on the same node
    /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:52

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Certificates API 
  should support building a client with a CSR
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/certificates.go:120
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Certificates API [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should support building a client with a CSR
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/certificates.go:120

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] [Feature:Example] [k8s.io] Storm 
  should create and stop Zookeeper, Nimbus and Storm worker servers
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/examples.go:392
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [Feature:Example] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Storm
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should create and stop Zookeeper, Nimbus and Storm worker servers
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/examples.go:392

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] update failure status Build status fetch S2I source failure 
  should contain the S2I fetch source failure reason and message
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:116
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] update failure status [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:213
  Build status fetch S2I source failure
  /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:117
    should contain the S2I fetch source failure reason and message
    /go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:116

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Networking 
  should check kube-proxy urls
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:89
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Networking [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should check kube-proxy urls
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:89

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Empty [Feature:Empty] 
  starts a pod
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/empty.go:53
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Empty [Feature:Empty] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  starts a pod
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/empty.go:53

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 
  should support forwarding over websockets
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:493
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Port forwarding [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] With a server listening on 0.0.0.0
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should support forwarding over websockets
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:493

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] 
  should come back up if node goes down [Slow] [Disruptive]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:375
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] [StatefulSet]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should come back up if node goes down [Slow] [Disruptive]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:375

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[networking] services basic functionality 
  should allow connections to another pod on the same node via a service IP
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:16
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
  basic functionality
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:21
    should allow connections to another pod on the same node via a service IP
    /go/src/github.com/openshift/origin/test/extended/networking/services.go:16

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Staging client repo client 
  should create pods, delete pods, watch pods
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/generated_clientset.go:377
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Staging client repo client [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should create pods, delete pods, watch pods
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/generated_clientset.go:377

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1098
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Kubectl logs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should be able to retrieve and filter logs [Conformance]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1098

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Networking 
  should provide unchanging, static URL paths for kubernetes api services
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:79
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Networking [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should provide unchanging, static URL paths for kubernetes api services
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:79

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
S
------------------------------
[builds][Slow] result image should have proper labels set S2I build from a template 
  should create an image from "/tmp/fixture-testdata-dir109680035/test/extended/testdata/test-s2i-build.json" template with proper Docker labels
  /go/src/github.com/openshift/origin/test/extended/builds/labels.go:54
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] result image should have proper labels set [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/labels.go:85
  S2I build from a template
  /go/src/github.com/openshift/origin/test/extended/builds/labels.go:55
    should create an image from "/tmp/fixture-testdata-dir109680035/test/extended/testdata/test-s2i-build.json" template with proper Docker labels
    /go/src/github.com/openshift/origin/test/extended/builds/labels.go:54

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:60
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Secrets [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:60

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Deployment 
  iterative rollouts should eventually progress
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:119
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  iterative rollouts should eventually progress
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:119

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Probing container 
  should be restarted with a docker exec liveness probe with timeout [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:266
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Probing container [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be restarted with a docker exec liveness probe with timeout [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:266

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob 
  should create a ScheduledJob
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:229
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl alpha client [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Kubectl run ScheduledJob
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should create a ScheduledJob
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:229

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
deploymentconfigs with multiple image change triggers [Conformance] 
  should run a successful deployment with a trigger used by different containers
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:456
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
deploymentconfigs [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1121
  with multiple image change triggers [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:457
    should run a successful deployment with a trigger used by different containers
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:456

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] incremental s2i build Building from a template 
  should create a build from "/tmp/fixture-testdata-dir109680035/test/extended/testdata/incremental-auth-build.json" template and run it
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:75
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] incremental s2i build [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:77
  Building from a template
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:76
    should create a build from "/tmp/fixture-testdata-dir109680035/test/extended/testdata/incremental-auth-build.json" template and run it
    /go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:75

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:81
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Secrets [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:81

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] ReplicaSet 
  should serve a basic image on each replica with a public image [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/replica_set.go:82
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ReplicaSet [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should serve a basic image on each replica with a public image [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/replica_set.go:82

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] DNS 
  should provide DNS for ExternalName services
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:488
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] DNS [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should provide DNS for ExternalName services
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:488

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[Conformance][image_ecosystem][mongodb][Slow] openshift mongodb replication (with statefulset) creating from a template 
  should instantiate the template
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:122
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Conformance][image_ecosystem][mongodb][Slow] openshift mongodb replication (with statefulset) [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:125
  creating from a template
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:123
    should instantiate the template
    /go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:122

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Deployment 
  paused deployment should be ignored by the controller
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:95
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  paused deployment should be ignored by the controller
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:95

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[image_ecosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 
  "openshift/nodejs-010-centos7" should print the usage
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  returning s2i usage when running the image
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
    "openshift/nodejs-010-centos7" should print the usage
    /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] HostPath 
  should give a volume the correct mode [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:56
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] HostPath [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should give a volume the correct mode [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:56

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] testing build configuration hooks testing postCommit hook 
  failing postCommit default entrypoint
  /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:75
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] testing build configuration hooks [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/hooks.go:79
  testing postCommit hook
  /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:77
    failing postCommit default entrypoint
    /go/src/github.com/openshift/origin/test/extended/builds/hooks.go:75

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] NetworkPolicy 
  should support a 'default-deny' policy [Feature:NetworkPolicy]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_policy.go:76
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] NetworkPolicy [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should support a 'default-deny' policy [Feature:NetworkPolicy]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_policy.go:76

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
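For reference, the 'default-deny' behavior named in the skipped NetworkPolicy test above corresponds to a policy with an empty podSelector and an Ingress policyType but no ingress rules. A sketch using the networking.k8s.io/v1 API of current client-go (a 2017 cluster like this one served NetworkPolicy under extensions/v1beta1); the namespace is whatever the caller passes in:

    package sketch

    import (
        "context"

        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createDefaultDeny installs an ingress default-deny: the empty
    // podSelector matches every pod in the namespace, and listing Ingress
    // with no rules blocks all inbound traffic to those pods.
    func createDefaultDeny(cs *kubernetes.Clientset, ns string) error {
        np := &networkingv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{Name: "default-deny"},
            Spec: networkingv1.NetworkPolicySpec{
                PodSelector: metav1.LabelSelector{},
                PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
            },
        }
        _, err := cs.NetworkingV1().NetworkPolicies(ns).Create(context.TODO(), np, metav1.CreateOptions{})
        return err
    }

------------------------------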
[builds][pruning] prune builds based on settings in the buildconfig 
  should prune failed builds based on the failedBuildsHistoryLimit setting
  /go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:97
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][pruning] prune builds based on settings in the buildconfig [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:185
  should prune failed builds based on the failedBuildsHistoryLimit setting
  /go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:97

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[networking] services when using a plugin that isolates namespaces by default 
  should allow connections from pods in the default namespace to a service in another namespace on the same node
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:59
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
  when using a plugin that isolates namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:274
    should allow connections from pods in the default namespace to a service in another namespace on the same node
    /go/src/github.com/openshift/origin/test/extended/networking/services.go:59

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[networking] services when using a plugin that isolates namespaces by default 
  should prevent connections to pods in different namespaces on the same node via service IPs
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:42
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
  when using a plugin that isolates namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:274
    should prevent connections to pods in different namespaces on the same node via service IPs
    /go/src/github.com/openshift/origin/test/extended/networking/services.go:42

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
S
------------------------------
[k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] 
  should handle the creation of 1000 pods
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pod_gc.go:79
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should handle the creation of 1000 pods
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pod_gc.go:79

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] ESIPP [Slow] 
  should handle updates to ExternalTrafficPolicy field
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1618
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ESIPP [Slow] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should handle updates to ExternalTrafficPolicy field
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1618

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Proxy version v1 
  should proxy to cadvisor
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:64
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Proxy [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  version v1
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275
    should proxy to cadvisor
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:64

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] DisruptionController 
  should create a PodDisruptionBudget
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/disruption.go:56
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] DisruptionController [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should create a PodDisruptionBudget
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/disruption.go:56

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
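[NOTE] The skipped DisruptionController spec creates a PodDisruptionBudget upstream. A minimal client-go sketch of such an object, assuming the policy/v1beta1 API and pre-context client signatures of this era; the names, namespace, selector, and minAvailable value are illustrative.

    package example

    import (
    	policyv1beta1 "k8s.io/api/policy/v1beta1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    	"k8s.io/client-go/kubernetes"
    )

    // createDemoPDB creates a PodDisruptionBudget that keeps at least one
    // pod labeled app=demo available during voluntary disruptions.
    func createDemoPDB(client kubernetes.Interface, ns string) error {
    	minAvailable := intstr.FromInt(1)
    	pdb := &policyv1beta1.PodDisruptionBudget{
    		ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb", Namespace: ns},
    		Spec: policyv1beta1.PodDisruptionBudgetSpec{
    			MinAvailable: &minAvailable,
    			Selector: &metav1.LabelSelector{
    				MatchLabels: map[string]string{"app": "demo"},
    			},
    		},
    	}
    	_, err := client.PolicyV1beta1().PodDisruptionBudgets(ns).Create(pdb)
    	return err
    }
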
[k8s.io] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1389
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] Kubectl run pod
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should create a pod from an image when restart is Never [Conformance]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1389

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:442
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Pods [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should contain environment variables for services [Conformance]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:442

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that Inter-pod-Affinity is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:451
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  3 23:59:43.770: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Aug  3 23:59:43.854: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:109
Aug  3 23:59:44.249: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  3 23:59:44.310: INFO: Waiting for terminating namespaces to be deleted...
Aug  3 23:59:44.370: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-m-l9ds before test
Aug  3 23:59:44.434: INFO: docker-registry-2-qfj1z from default started at 2017-08-03 23:24:10 -0400 EDT (1 container status recorded)
Aug  3 23:59:44.434: INFO: 	Container registry ready: true, restart count 0
Aug  3 23:59:44.434: INFO: registry-console-1-wq3fw from default started at 2017-08-03 23:24:27 -0400 EDT (1 container status recorded)
Aug  3 23:59:44.434: INFO: 	Container registry-console ready: true, restart count 0
Aug  3 23:59:44.434: INFO: router-1-b211x from default started at 2017-08-03 23:22:09 -0400 EDT (1 container status recorded)
Aug  3 23:59:44.434: INFO: 	Container router ready: true, restart count 0
Aug  3 23:59:44.434: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-98dm before test
Aug  3 23:59:44.498: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-h1xq before test
Aug  3 23:59:44.561: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-m1r7 before test
Aug  3 23:59:44.626: INFO: prometheus-1552260379-6cz9c from kube-system started at 2017-08-03 23:39:58 -0400 EDT (2 container statuses recorded)
Aug  3 23:59:44.626: INFO: 	Container oauth-proxy ready: true, restart count 0
Aug  3 23:59:44.626: INFO: 	Container prometheus ready: true, restart count 0
[It] validates that Inter-pod-Affinity is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:451
STEP: Trying to schedule Pod with nonempty Pod Affinity.
I0803 23:59:44.657788   24839 reflector.go:213] Starting reflector *v1.Event (0s) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
I0803 23:59:44.657818   24839 reflector.go:251] Listing and watching *v1.Event from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
STEP: Considering event: 
Type = [Warning], Name = [without-label-59b05054-78c9-11e7-87ac-0ebf9a948e7a.14d7889770d3544e], Reason = [FailedScheduling], Message = [No nodes are available that match all of the following predicates: MatchInterPodAffinity (4), MatchNodeSelector (1).]
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Aug  3 23:59:45.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-v7sg4" for this suite.
Aug  4 00:00:12.004: INFO: namespace: e2e-tests-sched-pred-v7sg4, resource: bindings, ignored listing per whitelist
Aug  4 00:00:12.631: INFO: namespace e2e-tests-sched-pred-v7sg4 deletion completed in 26.762330229s
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:73
I0804 00:00:12.631802   24839 request.go:782] Error in request: resource name may not be empty

• [SLOW TEST:28.861 seconds]
[k8s.io] SchedulerPredicates [Serial]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  validates that Inter-pod-Affinity is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:451
------------------------------
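[NOTE] The passing spec above submits a pod whose required pod-affinity term matches no running pod, then asserts the FailedScheduling event (MatchInterPodAffinity) recorded in the log. A sketch of such a pod spec using the core/v1 types; the label selector value is an illustrative assumption.

    package example

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // unmatchableAffinityPod builds a pod whose required pod-affinity term
    // cannot be satisfied, so every node fails MatchInterPodAffinity and the
    // pod stays Pending with a FailedScheduling event.
    func unmatchableAffinityPod() *v1.Pod {
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "without-label"},
    		Spec: v1.PodSpec{
    			Affinity: &v1.Affinity{
    				PodAffinity: &v1.PodAffinity{
    					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
    						LabelSelector: &metav1.LabelSelector{
    							MatchLabels: map[string]string{"service": "no-such-service"},
    						},
    						TopologyKey: "kubernetes.io/hostname",
    					}},
    				},
    			},
    			Containers: []v1.Container{{
    				Name:  "pause",
    				Image: "gcr.io/google_containers/pause-amd64:3.0",
    			}},
    		},
    	}
    }
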
[networking] services when using a plugin that does not isolate namespaces by default 
  should allow connections to pods in different namespaces on different nodes via service IPs
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:33
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
  when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:262
    should allow connections to pods in different namespaces on different nodes via service IPs
    /go/src/github.com/openshift/origin/test/extended/networking/services.go:33

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] build can have Docker image source build with image docker 
  should complete successfully and contain the expected file
  /go/src/github.com/openshift/origin/test/extended/builds/image_source.go:77
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] build can have Docker image source [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:81
  build with image docker
  /go/src/github.com/openshift/origin/test/extended/builds/image_source.go:79
    should complete successfully and contain the expected file
    /go/src/github.com/openshift/origin/test/extended/builds/image_source.go:77

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] GCP Volumes [k8s.io] GlusterFS 
  should be mountable [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/volumes.go:229
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] GCP Volumes [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] GlusterFS
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should be mountable [Volume]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/volumes.go:229

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] 
  should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:405
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] [StatefulSet]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:405

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[builds][Slow] builds with a context directory s2i context directory build 
  should s2i build an application using a context directory
  /go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:83
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[builds][Slow] builds with a context directory [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:108
  s2i context directory build
  /go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:84
    should s2i build an application using a context directory
    /go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:83

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] AppArmor load AppArmor profiles 
  should enforce an AppArmor profile
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/apparmor.go:43
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] AppArmor [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  load AppArmor profiles
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/apparmor.go:44
    should enforce an AppArmor profile
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/apparmor.go:43

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that taints-tolerations is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:746
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  4 00:00:12.635: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Aug  4 00:00:12.741: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:109
Aug  4 00:00:13.124: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  4 00:00:13.186: INFO: Waiting for terminating namespaces to be deleted...
Aug  4 00:00:13.247: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-m-l9ds before test
Aug  4 00:00:13.309: INFO: router-1-b211x from default started at 2017-08-03 23:22:09 -0400 EDT (1 container status recorded)
Aug  4 00:00:13.309: INFO: 	Container router ready: true, restart count 0
Aug  4 00:00:13.309: INFO: registry-console-1-wq3fw from default started at 2017-08-03 23:24:27 -0400 EDT (1 container status recorded)
Aug  4 00:00:13.309: INFO: 	Container registry-console ready: true, restart count 0
Aug  4 00:00:13.309: INFO: docker-registry-2-qfj1z from default started at 2017-08-03 23:24:10 -0400 EDT (1 container status recorded)
Aug  4 00:00:13.309: INFO: 	Container registry ready: true, restart count 0
Aug  4 00:00:13.309: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-98dm before test
Aug  4 00:00:13.370: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-h1xq before test
Aug  4 00:00:13.432: INFO: 
Logging pods the kubelet thinks are on node ci-primg353-ig-n-m1r7 before test
Aug  4 00:00:13.495: INFO: prometheus-1552260379-6cz9c from kube-system started at 2017-08-03 23:39:58 -0400 EDT (2 container statuses recorded)
Aug  4 00:00:13.495: INFO: 	Container oauth-proxy ready: true, restart count 0
Aug  4 00:00:13.495: INFO: 	Container prometheus ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:746
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6d602a27-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-6d6e5ffd-78c9-11e7-87ac-0ebf9a948e7a testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
I0804 00:00:17.843742   24839 reflector.go:213] Starting reflector *v1.Event (0s) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
I0804 00:00:17.843772   24839 reflector.go:251] Listing and watching *v1.Event from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e2460ae30], Reason = [Scheduled], Message = [Successfully assigned without-toleration to ci-primg353-ig-n-98dm]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e32d93186], Reason = [SuccessfulMountVolume], Message = [MountVolume.SetUp succeeded for volume "default-token-nl5rx" ]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e71724ef3], Reason = [Pulled], Message = [Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e87516100], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e9819c2e0], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889f232264cc], Reason = [Killing], Message = [Killing container with id docker://without-toleration: Need to kill Pod]
STEP: Considering event: 
Type = [Warning], Name = [still-no-tolerations.14d7889f2aebb141], Reason = [FailedScheduling], Message = [No nodes are available that match all of the following predicates: MatchNodeSelector (3), PodToleratesNodeTaints (1).]
STEP: Removing taint off the node
I0804 00:00:18.999325   24839 reflector.go:213] Starting reflector *v1.Event (0s) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
I0804 00:00:18.999353   24839 reflector.go:251] Listing and watching *v1.Event from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/events.go:135
STEP: Considering event: 
Type = [Warning], Name = [still-no-tolerations.14d7889f2aebb141], Reason = [FailedScheduling], Message = [No nodes are available that match all of the following predicates: MatchNodeSelector (3), PodToleratesNodeTaints (1).]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e2460ae30], Reason = [Scheduled], Message = [Successfully assigned without-toleration to ci-primg353-ig-n-98dm]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e32d93186], Reason = [SuccessfulMountVolume], Message = [MountVolume.SetUp succeeded for volume "default-token-nl5rx" ]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e71724ef3], Reason = [Pulled], Message = [Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e87516100], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889e9819c2e0], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [without-toleration.14d7889f232264cc], Reason = [Killing], Message = [Killing container with id docker://without-toleration: Need to kill Pod]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6d602a27-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule
STEP: Considering event: 
Type = [Normal], Name = [still-no-tolerations.14d7889fde59b638], Reason = [Scheduled], Message = [Successfully assigned still-no-tolerations to ci-primg353-ig-n-98dm]
STEP: removing the label kubernetes.io/e2e-label-key-6d6e5ffd-78c9-11e7-87ac-0ebf9a948e7a off the node ci-primg353-ig-n-98dm
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-6d6e5ffd-78c9-11e7-87ac-0ebf9a948e7a
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6d602a27-78c9-11e7-87ac-0ebf9a948e7a=testing-taint-value:NoSchedule
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Aug  4 00:00:21.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-fvmv9" for this suite.
Aug  4 00:00:47.525: INFO: namespace: e2e-tests-sched-pred-fvmv9, resource: bindings, ignored listing per whitelist
Aug  4 00:00:48.134: INFO: namespace e2e-tests-sched-pred-fvmv9 deletion completed in 26.73847754s
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:73
I0804 00:00:48.134479   24839 request.go:782] Error in request: resource name may not be empty

• [SLOW TEST:35.499 seconds]
[k8s.io] SchedulerPredicates [Serial]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  validates that taints-tolerations is respected if not matching
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:746
------------------------------
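[NOTE] The spec above applies a NoSchedule taint to a node, relaunches a pod with no matching toleration, and expects the PodToleratesNodeTaints failure shown in the events. A sketch of the taint involved and of the toleration a pod would need to schedule onto that node anyway; the key is illustrative (the test appends a random UUID).

    package example

    import v1 "k8s.io/api/core/v1"

    // taint mirrors the NoSchedule taint the test puts on the node.
    var taint = v1.Taint{
    	Key:    "kubernetes.io/e2e-taint-key-example",
    	Value:  "testing-taint-value",
    	Effect: v1.TaintEffectNoSchedule,
    }

    // toleration is what the "still-no-tolerations" pod lacks; appending it
    // to pod.Spec.Tolerations would satisfy PodToleratesNodeTaints.
    var toleration = v1.Toleration{
    	Key:      taint.Key,
    	Operator: v1.TolerationOpEqual,
    	Value:    taint.Value,
    	Effect:   v1.TaintEffectNoSchedule,
    }
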
[image_ecosystem][Slow] openshift images should be SCL enabled using the SCL in s2i images 
  "openshift/python-27-centos7" should be SCL enabled
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  using the SCL in s2i images
  /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
    "openshift/python-27-centos7" should be SCL enabled
    /go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[cli][Slow] can use rsync to upload files to pods copy by strategy 
  should copy files with the rsync strategy
  /go/src/github.com/openshift/origin/test/extended/cli/rsync.go:283
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[cli][Slow] can use rsync to upload files to pods [BeforeEach]
/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:396
  copy by strategy
  /go/src/github.com/openshift/origin/test/extended/cli/rsync.go:285
    should copy files with the rsync strategy
    /go/src/github.com/openshift/origin/test/extended/cli/rsync.go:283

    skipping tests not in the Origin conformance suite

    /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Projected 
  should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:45
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:45

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
[k8s.io] Deployment 
  paused deployment should be able to scale
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:107
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  paused deployment should be able to scale
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:107

  skipping tests not in the Origin conformance suite

  /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
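[NOTE] The skipped Deployment spec verifies upstream that a paused deployment can still be scaled: pausing stops rollouts of template changes, but replica-count changes are still honored. A hedged client-go sketch of that sequence, assuming the extensions/v1beta1 Deployment API and pre-context client signatures of this era; names are illustrative.

    package example

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // pauseAndScale pauses a deployment and then changes its replica count;
    // the upstream test asserts the new count is reconciled despite the pause.
    func pauseAndScale(client kubernetes.Interface, ns, name string, replicas int32) error {
    	d, err := client.ExtensionsV1beta1().Deployments(ns).Get(name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	d.Spec.Paused = true
    	d.Spec.Replicas = &replicas
    	_, err = client.ExtensionsV1beta1().Deployments(ns).Update(d)
    	return err
    }
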
[k8s.io] kubelet [k8s.io] host cleanup with volume mounts [Volume][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] 
  after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:469
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:52

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] kubelet [BeforeEach]
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
  [k8s.io] host cleanup with volume mounts [Volume][HostCleanup][Flaky]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:619
    Host cleanup after disrupting NFS volume [NFS]
    /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:471
      after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:469

      skipping tests not in the Origin conformance suite

      /go/src/github.com/openshift/origin/test/extended/util/test.go:377
------------------------------
Aug  4 00:00:48.137: INFO: Running AfterSuite actions on all nodes
Aug  4 00:00:48.137: INFO: Running AfterSuite actions on node 1

Ran 26 of 815 Specs in 881.288 seconds
SUCCESS! -- 26 Passed | 0 Failed | 4 Pending | 785 Skipped
Aug  4 00:00:48.143: INFO: Error running cluster/log-dump.sh: fork/exec /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/cluster/log-dump.sh: no such file or directory
PASS

Ginkgo ran 1 suite in 14m41.763525259s
Test Suite Passed
[INFO] [CLEANUP] Beginning cleanup routines...
[INFO] [CLEANUP] Dumping cluster events to _output/scripts/conformance/artifacts/events.txt
Logged into "https://internal-api.primg353.origin-ci-int-gce.dev.rhcloud.com:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    logging
    management-infra
    openshift
    openshift-infra

Using project "default".
[INFO] [CLEANUP] Dumping container logs to _output/scripts/conformance/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
[INFO] [CLEANUP] Stopping docker containers
[INFO] [CLEANUP] Removing docker containers
[INFO] [CLEANUP] Killing child processes
[INFO] test/extended/conformance.sh exited with code 0 after 00h 24m 38s

real	24m37.933s
user	2m55.034s
sys	0m26.327s
+ [[ branch_success == \b\r\a\n\c\h\_\s\u\c\c\e\s\s ]]
+ [[ '' != 1 ]]
+ [[ 1 == 1 ]]
+ to=docker.io/openshift/origin-gce:latest
+ sudo docker tag openshift/origin-gce:latest docker.io/openshift/origin-gce:latest
+ sudo docker push docker.io/openshift/origin-gce:latest
The push refers to a repository [docker.io/openshift/origin-gce]
d2e57a9986b8: Preparing
94184153d811: Preparing
02d6aa5a7feb: Preparing
773bd37ef574: Preparing
b51149973e6a: Preparing
b51149973e6a: Layer already exists
773bd37ef574: Layer already exists
d2e57a9986b8: Pushed
94184153d811: Pushed
02d6aa5a7feb: Pushed
latest: digest: sha256:283506befbc649cafcb194d7f51ffd7a1b0534fe0f59551c4eba19c73f5266b2 size: 1373
+ exit 0
+ gather
+ set +e
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ oc get --raw /metrics
+ set -e
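[NOTE] The gather step above lists node names with a template and saves each node's /api/v1/nodes/<name>/proxy/metrics response under /tmp/artifacts. A client-go sketch of the same loop, assuming the pre-context signatures of this era; the kubeconfig path and output directory come from the script, everything else is illustrative.

    package example

    import (
    	"fmt"
    	"io/ioutil"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // dumpNodeMetrics fetches the kubelet metrics of every node through the
    // API server proxy and writes one <node>.metrics file per node.
    func dumpNodeMetrics(kubeconfig, outDir string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		raw, err := client.CoreV1().RESTClient().Get().
    			Resource("nodes").Name(n.Name).
    			SubResource("proxy").Suffix("metrics").DoRaw()
    		if err != nil {
    			return err
    		}
    		path := fmt.Sprintf("%s/%s.metrics", outDir, n.Name)
    		if err := ioutil.WriteFile(path, raw, 0644); err != nil {
    			return err
    		}
    	}
    	return nil
    }
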
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash /tmp/hudson3005256664504577730.sh
~/jobs/zz_origin_gce_image/workspace ~/jobs/zz_origin_gce_image/workspace
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [Gathering Facts] *********************************************************
Friday 04 August 2017  04:01:55 +0000 (0:00:00.067)       0:00:00.067 ********* 
ok: [localhost]

TASK [deprovision : Templatize de-provision script] ****************************
Friday 04 August 2017  04:01:59 +0000 (0:00:03.782)       0:00:03.850 ********* 
changed: [localhost]

TASK [deprovision : De-provision GCE resources] ********************************
Friday 04 August 2017  04:01:59 +0000 (0:00:00.524)       0:00:04.374 ********* 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=2    unreachable=0    failed=0   

Friday 04 August 2017  04:08:41 +0000 (0:06:41.857)       0:06:46.232 ********* 
=============================================================================== 
deprovision : De-provision GCE resources ------------------------------ 401.86s
Gathering Facts --------------------------------------------------------- 3.78s
deprovision : Templatize de-provision script ---------------------------- 0.52s
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg353-ig-m-l9ds.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg353-ig-n-98dm.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg353-ig-n-h1xq.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg353-ig-n-m1r7.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/master.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/conformance
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/conformance/volumes
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/env
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/env/volumes
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/events.txt
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs/containers
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs/scripts.log
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/openshift.local.home
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/env
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/env/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/env/logs
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/env/logs/scripts.log
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/env/openshift.local.home

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "msg": "Tags {'Name': 'terminate'} created for resource i-038b29f75e1c62fb7."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "instance_ids": [
        "i-038b29f75e1c62fb7"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-04323baa8cd782357"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-097536af58bdad466"
                }
            }, 
            "dns_name": "ec2-34-230-63-101.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-038b29f75e1c62fb7", 
            "image_id": "ami-084b6b73", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2017-08-04T02:56:48.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-6-66.ec2.internal", 
            "private_ip": "172.18.6.66", 
            "public_dns_name": "ec2-34-230-63-101.compute-1.amazonaws.com", 
            "public_ip": "34.230.63.101", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:21
changed: [localhost] => {
    "changed": true, 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.6.66.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] *******************

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=7    changed=4    unreachable=0    failed=0   

~/jobs/zz_origin_gce_image/workspace
Recording test results
Archiving artifacts
Finished: SUCCESS