Result: FAILURE
Tests: 1 failed / 333 succeeded
Started: 2019-01-11 23:30
Elapsed: 30m16s
Builder: gke-prow-containerd-pool-99179761-p72z
Refs: release-1.11:c6e60c04, 72579:bf133295, 72627:cba5ff0d
pod: c27edff1-15f8-11e9-a282-0a580a6c019f
infra-commit: 2a90eab87
repo: k8s.io/kubernetes
repo-commit: dec49ccd97289a63615fbbf4b468989a36b2988d
repos: {u'k8s.io/kubernetes': u'release-1.11:c6e60c047d0313bfc1e95efd9c6b989dcad05cd7,72579:bf133295c8a9f795c2d046513795466bf86f5f05,72627:cba5ff0de3516b1d71928bfbf1fbda50e5280f2e'}

Test Failures


k8s.io/kubernetes/test/integration/client TestAtomicPut 3.52s

go test -v k8s.io/kubernetes/test/integration/client -run TestAtomicPut$
I0111 23:50:08.148944  119996 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 23:50:08.149050  119996 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0111 23:50:08.149070  119996 master.go:234] Using reconciler: 
W0111 23:50:08.314713  119996 genericapiserver.go:319] Skipping API batch/v2alpha1 because it has no resources.
W0111 23:50:08.331868  119996 genericapiserver.go:319] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 23:50:08.333031  119996 genericapiserver.go:319] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 23:50:08.334876  119996 genericapiserver.go:319] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 23:50:08.352665  119996 genericapiserver.go:319] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
E0111 23:50:08.374352  119996 controller.go:143] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
E0111 23:50:08.381335  119996 controller.go:192] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0111 23:50:09.361809  119996 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 23:50:09.365482  119996 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 23:50:09.365512  119996 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 23:50:09.375675  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 23:50:09.379673  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 23:50:09.383595  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 23:50:09.389616  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 23:50:09.394590  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 23:50:09.399191  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 23:50:09.402980  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 23:50:09.407870  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 23:50:09.413173  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 23:50:09.417907  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 23:50:09.422150  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 23:50:09.433267  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 23:50:09.437508  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 23:50:09.441845  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 23:50:09.447881  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 23:50:09.452180  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 23:50:09.455545  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 23:50:09.462047  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:50:09.472616  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:50:09.476689  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 23:50:09.480869  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 23:50:09.485458  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 23:50:09.488722  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 23:50:09.492799  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:50:09.495656  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 23:50:09.504512  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 23:50:09.515225  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:50:09.521235  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:50:09.524993  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:50:09.528287  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:50:09.533026  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:50:09.539770  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:50:09.546311  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:50:09.550976  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:50:09.557392  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:50:09.561330  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:50:09.567472  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:50:09.572516  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:50:09.575866  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:50:09.581286  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:50:09.588300  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:50:09.592120  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:50:09.599990  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:50:09.607627  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 23:50:09.611380  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:50:09.615504  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:50:09.622373  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:50:09.629565  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:50:09.650947  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:50:09.658964  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:50:09.662919  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:50:09.669190  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:50:09.675274  119996 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:50:09.686536  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 23:50:09.718769  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 23:50:09.761270  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 23:50:09.798520  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 23:50:09.838297  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:50:09.899021  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 23:50:09.917830  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:50:09.959017  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:50:09.999035  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 23:50:10.037523  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:50:10.077723  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:50:10.119771  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:50:10.157988  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:50:10.198071  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:50:10.241113  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:50:10.277328  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:50:10.317312  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:50:10.357916  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:50:10.398232  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:50:10.437809  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:50:10.478168  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:50:10.517499  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:50:10.558059  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:50:10.598355  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:50:10.641026  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:50:10.680033  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:50:10.718591  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 23:50:10.770306  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:50:10.797917  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:50:10.840440  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:50:10.882556  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:50:10.918491  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:50:10.967067  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:50:11.001160  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:50:11.049521  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:50:11.077643  119996 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:50:11.125035  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 23:50:11.175900  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:50:11.197855  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:50:11.238397  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:50:11.280413  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:50:11.318469  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:50:11.358391  119996 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 23:50:11.399166  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:50:11.437756  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:50:11.477994  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:50:11.519377  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:50:11.558242  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:50:11.599421  119996 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
client_test.go:160: Failed creating atomicRC: 0-length response
				from junit_cae8d27844a37937152775ec7fb068d1755ac188_20190111-234815.xml
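
The assertion that fails is at client_test.go:160: the test could not even create its ReplicationController (a "0-length response" suggests the connection to the freshly started test apiserver returned no body). For orientation only, below is a minimal sketch of the optimistic-concurrency pattern a test named TestAtomicPut exercises: update an object via GET-mutate-PUT and retry on resourceVersion conflicts. This is an assumption about the test's intent, not the integration test itself; the clientset construction, namespace "default", and object name "atomicrc" are hypothetical, and the context-free Get/Update signatures match client-go of the release-1.11 era.

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	// Hypothetical client setup; the integration test wires up its own apiserver.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	rcs := clientset.CoreV1().ReplicationControllers("default")

    	// Retry on HTTP 409 Conflict: re-GET the latest object, reapply the
    	// mutation, and PUT again, so concurrent writers each land exactly once.
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		rc, getErr := rcs.Get("atomicrc", metav1.GetOptions{})
    		if getErr != nil {
    			return getErr
    		}
    		if rc.Labels == nil {
    			rc.Labels = map[string]string{}
    		}
    		rc.Labels["updated"] = "true"
    		_, updateErr := rcs.Update(rc) // returns Conflict if resourceVersion is stale
    		return updateErr
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("update applied atomically")
    }
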



Passed tests: 333 (elided)

Skipped tests: 4 (elided)

Error lines from build-log.txt

... skipping 10 lines ...
I0111 23:30:33.249] process 210 exited with code 0 after 0.0m
I0111 23:30:33.250] Call:  gcloud config get-value account
I0111 23:30:33.662] process 222 exited with code 0 after 0.0m
I0111 23:30:33.663] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 23:30:33.663] Call:  kubectl get -oyaml pods/c27edff1-15f8-11e9-a282-0a580a6c019f
W0111 23:30:33.785] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 23:30:33.789] Command failed
I0111 23:30:33.790] process 234 exited with code 1 after 0.0m
E0111 23:30:33.790] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/c27edff1-15f8-11e9-a282-0a580a6c019f']' returned non-zero exit status 1
I0111 23:30:33.790] Root: /workspace
I0111 23:30:33.791] cd to /workspace
I0111 23:30:33.791] Checkout: /workspace/k8s.io/kubernetes release-1.11:c6e60c047d0313bfc1e95efd9c6b989dcad05cd7,72579:bf133295c8a9f795c2d046513795466bf86f5f05,72627:cba5ff0de3516b1d71928bfbf1fbda50e5280f2e to /workspace/k8s.io/kubernetes
I0111 23:30:33.791] Call:  git init k8s.io/kubernetes
... skipping 493 lines ...
W0111 23:41:28.221] I0111 23:41:28.221050   72790 controllermanager.go:479] Started "statefulset"
W0111 23:41:28.221] I0111 23:41:28.221147   72790 stateful_set.go:151] Starting stateful set controller
W0111 23:41:28.221] I0111 23:41:28.221166   72790 controller_utils.go:1025] Waiting for caches to sync for stateful set controller
W0111 23:41:28.221] I0111 23:41:28.221449   72790 controllermanager.go:479] Started "cronjob"
W0111 23:41:28.222] W0111 23:41:28.221472   72790 controllermanager.go:476] Skipping "nodeipam"
W0111 23:41:28.222] I0111 23:41:28.221697   72790 cronjob_controller.go:94] Starting CronJob Manager
W0111 23:41:28.222] E0111 23:41:28.221966   72790 core.go:72] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 23:41:28.222] W0111 23:41:28.221988   72790 controllermanager.go:476] Skipping "service"
W0111 23:41:28.222] I0111 23:41:28.222388   72790 controllermanager.go:479] Started "ttl"
W0111 23:41:28.223] W0111 23:41:28.222407   72790 controllermanager.go:463] "bootstrapsigner" is disabled
W0111 23:41:28.223] I0111 23:41:28.223001   72790 ttl_controller.go:116] Starting TTL controller
W0111 23:41:28.223] I0111 23:41:28.223183   72790 taint_manager.go:184] Sending events to api server.
W0111 23:41:28.223] I0111 23:41:28.223304   72790 controllermanager.go:479] Started "nodelifecycle"
... skipping 39 lines ...
W0111 23:41:28.236] I0111 23:41:28.233922   72790 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps statefulsets}
W0111 23:41:28.236] I0111 23:41:28.233960   72790 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {rbac.authorization.k8s.io roles}
W0111 23:41:28.236] I0111 23:41:28.233982   72790 controllermanager.go:479] Started "resourcequota"
W0111 23:41:28.237] I0111 23:41:28.234069   72790 resource_quota_controller.go:278] Starting resource quota controller
W0111 23:41:28.237] I0111 23:41:28.234108   72790 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
W0111 23:41:28.237] I0111 23:41:28.234133   72790 resource_quota_monitor.go:301] QuotaMonitor running
W0111 23:41:28.237] W0111 23:41:28.236497   72790 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W0111 23:41:28.239] I0111 23:41:28.239242   72790 controllermanager.go:479] Started "garbagecollector"
W0111 23:41:28.239] I0111 23:41:28.239300   72790 garbagecollector.go:133] Starting garbage collector controller
W0111 23:41:28.240] I0111 23:41:28.239320   72790 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0111 23:41:28.240] I0111 23:41:28.239353   72790 graph_builder.go:308] GraphBuilder running
W0111 23:41:28.240] I0111 23:41:28.239866   72790 controllermanager.go:479] Started "csrcleaner"
W0111 23:41:28.240] I0111 23:41:28.239914   72790 cleaner.go:81] Starting CSR cleaner controller
... skipping 39 lines ...
W0111 23:41:28.324] I0111 23:41:28.323786   72790 taint_manager.go:205] Starting NoExecuteTaintManager
W0111 23:41:28.324] I0111 23:41:28.324048   72790 controller_utils.go:1032] Caches are synced for TTL controller
W0111 23:41:28.324] I0111 23:41:28.324160   72790 controller_utils.go:1032] Caches are synced for ClusterRoleAggregator controller
W0111 23:41:28.325] I0111 23:41:28.324873   72790 controller_utils.go:1032] Caches are synced for PVC protection controller
W0111 23:41:28.325] I0111 23:41:28.325241   72790 controller_utils.go:1032] Caches are synced for ReplicationController controller
W0111 23:41:28.326] I0111 23:41:28.326664   72790 controller_utils.go:1032] Caches are synced for PV protection controller
W0111 23:41:28.334] E0111 23:41:28.334197   72790 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0111 23:41:28.335] E0111 23:41:28.334202   72790 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0111 23:41:28.342] I0111 23:41:28.341517   72790 controller_utils.go:1032] Caches are synced for expand controller
W0111 23:41:28.343] I0111 23:41:28.342077   72790 controller_utils.go:1032] Caches are synced for GC controller
W0111 23:41:28.343] I0111 23:41:28.342336   72790 controller_utils.go:1032] Caches are synced for daemon sets controller
W0111 23:41:28.343] I0111 23:41:28.342772   72790 controller_utils.go:1032] Caches are synced for certificate controller
W0111 23:41:28.344] I0111 23:41:28.344584   72790 controller_utils.go:1032] Caches are synced for HPA controller
W0111 23:41:28.347] I0111 23:41:28.346766   72790 controller_utils.go:1032] Caches are synced for endpoint controller
... skipping 8 lines ...
W0111 23:41:28.541] I0111 23:41:28.540993   72790 controller_utils.go:1032] Caches are synced for persistent volume controller
I0111 23:41:28.691] +++ [0111 23:41:28] On try 3, controller-manager: ok
I0111 23:41:28.892] node/127.0.0.1 created
I0111 23:41:28.903] +++ [0111 23:41:28] Checking kubectl version
I0111 23:41:28.991] Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-beta.0.38+dec49ccd97289a", GitCommit:"dec49ccd97289a63615fbbf4b468989a36b2988d", GitTreeState:"clean", BuildDate:"2019-01-11T23:38:21Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
I0111 23:41:28.991] Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-beta.0.38+dec49ccd97289a", GitCommit:"dec49ccd97289a63615fbbf4b468989a36b2988d", GitTreeState:"clean", BuildDate:"2019-01-11T23:38:55Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
W0111 23:41:29.092] W0111 23:41:28.893856   72790 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 23:41:29.370] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
I0111 23:41:29.470] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0111 23:41:29.471] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   54s
I0111 23:41:29.474] Recording: run_kubectl_version_tests
I0111 23:41:29.475] Running command: run_kubectl_version_tests
I0111 23:41:29.495] 
... skipping 80 lines ...
I0111 23:41:35.125] +++ working dir: /go/src/k8s.io/kubernetes
I0111 23:41:35.127] +++ command: run_RESTMapper_evaluation_tests
I0111 23:41:35.138] +++ [0111 23:41:35] Creating namespace namespace-1547250095-20168
I0111 23:41:35.220] namespace/namespace-1547250095-20168 created
I0111 23:41:35.303] Context "test" modified.
I0111 23:41:35.309] +++ [0111 23:41:35] Testing RESTMapper
I0111 23:41:35.433] +++ [0111 23:41:35] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 23:41:35.446] +++ exit code: 0
I0111 23:41:37.152] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 23:41:37.152] bindings                                                                      true         Binding
I0111 23:41:37.153] componentstatuses                 cs                                          false        ComponentStatus
I0111 23:41:37.153] configmaps                        cm                                          true         ConfigMap
I0111 23:41:37.153] endpoints                         ep                                          true         Endpoints
... skipping 583 lines ...
I0111 23:41:58.364] test-cmd-util.sh:444: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:41:58.555] (Btest-cmd-util.sh:448: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:41:58.659] (Btest-cmd-util.sh:452: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:41:58.847] (Btest-cmd-util.sh:456: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:41:58.952] (Btest-cmd-util.sh:460: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:41:59.049] (Bpod "valid-pod" force deleted
W0111 23:41:59.150] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 23:41:59.150] error: setting 'all' parameter but found a non empty selector. 
W0111 23:41:59.151] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 23:41:59.251] test-cmd-util.sh:464: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0111 23:41:59.267] (Btest-cmd-util.sh:469: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0111 23:41:59.350] (Bnamespace/test-kubectl-describe-pod created
I0111 23:41:59.454] test-cmd-util.sh:473: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0111 23:41:59.560] (Btest-cmd-util.sh:477: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0111 23:42:00.658] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0111 23:42:00.815] test-cmd-util.sh:506: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 23:42:00.902] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0111 23:42:01.020] test-cmd-util.sh:510: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 23:42:01.214] (Btest-cmd-util.sh:516: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:01.470] (Bpod/env-test-pod created
W0111 23:42:01.571] error: min-available and max-unavailable cannot be both specified
I0111 23:42:01.686] test-cmd-util.sh:519: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 23:42:01.686] Name:               env-test-pod
I0111 23:42:01.687] Namespace:          test-kubectl-describe-pod
I0111 23:42:01.687] Priority:           0
I0111 23:42:01.687] PriorityClassName:  <none>
I0111 23:42:01.687] Node:               <none>
... skipping 161 lines ...
I0111 23:42:19.114] (Bpod/valid-pod patched
I0111 23:42:19.249] test-cmd-util.sh:721: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 23:42:19.356] (Bpod/valid-pod patched
I0111 23:42:19.487] test-cmd-util.sh:726: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 23:42:19.717] (Bpod/valid-pod patched
I0111 23:42:19.853] test-cmd-util.sh:742: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 23:42:20.096] (B+++ [0111 23:42:20] "kubectl patch with resourceVersion 491" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0111 23:42:20.498] pod "valid-pod" deleted
I0111 23:42:20.512] pod/valid-pod replaced
I0111 23:42:20.643] test-cmd-util.sh:766: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 23:42:20.838] (BSuccessful
I0111 23:42:20.838] message:error: --grace-period must have --force specified
I0111 23:42:20.839] has:\-\-grace-period must have \-\-force specified
I0111 23:42:21.039] Successful
I0111 23:42:21.039] message:error: --timeout must have --force specified
I0111 23:42:21.040] has:\-\-timeout must have \-\-force specified
I0111 23:42:21.228] node/node-v1-test created
W0111 23:42:21.329] W0111 23:42:21.228215   72790 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 23:42:21.437] node/node-v1-test replaced
I0111 23:42:21.567] test-cmd-util.sh:803: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 23:42:21.689] (Bnode "node-v1-test" deleted
I0111 23:42:21.824] test-cmd-util.sh:810: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 23:42:22.178] (Btest-cmd-util.sh:813: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0111 23:42:26.543] (Btest-cmd-util.sh:826: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 17 lines ...
W0111 23:42:26.836] Edit cancelled, no changes made.
W0111 23:42:26.836] Edit cancelled, no changes made.
I0111 23:42:26.937] test-cmd-util.sh:836: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0111 23:42:27.047] (Btest-cmd-util.sh:840: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0111 23:42:27.153] (Btest-cmd-util.sh:844: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0111 23:42:27.248] (Bpod/valid-pod labeled
W0111 23:42:27.348] error: 'name' already has a value (valid-pod), and --overwrite is false
I0111 23:42:27.449] test-cmd-util.sh:848: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0111 23:42:27.470] (Btest-cmd-util.sh:852: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:42:27.578] (Bpod "valid-pod" force deleted
W0111 23:42:27.679] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 23:42:27.780] test-cmd-util.sh:856: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:27.780] (B+++ [0111 23:42:27] Creating namespace namespace-1547250147-15949
... skipping 81 lines ...
I0111 23:42:36.798] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 23:42:36.801] +++ working dir: /go/src/k8s.io/kubernetes
I0111 23:42:36.803] +++ command: run_kubectl_create_error_tests
I0111 23:42:36.814] +++ [0111 23:42:36] Creating namespace namespace-1547250156-27152
I0111 23:42:36.909] namespace/namespace-1547250156-27152 created
I0111 23:42:37.005] Context "test" modified.
I0111 23:42:37.011] +++ [0111 23:42:37] Testing kubectl create with error
W0111 23:42:37.112] Error: required flag(s) "filename" not set
W0111 23:42:37.112] 
W0111 23:42:37.112] 
W0111 23:42:37.112] Examples:
W0111 23:42:37.113]   # Create a pod using the data in pod.json.
W0111 23:42:37.113]   kubectl create -f ./pod.json
W0111 23:42:37.113]   
... skipping 38 lines ...
W0111 23:42:37.118]   kubectl create -f FILENAME [options]
W0111 23:42:37.118] 
W0111 23:42:37.118] Use "kubectl <command> --help" for more information about a given command.
W0111 23:42:37.119] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 23:42:37.119] 
W0111 23:42:37.119] required flag(s) "filename" not set
I0111 23:42:37.301] +++ [0111 23:42:37] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
I0111 23:42:37.543] +++ exit code: 0
I0111 23:42:37.597] Recording: run_kubectl_apply_tests
I0111 23:42:37.597] Running command: run_kubectl_apply_tests
I0111 23:42:37.618] 
I0111 23:42:37.620] +++ Running case: test-cmd.run_kubectl_apply_tests 
I0111 23:42:37.622] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 20 lines ...
W0111 23:42:40.362] I0111 23:42:39.653730   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250157-18542", Name:"test-deployment-retainkeys-5f667997fd", UID:"94f27846-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-5f667997fd-2sq6k
I0111 23:42:40.462] test-cmd-util.sh:995: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:40.645] (Bpod/selector-test-pod created
I0111 23:42:40.795] test-cmd-util.sh:999: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 23:42:40.935] (BSuccessful
I0111 23:42:40.936] message:No resources found.
I0111 23:42:40.936] Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 23:42:40.937] has:pods "selector-test-pod-dont-apply" not found
I0111 23:42:41.063] pod "selector-test-pod" deleted
I0111 23:42:41.172] test-cmd-util.sh:1009: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:41.370] (Bpod/a created
I0111 23:42:42.876] test-cmd-util.sh:1014: Successful get pods a {{.metadata.name}}: a
I0111 23:42:42.969] (BSuccessful
I0111 23:42:42.970] message:No resources found.
I0111 23:42:42.970] Error from server (NotFound): pods "b" not found
I0111 23:42:42.970] has:pods "b" not found
I0111 23:42:43.148] pod/b created
I0111 23:42:43.159] pod/a pruned
I0111 23:42:44.858] test-cmd-util.sh:1022: Successful get pods b {{.metadata.name}}: b
I0111 23:42:44.953] (BSuccessful
I0111 23:42:44.953] message:No resources found.
I0111 23:42:44.953] Error from server (NotFound): pods "a" not found
I0111 23:42:44.954] has:pods "a" not found
I0111 23:42:45.037] pod "b" deleted
I0111 23:42:45.136] test-cmd-util.sh:1032: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:45.287] (Bpod/a created
I0111 23:42:45.391] test-cmd-util.sh:1037: Successful get pods a {{.metadata.name}}: a
I0111 23:42:45.482] (BSuccessful
I0111 23:42:45.482] message:No resources found.
I0111 23:42:45.482] Error from server (NotFound): pods "b" not found
I0111 23:42:45.482] has:pods "b" not found
I0111 23:42:45.640] pod/b created
I0111 23:42:45.744] test-cmd-util.sh:1045: Successful get pods a {{.metadata.name}}: a
I0111 23:42:45.839] (Btest-cmd-util.sh:1046: Successful get pods b {{.metadata.name}}: b
I0111 23:42:45.923] (Bpod "a" deleted
I0111 23:42:45.927] pod "b" deleted
I0111 23:42:46.092] Successful
I0111 23:42:46.092] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector.
I0111 23:42:46.092] has:all resources selected for prune without explicitly passing --all
I0111 23:42:46.246] pod/a created
I0111 23:42:46.252] pod/b created
I0111 23:42:46.278] service/prune-svc created
I0111 23:42:47.786] test-cmd-util.sh:1058: Successful get pods a {{.metadata.name}}: a
I0111 23:42:47.886] (Btest-cmd-util.sh:1059: Successful get pods b {{.metadata.name}}: b
... skipping 125 lines ...
I0111 23:42:59.063] +++ [0111 23:42:59] Testing kubectl create filter
I0111 23:42:59.170] test-cmd-util.sh:1101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:42:59.349] (Bpod/selector-test-pod created
I0111 23:42:59.467] test-cmd-util.sh:1105: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 23:42:59.568] (BSuccessful
I0111 23:42:59.568] message:No resources found.
I0111 23:42:59.568] Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 23:42:59.569] has:pods "selector-test-pod-dont-apply" not found
I0111 23:42:59.669] pod "selector-test-pod" deleted
I0111 23:42:59.691] +++ exit code: 0
I0111 23:42:59.737] Recording: run_kubectl_apply_deployments_tests
I0111 23:42:59.738] Running command: run_kubectl_apply_deployments_tests
I0111 23:42:59.761] 
... skipping 26 lines ...
I0111 23:43:01.804] (Btest-cmd-util.sh:1144: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0111 23:43:01.905] (Bdeployment.extensions "my-depl" deleted
I0111 23:43:01.915] replicaset.extensions "my-depl-574c668485" deleted
I0111 23:43:01.921] replicaset.extensions "my-depl-844db54fcf" deleted
I0111 23:43:01.929] pod "my-depl-574c668485-njldm" deleted
I0111 23:43:01.933] pod "my-depl-844db54fcf-ghqhl" deleted
W0111 23:43:02.034] E0111 23:43:01.929613   72790 replica_set.go:450] Sync "namespace-1547250179-3093/my-depl-844db54fcf" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-844db54fcf": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547250179-3093/my-depl-844db54fcf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a169b2e0-15fa-11e9-b157-0242ac110002, UID in object meta: 
I0111 23:43:02.134] test-cmd-util.sh:1150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:02.148] (Btest-cmd-util.sh:1151: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:02.251] (Btest-cmd-util.sh:1152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:02.366] (Btest-cmd-util.sh:1156: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:02.535] (Bdeployment.extensions/nginx created
W0111 23:43:02.636] I0111 23:43:02.538479   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250179-3093", Name:"nginx", UID:"a2970eea-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-74d9fbb98 to 3
W0111 23:43:02.636] I0111 23:43:02.541499   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-74d9fbb98", UID:"a29796d7-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-hj4lx
W0111 23:43:02.636] I0111 23:43:02.544595   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-74d9fbb98", UID:"a29796d7-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-hnqcg
W0111 23:43:02.637] I0111 23:43:02.545706   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-74d9fbb98", UID:"a29796d7-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-cpn88
I0111 23:43:02.737] test-cmd-util.sh:1160: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 23:43:06.861] (BSuccessful
I0111 23:43:06.861] message:Error from server (Conflict): error when applying patch:
I0111 23:43:06.862] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547250179-3093\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 23:43:06.862] to:
I0111 23:43:06.862] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 23:43:06.863] Name: "nginx", Namespace: "namespace-1547250179-3093"
I0111 23:43:06.864] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["namespace":"namespace-1547250179-3093" "uid":"a2970eea-15fa-11e9-b157-0242ac110002" "creationTimestamp":"2019-01-11T23:43:02Z" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547250179-3093\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547250179-3093/deployments/nginx" "resourceVersion":"689" "generation":'\x01'] "spec":map["strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258' "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["schedulerName":"default-scheduler" "containers":[map["resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[]]]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastTransitionTime":"2019-01-11T23:43:02Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-01-11T23:43:02Z"] map["status":"True" "lastUpdateTime":"2019-01-11T23:43:02Z" "lastTransitionTime":"2019-01-11T23:43:02Z" "reason":"ReplicaSetUpdated" "message":"ReplicaSet \"nginx-74d9fbb98\" is progressing." "type":"Progressing"]]]]}
I0111 23:43:06.865] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 23:43:06.865] has:Error from server (Conflict)
W0111 23:43:06.965] I0111 23:43:05.984024   72790 horizontal.go:366] Horizontal Pod Autoscaler has been deleted namespace-1547250153-22761/frontend
I0111 23:43:12.097] deployment.extensions/nginx configured
W0111 23:43:12.198] I0111 23:43:12.104498   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250179-3093", Name:"nginx", UID:"a84a11bc-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-d7576cc9 to 3
W0111 23:43:12.199] I0111 23:43:12.107463   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-d7576cc9", UID:"a84af316-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-d7576cc9-fpmsl
W0111 23:43:12.199] I0111 23:43:12.111701   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-d7576cc9", UID:"a84af316-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-d7576cc9-m7pkn
W0111 23:43:12.200] I0111 23:43:12.112407   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250179-3093", Name:"nginx-d7576cc9", UID:"a84af316-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-d7576cc9-8p2qv
... skipping 148 lines ...
I0111 23:43:20.106] namespace/namespace-1547250199-21881 created
I0111 23:43:20.204] Context "test" modified.
I0111 23:43:20.212] +++ [0111 23:43:20] Testing kubectl get
I0111 23:43:20.349] test-cmd-util.sh:1502: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:20.479] (BSuccessful
I0111 23:43:20.480] message:No resources found.
I0111 23:43:20.480] Error from server (NotFound): pods "abc" not found
I0111 23:43:20.480] has:pods "abc" not found
I0111 23:43:20.590] test-cmd-util.sh:1510: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:20.717] (BSuccessful
I0111 23:43:20.718] message:Error from server (NotFound): pods "abc" not found
I0111 23:43:20.718] has:pods "abc" not found
I0111 23:43:20.863] test-cmd-util.sh:1518: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:21.003] (BSuccessful
I0111 23:43:21.003] message:{
I0111 23:43:21.003]     "apiVersion": "v1",
I0111 23:43:21.003]     "items": [],
... skipping 33 lines ...
I0111 23:43:21.904] has not:No resources found
I0111 23:43:22.023] Successful
I0111 23:43:22.024] message:No resources found.
I0111 23:43:22.024] has:No resources found
I0111 23:43:22.160] test-cmd-util.sh:1562: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:22.287] (BSuccessful
I0111 23:43:22.288] message:Error from server (NotFound): pods "abc" not found
I0111 23:43:22.288] has:pods "abc" not found
I0111 23:43:22.291] FAIL!
I0111 23:43:22.291] message:Error from server (NotFound): pods "abc" not found
I0111 23:43:22.291] has not:List
I0111 23:43:22.292] 1568 /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh
I0111 23:43:22.483] Successful
I0111 23:43:22.483] message:I0111 23:43:22.404580   84505 loader.go:359] Config loaded from file /tmp/tmp.FlZMU2WmR4/.kube/config
I0111 23:43:22.483] I0111 23:43:22.405333   84505 loader.go:359] Config loaded from file /tmp/tmp.FlZMU2WmR4/.kube/config
I0111 23:43:22.484] I0111 23:43:22.406832   84505 round_trippers.go:405] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 991 lines ...
I0111 23:43:26.440]     }
I0111 23:43:26.441] }
I0111 23:43:26.578] test-cmd-util.sh:1621: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:43:26.962] (B<no value>Successful
I0111 23:43:26.963] message:valid-pod:
I0111 23:43:26.963] has:valid-pod:
W0111 23:43:27.098] error: error executing jsonpath "{.missing}": missing is not found
I0111 23:43:27.199] Successful
I0111 23:43:27.199] message:Error executing template: missing is not found. Printing more information for debugging the template:
I0111 23:43:27.199] 	template was:
I0111 23:43:27.200] 		{.missing}
I0111 23:43:27.200] 	object given to jsonpath engine was:
I0111 23:43:27.201] 		map[string]interface {}{"status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"788", "creationTimestamp":"2019-01-11T23:43:26Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547250205-31620", "selfLink":"/api/v1/namespaces/namespace-1547250205-31620/pods/valid-pod", "uid":"b0c21908-15fa-11e9-b157-0242ac110002"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"memory":"512Mi", "cpu":"1"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0}}
I0111 23:43:27.201] has:missing is not found
I0111 23:43:27.249] Successful
I0111 23:43:27.251] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 23:43:27.251] 	template was:
I0111 23:43:27.251] 		{{.missing}}
I0111 23:43:27.252] 	raw data was:
I0111 23:43:27.253] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T23:43:26Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547250205-31620","resourceVersion":"788","selfLink":"/api/v1/namespaces/namespace-1547250205-31620/pods/valid-pod","uid":"b0c21908-15fa-11e9-b157-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 23:43:27.253] 	object given to template engine was:
I0111 23:43:27.254] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-01-11T23:43:26Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547250205-31620 resourceVersion:788 selfLink:/api/v1/namespaces/namespace-1547250205-31620/pods/valid-pod uid:b0c21908-15fa-11e9-b157-0242ac110002] spec:map[containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[memory:512Mi cpu:1]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0111 23:43:27.254] has:map has no entry for key "missing"
W0111 23:43:27.355] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0111 23:43:28.460] E0111 23:43:28.459687   84840 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 23:43:28.561] Successful
I0111 23:43:28.561] message:NAME        READY     STATUS    RESTARTS   AGE
I0111 23:43:28.561] valid-pod   0/1       Pending   0          1s
I0111 23:43:28.562] has:STATUS
I0111 23:43:28.562] Successful
... skipping 78 lines ...
I0111 23:43:30.834]   terminationGracePeriodSeconds: 30
I0111 23:43:30.834] status:
I0111 23:43:30.834]   phase: Pending
I0111 23:43:30.834]   qosClass: Guaranteed
I0111 23:43:30.835] has:name: valid-pod
I0111 23:43:30.878] Successful
I0111 23:43:30.878] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 23:43:30.879] has:"invalid-pod" not found
I0111 23:43:31.012] pod "valid-pod" deleted
I0111 23:43:31.150] test-cmd-util.sh:1659: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:31.382] (Bpod/redis-master created
I0111 23:43:31.385] pod/valid-pod created
I0111 23:43:31.534] Successful
... skipping 237 lines ...
I0111 23:43:33.079] namespace-1547250199-21881   13s
I0111 23:43:33.079] namespace-1547250205-31620   8s
I0111 23:43:33.079] namespace-1547250211-25715   2s
I0111 23:43:33.079] has:application/json
W0111 23:43:33.318] I0111 23:43:33.317736   68571 controller.go:597] quota admission added evaluator for: {extensions daemonsets}
W0111 23:43:33.338] I0111 23:43:33.338215   68571 controller.go:597] quota admission added evaluator for: {apps controllerrevisions}
W0111 23:43:33.343] I0111 23:43:33.343041   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"804", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:43:33.344] I0111 23:43:33.343203   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"804", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:43:33.344] I0111 23:43:33.343273   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"804", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:43:33.350] I0111 23:43:33.349629   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"807", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:43:33.351] I0111 23:43:33.349930   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"807", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:43:33.351] I0111 23:43:33.350100   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250211-25715", Name:"bind", UID:"b4f07322-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"807", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
I0111 23:43:33.451] daemonset.extensions/bind created
I0111 23:43:33.468] test-cmd-util.sh:1404: Successful get ds {{range.items}}{{.metadata.name}}:{{end}}: bind:
I0111 23:43:33.720] (BSuccessful
I0111 23:43:33.721] message:NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
I0111 23:43:33.721] bind 1 0 0 0 0 <none>
I0111 23:43:33.721] has:NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
... skipping 32 lines ...
I0111 23:43:35.986] message:NAME
I0111 23:43:35.987] sample-role
I0111 23:43:35.987] has:NAME
I0111 23:43:35.988] sample-role
W0111 23:43:40.380] I0111 23:43:40.380214   68571 trace.go:76] Trace[752092072]: "Create /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions" (started: 2019-01-11 23:43:36.349248296 +0000 UTC m=+199.640797223) (total time: 4.03093199s):
W0111 23:43:40.381] Trace[752092072]: [4.003622002s] [4.000963847s] About to store object in database
W0111 23:43:40.386] E0111 23:43:40.385667   68571 autoregister_controller.go:190] v1.company.com failed with : apiservices.apiregistration.k8s.io "v1.company.com" already exists
I0111 23:43:40.486] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0111 23:43:40.496] test-cmd-util.sh:1472: Successful get customresourcedefinitions {{range.items}}{{.metadata.name}}:{{end}}: foos.company.com:
I0111 23:43:40.617] test-cmd-util.sh:1475: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:43:40.799] Successful
I0111 23:43:40.800] message:
I0111 23:43:40.800] has:
... skipping 13 lines ...
I0111 23:43:42.576] 
I0111 23:43:42.577] +++ Running case: test-cmd.run_create_secret_tests 
I0111 23:43:42.579] +++ working dir: /go/src/k8s.io/kubernetes
I0111 23:43:42.581] +++ command: run_create_secret_tests
I0111 23:43:42.677] Successful
I0111 23:43:42.677] message:No resources found.
I0111 23:43:42.678] Error from server (NotFound): secrets "mysecret" not found
I0111 23:43:42.678] has:secrets "mysecret" not found
I0111 23:43:42.852] Successful
I0111 23:43:42.852] message:No resources found.
I0111 23:43:42.852] Error from server (NotFound): secrets "mysecret" not found
I0111 23:43:42.852] has:secrets "mysecret" not found
I0111 23:43:42.853] Successful
I0111 23:43:42.854] message:user-specified
I0111 23:43:42.854] has:user-specified
I0111 23:43:42.935] Successful
I0111 23:43:43.017] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"bab7becb-15fa-11e9-b157-0242ac110002","resourceVersion":"877","creationTimestamp":"2019-01-11T23:43:43Z"}}
... skipping 119 lines ...
I0111 23:43:47.560] Successful
I0111 23:43:47.560] message:kind.mygroup.example.com/myobj
I0111 23:43:47.560] has:kind.mygroup.example.com/myobj
I0111 23:43:47.648] Successful
I0111 23:43:47.649] message:kind.mygroup.example.com/myobj
I0111 23:43:47.649] has:kind.mygroup.example.com/myobj
W0111 23:43:47.750] E0111 23:43:46.192851   68571 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W0111 23:43:47.750] I0111 23:43:47.465454   68571 controller.go:597] quota admission added evaluator for: {mygroup.example.com resources}
I0111 23:43:47.851] Successful
I0111 23:43:47.851] message:kind.mygroup.example.com/myobj
I0111 23:43:47.851] has:kind.mygroup.example.com/myobj
I0111 23:43:47.854] kind.mygroup.example.com "myobj" deleted
I0111 23:43:47.953] test-cmd-util.sh:2108: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 115 lines ...
I0111 23:43:50.180] foo.company.com/test patched
I0111 23:43:50.282] test-cmd-util.sh:2143: Successful get foos/test {{.patched}}: value1
I0111 23:43:50.375] foo.company.com/test patched
I0111 23:43:50.480] test-cmd-util.sh:2145: Successful get foos/test {{.patched}}: value2
I0111 23:43:50.576] foo.company.com/test patched
I0111 23:43:50.685] test-cmd-util.sh:2147: Successful get foos/test {{.patched}}: <no value>
I0111 23:43:50.866] +++ [0111 23:43:50] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0111 23:43:50.944] {
I0111 23:43:50.945]     "apiVersion": "company.com/v1",
I0111 23:43:50.945]     "kind": "Foo",
I0111 23:43:50.945]     "metadata": {
I0111 23:43:50.945]         "annotations": {
I0111 23:43:50.945]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 111 lines ...
I0111 23:43:52.546] bar.company.com "test" deleted
W0111 23:43:52.647] I0111 23:43:52.266346   68571 controller.go:597] quota admission added evaluator for: {company.com bars}
W0111 23:43:52.647] /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh: line 2201: 87285 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0111 23:43:52.647] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 87286 Killed                  while [ ${tries} -lt 10 ]; do
W0111 23:43:52.648]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0111 23:43:52.648] done
W0111 23:43:59.690] E0111 23:43:59.689265   72790 resource_quota_controller.go:460] failed to sync resource monitors: [couldn't start monitor for resource {"company.com" "v1" "foos"}: unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource {"company.com" "v1" "validfoos"}: unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource {"mygroup.example.com" "v1alpha1" "resources"}: unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource {"company.com" "v1" "bars"}: unable to monitor quota for resource "company.com/v1, Resource=bars"]
W0111 23:43:59.826] I0111 23:43:59.826232   72790 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0111 23:43:59.927] I0111 23:43:59.926527   72790 controller_utils.go:1032] Caches are synced for garbage collector controller
I0111 23:44:00.056] test-cmd-util.sh:2227: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:00.212] foo.company.com/test created
I0111 23:44:00.314] test-cmd-util.sh:2233: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I0111 23:44:00.407] test-cmd-util.sh:2236: Successful get foos/test {{.someField}}: field1
... skipping 58 lines ...
I0111 23:44:06.681] bar.company.com/test created
I0111 23:44:06.793] test-cmd-util.sh:2362: Successful get bars {{len .items}}: 1
I0111 23:44:06.886] namespace "non-native-resources" deleted
I0111 23:44:12.136] test-cmd-util.sh:2365: Successful get bars {{len .items}}: 0
I0111 23:44:12.328] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0111 23:44:12.429] No resources found.
W0111 23:44:12.429] Error from server (NotFound): namespaces "non-native-resources" not found
I0111 23:44:12.530] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0111 23:44:12.554] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 23:44:12.668] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0111 23:44:12.700] +++ exit code: 0
I0111 23:44:12.787] Recording: run_cmd_with_img_tests
I0111 23:44:12.787] Running command: run_cmd_with_img_tests
... skipping 9 lines ...
W0111 23:44:13.107] I0111 23:44:13.104539   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250252-1614", Name:"test1-7f54676899", UID:"cca672c0-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7f54676899-kghdt
I0111 23:44:13.207] Successful
I0111 23:44:13.208] message:deployment.apps/test1 created
I0111 23:44:13.208] has:deployment.apps/test1 created
I0111 23:44:13.210] deployment.extensions "test1" deleted
I0111 23:44:13.299] Successful
I0111 23:44:13.299] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 23:44:13.299] has:error: Invalid image name "InvalidImageName": invalid reference format
I0111 23:44:13.315] +++ exit code: 0
I0111 23:44:13.368] Recording: run_recursive_resources_tests
I0111 23:44:13.368] Running command: run_recursive_resources_tests
I0111 23:44:13.391] 
I0111 23:44:13.394] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 23:44:13.396] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 23:44:13.598] Context "test" modified.
I0111 23:44:13.709] test-cmd-util.sh:2385: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:14.032] test-cmd-util.sh:2389: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:14.035] Successful
I0111 23:44:14.035] message:pod/busybox0 created
I0111 23:44:14.035] pod/busybox1 created
I0111 23:44:14.035] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 23:44:14.035] has:error validating data: kind not set
I0111 23:44:14.146] test-cmd-util.sh:2394: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:14.349] test-cmd-util.sh:2402: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 23:44:14.351] Successful
I0111 23:44:14.352] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:14.352] has:Object 'Kind' is missing
I0111 23:44:14.464] test-cmd-util.sh:2409: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:14.756] test-cmd-util.sh:2413: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 23:44:14.758] Successful
I0111 23:44:14.759] message:pod/busybox0 replaced
I0111 23:44:14.759] pod/busybox1 replaced
I0111 23:44:14.759] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 23:44:14.759] has:error validating data: kind not set
I0111 23:44:14.863] test-cmd-util.sh:2418: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:14.973] Successful
I0111 23:44:14.973] message:Name:               busybox0
I0111 23:44:14.973] Namespace:          namespace-1547250253-6270
I0111 23:44:14.974] Priority:           0
I0111 23:44:14.974] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 23:44:14.989] has:Object 'Kind' is missing
I0111 23:44:15.089] test-cmd-util.sh:2428: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:15.298] test-cmd-util.sh:2432: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 23:44:15.300] Successful
I0111 23:44:15.301] message:pod/busybox0 annotated
I0111 23:44:15.301] pod/busybox1 annotated
I0111 23:44:15.301] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:15.301] has:Object 'Kind' is missing
I0111 23:44:15.408] test-cmd-util.sh:2437: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:15.701] test-cmd-util.sh:2441: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 23:44:15.704] Successful
I0111 23:44:15.704] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 23:44:15.704] pod/busybox0 configured
I0111 23:44:15.704] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 23:44:15.704] pod/busybox1 configured
I0111 23:44:15.704] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 23:44:15.705] has:error validating data: kind not set
I0111 23:44:15.811] test-cmd-util.sh:2447: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:15.988] deployment.extensions/nginx created
W0111 23:44:16.089] I0111 23:44:15.991880   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250253-6270", Name:"nginx", UID:"ce5efe61-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-794c6b99b4 to 3
W0111 23:44:16.089] I0111 23:44:15.996306   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx-794c6b99b4", UID:"ce5f9fc3-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-hjq6f
W0111 23:44:16.090] I0111 23:44:15.999883   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx-794c6b99b4", UID:"ce5f9fc3-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-9cfsp
W0111 23:44:16.090] I0111 23:44:16.000370   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx-794c6b99b4", UID:"ce5f9fc3-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-s7pr5
... skipping 42 lines ...
I0111 23:44:16.417] status: {}
I0111 23:44:16.417] has:apps/v1beta1
I0111 23:44:16.512] deployment.extensions "nginx" deleted
I0111 23:44:16.630] test-cmd-util.sh:2463: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:16.853] test-cmd-util.sh:2467: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:16.856] Successful
I0111 23:44:16.857] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:16.857] has:Object 'Kind' is missing
I0111 23:44:16.971] test-cmd-util.sh:2472: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:17.073] Successful
I0111 23:44:17.073] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.073] has:busybox0:busybox1:
I0111 23:44:17.074] Successful
I0111 23:44:17.075] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.075] has:Object 'Kind' is missing
W0111 23:44:17.176] I0111 23:44:17.022269   72790 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0111 23:44:17.276] test-cmd-util.sh:2481: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:17.298] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.430] test-cmd-util.sh:2486: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 23:44:17.433] Successful
I0111 23:44:17.433] message:pod/busybox0 labeled
I0111 23:44:17.433] pod/busybox1 labeled
I0111 23:44:17.433] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.434] has:Object 'Kind' is missing
I0111 23:44:17.552] test-cmd-util.sh:2491: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:17.663] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.777] test-cmd-util.sh:2496: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 23:44:17.779] Successful
I0111 23:44:17.779] message:pod/busybox0 patched
I0111 23:44:17.779] pod/busybox1 patched
I0111 23:44:17.780] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:17.780] has:Object 'Kind' is missing
I0111 23:44:17.895] test-cmd-util.sh:2501: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:18.112] test-cmd-util.sh:2505: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:18.114] Successful
I0111 23:44:18.115] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 23:44:18.115] pod "busybox0" force deleted
I0111 23:44:18.115] pod "busybox1" force deleted
I0111 23:44:18.115] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 23:44:18.115] has:Object 'Kind' is missing
I0111 23:44:18.233] test-cmd-util.sh:2510: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:18.420] replicationcontroller/busybox0 created
I0111 23:44:18.439] replicationcontroller/busybox1 created
W0111 23:44:18.540] I0111 23:44:18.423670   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox0", UID:"cfd21e76-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-j64qf
W0111 23:44:18.540] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 23:44:18.541] I0111 23:44:18.442122   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox1", UID:"cfd51822-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fqm55
I0111 23:44:18.641] test-cmd-util.sh:2514: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:18.690] test-cmd-util.sh:2519: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:18.800] test-cmd-util.sh:2520: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 23:44:18.903] test-cmd-util.sh:2521: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 23:44:19.127] test-cmd-util.sh:2526: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 23:44:19.263] test-cmd-util.sh:2527: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 23:44:19.265] Successful
I0111 23:44:19.265] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 23:44:19.265] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 23:44:19.266] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:19.266] has:Object 'Kind' is missing
I0111 23:44:19.383] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 23:44:19.507] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 23:44:19.621] test-cmd-util.sh:2535: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:19.744] test-cmd-util.sh:2536: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 23:44:19.857] test-cmd-util.sh:2537: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 23:44:20.103] test-cmd-util.sh:2541: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 23:44:20.219] test-cmd-util.sh:2542: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 23:44:20.221] Successful
I0111 23:44:20.221] message:service/busybox0 exposed
I0111 23:44:20.221] service/busybox1 exposed
I0111 23:44:20.222] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:20.222] has:Object 'Kind' is missing
I0111 23:44:20.350] test-cmd-util.sh:2548: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:20.470] test-cmd-util.sh:2549: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 23:44:20.576] test-cmd-util.sh:2550: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 23:44:20.808] test-cmd-util.sh:2554: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 23:44:20.929] test-cmd-util.sh:2555: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 23:44:20.931] Successful
I0111 23:44:20.931] message:replicationcontroller/busybox0 scaled
I0111 23:44:20.932] replicationcontroller/busybox1 scaled
I0111 23:44:20.932] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:20.932] has:Object 'Kind' is missing
W0111 23:44:21.033] I0111 23:44:20.688433   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox0", UID:"cfd21e76-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1066", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qhlfl
W0111 23:44:21.033] I0111 23:44:20.696386   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox1", UID:"cfd51822-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-vlqsb
I0111 23:44:21.134] test-cmd-util.sh:2560: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:21.240] test-cmd-util.sh:2564: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:21.242] Successful
I0111 23:44:21.243] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 23:44:21.243] replicationcontroller "busybox0" force deleted
I0111 23:44:21.243] replicationcontroller "busybox1" force deleted
I0111 23:44:21.243] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:21.243] has:Object 'Kind' is missing
I0111 23:44:21.339] test-cmd-util.sh:2569: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:21.554] deployment.extensions/nginx1-deployment created
I0111 23:44:21.568] deployment.extensions/nginx0-deployment created
W0111 23:44:21.669] I0111 23:44:21.557947   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250253-6270", Name:"nginx1-deployment", UID:"d1b07844-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-5dc485c78 to 2
W0111 23:44:21.669] I0111 23:44:21.560678   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx1-deployment-5dc485c78", UID:"d1b100b9-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-5dc485c78-j5t2r
W0111 23:44:21.670] I0111 23:44:21.563632   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx1-deployment-5dc485c78", UID:"d1b100b9-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-5dc485c78-ztcxh
W0111 23:44:21.670] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 23:44:21.670] I0111 23:44:21.571152   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250253-6270", Name:"nginx0-deployment", UID:"d1b24934-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-76db6cfd79 to 2
W0111 23:44:21.670] I0111 23:44:21.574375   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx0-deployment-76db6cfd79", UID:"d1b2bef0-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-76db6cfd79-btjw6
W0111 23:44:21.671] I0111 23:44:21.576975   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250253-6270", Name:"nginx0-deployment-76db6cfd79", UID:"d1b2bef0-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-76db6cfd79-f4qm9
I0111 23:44:21.771] test-cmd-util.sh:2573: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 23:44:21.778] test-cmd-util.sh:2574: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 23:44:22.129] test-cmd-util.sh:2578: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 23:44:22.131] Successful
I0111 23:44:22.131] message:deployment.extensions/nginx1-deployment
I0111 23:44:22.131] deployment.extensions/nginx0-deployment
I0111 23:44:22.132] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.132] has:Object 'Kind' is missing
W0111 23:44:22.233] I0111 23:44:21.890670   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250253-6270", Name:"nginx1-deployment", UID:"d1b07844-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1110", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackTemplateUnchanged' The rollback revision contains the same template as current deployment "nginx1-deployment"
W0111 23:44:22.233] I0111 23:44:21.930209   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250253-6270", Name:"nginx0-deployment", UID:"d1b24934-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1114", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackTemplateUnchanged' The rollback revision contains the same template as current deployment "nginx0-deployment"
I0111 23:44:22.334] deployment.extensions/nginx1-deployment paused
I0111 23:44:22.334] deployment.extensions/nginx0-deployment paused
I0111 23:44:22.344] test-cmd-util.sh:2585: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 23:44:22.346] Successful
I0111 23:44:22.347] message:error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.347] has:Object 'Kind' is missing
I0111 23:44:22.446] deployment.extensions/nginx1-deployment resumed
I0111 23:44:22.450] deployment.extensions/nginx0-deployment resumed
I0111 23:44:22.559] test-cmd-util.sh:2591: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0111 23:44:22.561] Successful
I0111 23:44:22.561] message:error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.561] has:Object 'Kind' is missing
I0111 23:44:22.684] Successful
I0111 23:44:22.684] message:deployments "nginx1-deployment"
I0111 23:44:22.684] REVISION  CHANGE-CAUSE
I0111 23:44:22.684] 1         <none>
I0111 23:44:22.684] 
I0111 23:44:22.684] deployments "nginx0-deployment"
I0111 23:44:22.685] REVISION  CHANGE-CAUSE
I0111 23:44:22.685] 1         <none>
I0111 23:44:22.685] 
I0111 23:44:22.685] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.685] has:nginx0-deployment
I0111 23:44:22.686] Successful
I0111 23:44:22.686] message:deployments "nginx1-deployment"
I0111 23:44:22.686] REVISION  CHANGE-CAUSE
I0111 23:44:22.686] 1         <none>
I0111 23:44:22.686] 
I0111 23:44:22.686] deployments "nginx0-deployment"
I0111 23:44:22.686] REVISION  CHANGE-CAUSE
I0111 23:44:22.687] 1         <none>
I0111 23:44:22.687] 
I0111 23:44:22.687] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.687] has:nginx1-deployment
I0111 23:44:22.689] Successful
I0111 23:44:22.689] message:deployments "nginx1-deployment"
I0111 23:44:22.689] REVISION  CHANGE-CAUSE
I0111 23:44:22.689] 1         <none>
I0111 23:44:22.689] 
I0111 23:44:22.690] deployments "nginx0-deployment"
I0111 23:44:22.690] REVISION  CHANGE-CAUSE
I0111 23:44:22.690] 1         <none>
I0111 23:44:22.690] 
I0111 23:44:22.690] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:22.690] has:Object 'Kind' is missing
I0111 23:44:22.783] deployment.extensions "nginx1-deployment" force deleted
I0111 23:44:22.788] deployment.extensions "nginx0-deployment" force deleted
W0111 23:44:22.888] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 23:44:22.889] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 23:44:23.917] test-cmd-util.sh:2607: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:24.077] replicationcontroller/busybox0 created
I0111 23:44:24.081] replicationcontroller/busybox1 created
W0111 23:44:24.182] I0111 23:44:24.080255   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox0", UID:"d3315b5f-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1143", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-mptlv
W0111 23:44:24.182] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 23:44:24.183] I0111 23:44:24.083034   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250253-6270", Name:"busybox1", UID:"d332067d-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-zkrdq
I0111 23:44:24.283] test-cmd-util.sh:2611: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 23:44:24.303] Successful
I0111 23:44:24.304] message:no rollbacker has been implemented for {"" "ReplicationController"}
I0111 23:44:24.304] no rollbacker has been implemented for {"" "ReplicationController"}
I0111 23:44:24.304] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0111 23:44:24.306] message:no rollbacker has been implemented for {"" "ReplicationController"}
I0111 23:44:24.306] no rollbacker has been implemented for {"" "ReplicationController"}
I0111 23:44:24.307] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.307] has:Object 'Kind' is missing
I0111 23:44:24.423] Successful
I0111 23:44:24.424] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.424] error: replicationcontrollers "busybox0" pausing is not supported
I0111 23:44:24.424] error: replicationcontrollers "busybox1" pausing is not supported
I0111 23:44:24.424] has:Object 'Kind' is missing
I0111 23:44:24.426] Successful
I0111 23:44:24.426] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.426] error: replicationcontrollers "busybox0" pausing is not supported
I0111 23:44:24.426] error: replicationcontrollers "busybox1" pausing is not supported
I0111 23:44:24.427] has:replicationcontrollers "busybox0" pausing is not supported
I0111 23:44:24.428] Successful
I0111 23:44:24.428] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.428] error: replicationcontrollers "busybox0" pausing is not supported
I0111 23:44:24.429] error: replicationcontrollers "busybox1" pausing is not supported
I0111 23:44:24.429] has:replicationcontrollers "busybox1" pausing is not supported
I0111 23:44:24.538] Successful
I0111 23:44:24.539] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.539] error: replicationcontrollers "busybox0" resuming is not supported
I0111 23:44:24.540] error: replicationcontrollers "busybox1" resuming is not supported
I0111 23:44:24.540] has:Object 'Kind' is missing
I0111 23:44:24.541] Successful
I0111 23:44:24.541] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.541] error: replicationcontrollers "busybox0" resuming is not supported
I0111 23:44:24.541] error: replicationcontrollers "busybox1" resuming is not supported
I0111 23:44:24.542] has:replicationcontrollers "busybox0" resuming is not supported
I0111 23:44:24.543] Successful
I0111 23:44:24.544] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:24.544] error: replicationcontrollers "busybox0" resuming is not supported
I0111 23:44:24.544] error: replicationcontrollers "busybox1" resuming is not supported
I0111 23:44:24.544] has:replicationcontrollers "busybox0" resuming is not supported
I0111 23:44:24.629] replicationcontroller "busybox0" force deleted
I0111 23:44:24.634] replicationcontroller "busybox1" force deleted
W0111 23:44:24.734] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 23:44:24.735] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 23:44:25.653] +++ exit code: 0
I0111 23:44:25.713] Recording: run_namespace_tests
I0111 23:44:25.713] Running command: run_namespace_tests
I0111 23:44:25.731] 
I0111 23:44:25.733] +++ Running case: test-cmd.run_namespace_tests 
I0111 23:44:25.735] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 6 lines ...
W0111 23:44:29.797] I0111 23:44:29.796583   72790 controller_utils.go:1032] Caches are synced for resource quota controller
W0111 23:44:29.949] I0111 23:44:29.948898   72790 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0111 23:44:30.049] I0111 23:44:30.049204   72790 controller_utils.go:1032] Caches are synced for garbage collector controller
I0111 23:44:31.127] namespace/my-namespace condition met
I0111 23:44:31.220] Successful
I0111 23:44:31.220] message:No resources found.
I0111 23:44:31.220] Error from server (NotFound): namespaces "my-namespace" not found
I0111 23:44:31.220] has: not found
I0111 23:44:31.335] test-cmd-util.sh:2665: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 23:44:31.412] namespace/other created
I0111 23:44:31.508] test-cmd-util.sh:2669: Successful get namespaces/other {{.metadata.name}}: other
I0111 23:44:31.599] test-cmd-util.sh:2673: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:31.756] pod/valid-pod created
I0111 23:44:31.861] test-cmd-util.sh:2677: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:44:31.961] test-cmd-util.sh:2679: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:44:32.050] Successful
I0111 23:44:32.051] message:error: a resource cannot be retrieved by name across all namespaces
I0111 23:44:32.051] has:a resource cannot be retrieved by name across all namespaces
I0111 23:44:32.153] test-cmd-util.sh:2686: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 23:44:32.245] pod "valid-pod" force deleted
W0111 23:44:32.346] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 23:44:32.447] test-cmd-util.sh:2690: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:44:32.447] namespace "other" deleted
... skipping 112 lines ...
I0111 23:44:53.687] +++ command: run_client_config_tests
I0111 23:44:53.700] +++ [0111 23:44:53] Creating namespace namespace-1547250293-13796
I0111 23:44:53.785] namespace/namespace-1547250293-13796 created
I0111 23:44:53.866] Context "test" modified.
I0111 23:44:53.872] +++ [0111 23:44:53] Testing client config
I0111 23:44:53.952] Successful
I0111 23:44:53.952] message:error: stat missing: no such file or directory
I0111 23:44:53.952] has:missing: no such file or directory
I0111 23:44:54.033] Successful
I0111 23:44:54.033] message:error: stat missing: no such file or directory
I0111 23:44:54.034] has:missing: no such file or directory
I0111 23:44:54.115] Successful
I0111 23:44:54.115] message:error: stat missing: no such file or directory
I0111 23:44:54.115] has:missing: no such file or directory
I0111 23:44:54.198] Successful
I0111 23:44:54.198] message:Error in configuration: context was not found for specified context: missing-context
I0111 23:44:54.198] has:context was not found for specified context: missing-context
I0111 23:44:54.281] Successful
I0111 23:44:54.281] message:error: no server found for cluster "missing-cluster"
I0111 23:44:54.281] has:no server found for cluster "missing-cluster"
I0111 23:44:54.365] Successful
I0111 23:44:54.365] message:auth info "missing-user" does not exist
I0111 23:44:54.365] auth info "missing-user" does not exist
I0111 23:44:54.365] has:auth info "missing-user" does not exist
I0111 23:44:54.528] Successful
I0111 23:44:54.528] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1"
I0111 23:44:54.528] has:Error loading config file
I0111 23:44:54.612] Successful
I0111 23:44:54.612] message:error: stat missing-config: no such file or directory
I0111 23:44:54.612] has:no such file or directory
I0111 23:44:54.625] +++ exit code: 0
I0111 23:44:54.669] Recording: run_service_accounts_tests
I0111 23:44:54.669] Running command: run_service_accounts_tests
I0111 23:44:54.691] 
I0111 23:44:54.694] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I0111 23:45:02.321]                 job-name=test-job
I0111 23:45:02.322]                 run=pi
I0111 23:45:02.322] Annotations:    cronjob.kubernetes.io/instantiate=manual
I0111 23:45:02.322] Parallelism:    1
I0111 23:45:02.322] Completions:    1
I0111 23:45:02.322] Start Time:     Fri, 11 Jan 2019 23:45:02 +0000
I0111 23:45:02.322] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 23:45:02.322] Pod Template:
I0111 23:45:02.322]   Labels:  controller-uid=e9d02450-15fa-11e9-b157-0242ac110002
I0111 23:45:02.322]            job-name=test-job
I0111 23:45:02.322]            run=pi
I0111 23:45:02.322]   Containers:
I0111 23:45:02.322]    pi:
... skipping 304 lines ...
I0111 23:45:11.418]   selector:
I0111 23:45:11.418]     role: padawan
I0111 23:45:11.419]   sessionAffinity: None
I0111 23:45:11.419]   type: ClusterIP
I0111 23:45:11.419] status:
I0111 23:45:11.419]   loadBalancer: {}
W0111 23:45:11.519] error: you must specify resources by --filename when --local is set.
W0111 23:45:11.520] Example resource specifications include:
W0111 23:45:11.520]    '-f rsrc.yaml'
W0111 23:45:11.520]    '--filename=rsrc.json'
I0111 23:45:11.621] test-cmd-util.sh:2890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 23:45:11.831] test-cmd-util.sh:2897: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 23:45:11.933] service "redis-master" deleted
... skipping 40 lines ...
I0111 23:45:15.261] +++ [0111 23:45:15] Creating namespace namespace-1547250315-2464
I0111 23:45:15.345] namespace/namespace-1547250315-2464 created
I0111 23:45:15.427] Context "test" modified.
I0111 23:45:15.433] +++ [0111 23:45:15] Testing kubectl(v1:daemonsets)
I0111 23:45:15.539] test-cmd-util.sh:3650: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:45:15.714] daemonset.extensions/bind created
W0111 23:45:15.815] I0111 23:45:15.718434   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:15.815] I0111 23:45:15.718514   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:15.816] I0111 23:45:15.718537   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:15.816] I0111 23:45:15.722588   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1310", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:15.817] I0111 23:45:15.722628   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1310", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:15.817] I0111 23:45:15.722637   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1310", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
I0111 23:45:15.917] test-cmd-util.sh:3654: Successful get daemonsets bind {{.spec.templateGeneration}}: 1
I0111 23:45:16.008] daemonset.extensions/bind configured
I0111 23:45:16.118] test-cmd-util.sh:3657: Successful get daemonsets bind {{.spec.templateGeneration}}: 1
I0111 23:45:16.221] daemonset.extensions/bind image updated
W0111 23:45:16.322] I0111 23:45:16.225524   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1317", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:16.323] I0111 23:45:16.225577   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1317", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:16.323] I0111 23:45:16.225635   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1317", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:16.324] I0111 23:45:16.229114   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1319", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:16.324] I0111 23:45:16.229153   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1319", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0111 23:45:16.325] I0111 23:45:16.229162   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1319", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
I0111 23:45:16.425] test-cmd-util.sh:3660: Successful get daemonsets bind {{.spec.templateGeneration}}: 2
I0111 23:45:16.442] daemonset.extensions/bind env updated
W0111 23:45:16.543] I0111 23:45:16.446008   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
W0111 23:45:16.659] I0111 23:45:16.659327   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250315-2464", Name:"bind", UID:"f1f883a6-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1335", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
I0111 23:45:16.766] test-cmd-util.sh:3662: Successful get daemonsets bind {{.spec.templateGeneration}}: 3
I0111 23:45:16.766] (Bdaemonset.extensions/bind resource requirements updated
I0111 23:45:16.770] test-cmd-util.sh:3664: Successful get daemonsets bind {{.spec.templateGeneration}}: 4
I0111 23:45:17.873] (Bdaemonset.extensions "bind" deleted
I0111 23:45:17.894] +++ exit code: 0
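Each mutation in the daemonset block above edits the pod template, which is why .spec.templateGeneration climbs from 1 to 4. A minimal sketch of the command sequence the assertions imply (reconstructed from the log, not verbatim from test-cmd-util.sh; the image, env, and limit values are illustrative):

  kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml           # templateGeneration 1
  kubectl set image daemonsets/bind '*=k8s.gcr.io/pause:latest'         # bumps to 2
  kubectl set env daemonsets/bind foo=bar                               # bumps to 3
  kubectl set resources daemonsets/bind --limits=cpu=200m,memory=512Mi  # bumps to 4
  kubectl get daemonsets bind -o go-template='{{.spec.templateGeneration}}'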
I0111 23:45:17.938] Recording: run_daemonset_history_tests
... skipping 5 lines ...
I0111 23:45:17.983] +++ [0111 23:45:17] Creating namespace namespace-1547250317-10094
I0111 23:45:18.066] namespace/namespace-1547250317-10094 created
I0111 23:45:18.148] Context "test" modified.
I0111 23:45:18.155] +++ [0111 23:45:18] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0111 23:45:18.256] test-cmd-util.sh:3682: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 23:45:18.429] (Bdaemonset.extensions/bind created
W0111 23:45:18.530] I0111 23:45:18.433163   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250317-10094", Name:"bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
I0111 23:45:18.633] test-cmd-util.sh:3686: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"name":"bind","namespace":"namespace-1547250317-10094"},"spec":{"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0111 23:45:18.634]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0111 23:45:18.665] (Bdaemonset.extensions/bind skipped rollback (current template already matches revision 1)
I0111 23:45:18.772] test-cmd-util.sh:3689: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 23:45:18.877] (Btest-cmd-util.sh:3690: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 23:45:19.071] (Bdaemonset.extensions/bind configured
W0111 23:45:19.172] I0111 23:45:19.079006   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250317-10094", Name:"bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1367", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
I0111 23:45:19.274] test-cmd-util.sh:3693: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 23:45:19.319] (Btest-cmd-util.sh:3694: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:19.438] (Btest-cmd-util.sh:3695: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 23:45:19.571] (Btest-cmd-util.sh:3696: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"name":"bind","namespace":"namespace-1547250317-10094"},"spec":{"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0111 23:45:19.572]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"name":"bind","namespace":"namespace-1547250317-10094"},"spec":{"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0111 23:45:19.572]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 9 lines ...
I0111 23:45:19.690]   Volumes:	<none>
I0111 23:45:19.690]  (dry run)
I0111 23:45:19.809] test-cmd-util.sh:3699: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 23:45:19.928] (Btest-cmd-util.sh:3700: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:20.054] (Btest-cmd-util.sh:3701: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 23:45:20.194] (Bdaemonset.extensions/bind rolled back
W0111 23:45:20.295] I0111 23:45:20.194194   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250317-10094", Name:"bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1376", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
W0111 23:45:20.300] E0111 23:45:20.202526   72790 daemon_controller.go:285] namespace-1547250317-10094/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547250317-10094", SelfLink:"/apis/apps/v1/namespaces/namespace-1547250317-10094/daemonsets/bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", ResourceVersion:"1376", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682847118, loc:(*time.Location)(0x56eb260)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1547250317-10094\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc423a5e9e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc422257db8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc42472e660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc423a5ea20), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc421178468)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc422257e50)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0111 23:45:20.301] I0111 23:45:20.203389   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250317-10094", Name:"bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1378", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 5 duplicate FailedPlacement events ...
I0111 23:45:20.403] test-cmd-util.sh:3704: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 23:45:20.437] (Btest-cmd-util.sh:3705: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 23:45:20.555] (BSuccessful
I0111 23:45:20.555] message:error: unable to find specified revision 1000000 in history
I0111 23:45:20.555] has:unable to find specified revision
I0111 23:45:20.672] test-cmd-util.sh:3709: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 23:45:20.778] (Btest-cmd-util.sh:3710: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 23:45:20.908] (Bdaemonset.extensions/bind rolled back
W0111 23:45:21.008] I0111 23:45:20.903627   72790 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1547250317-10094", Name:"bind", UID:"f396d31a-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1392", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
... skipping 8 duplicate FailedPlacement events ...
I0111 23:45:21.112] test-cmd-util.sh:3713: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 23:45:21.129] (Btest-cmd-util.sh:3714: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:21.244] (Btest-cmd-util.sh:3715: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 23:45:22.350] (Bdaemonset.extensions "bind" deleted
I0111 23:45:22.369] +++ exit code: 0
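The history block above verifies that daemonset rollback is reconstructed from controllerrevisions. A hedged sketch of the flow the assertions trace (revision numbers and files from the log; harness-only flags such as --server are omitted):

  kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml --record      # revision 1: pause:2.0
  kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record  # revision 2: pause:latest + nginx:test-cmd
  kubectl rollout undo daemonset bind --dry-run                             # prints the target template only
  kubectl rollout undo daemonset bind                                       # back to the single pause:2.0 container
  kubectl rollout undo daemonset bind --to-revision=1000000                 # error: unable to find specified revision
  kubectl rollout undo daemonset bind --to-revision=2                       # forward again to revision 2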
I0111 23:45:22.422] Recording: run_rc_tests
... skipping 24 lines ...
I0111 23:45:23.612] Namespace:    namespace-1547250322-17807
I0111 23:45:23.612] Selector:     app=guestbook,tier=frontend
I0111 23:45:23.612] Labels:       app=guestbook
I0111 23:45:23.612]               tier=frontend
I0111 23:45:23.612] Annotations:  <none>
I0111 23:45:23.612] Replicas:     3 current / 3 desired
I0111 23:45:23.613] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:23.613] Pod Template:
I0111 23:45:23.613]   Labels:  app=guestbook
I0111 23:45:23.613]            tier=frontend
I0111 23:45:23.613]   Containers:
I0111 23:45:23.613]    php-redis:
I0111 23:45:23.613]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 23:45:23.736] Namespace:    namespace-1547250322-17807
I0111 23:45:23.736] Selector:     app=guestbook,tier=frontend
I0111 23:45:23.737] Labels:       app=guestbook
I0111 23:45:23.737]               tier=frontend
I0111 23:45:23.737] Annotations:  <none>
I0111 23:45:23.737] Replicas:     3 current / 3 desired
I0111 23:45:23.737] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:23.737] Pod Template:
I0111 23:45:23.737]   Labels:  app=guestbook
I0111 23:45:23.737]            tier=frontend
I0111 23:45:23.737]   Containers:
I0111 23:45:23.737]    php-redis:
I0111 23:45:23.737]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 23:45:23.869] Namespace:    namespace-1547250322-17807
I0111 23:45:23.869] Selector:     app=guestbook,tier=frontend
I0111 23:45:23.869] Labels:       app=guestbook
I0111 23:45:23.869]               tier=frontend
I0111 23:45:23.869] Annotations:  <none>
I0111 23:45:23.869] Replicas:     3 current / 3 desired
I0111 23:45:23.870] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:23.870] Pod Template:
I0111 23:45:23.870]   Labels:  app=guestbook
I0111 23:45:23.870]            tier=frontend
I0111 23:45:23.870]   Containers:
I0111 23:45:23.870]    php-redis:
I0111 23:45:23.870]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0111 23:45:23.989] Namespace:    namespace-1547250322-17807
I0111 23:45:23.989] Selector:     app=guestbook,tier=frontend
I0111 23:45:23.989] Labels:       app=guestbook
I0111 23:45:23.989]               tier=frontend
I0111 23:45:23.989] Annotations:  <none>
I0111 23:45:23.989] Replicas:     3 current / 3 desired
I0111 23:45:23.990] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:23.990] Pod Template:
I0111 23:45:23.990]   Labels:  app=guestbook
I0111 23:45:23.990]            tier=frontend
I0111 23:45:23.990]   Containers:
I0111 23:45:23.990]    php-redis:
I0111 23:45:23.990]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 23:45:24.149] Namespace:    namespace-1547250322-17807
I0111 23:45:24.149] Selector:     app=guestbook,tier=frontend
I0111 23:45:24.149] Labels:       app=guestbook
I0111 23:45:24.149]               tier=frontend
I0111 23:45:24.149] Annotations:  <none>
I0111 23:45:24.149] Replicas:     3 current / 3 desired
I0111 23:45:24.150] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:24.150] Pod Template:
I0111 23:45:24.150]   Labels:  app=guestbook
I0111 23:45:24.150]            tier=frontend
I0111 23:45:24.150]   Containers:
I0111 23:45:24.150]    php-redis:
I0111 23:45:24.150]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 23:45:24.263] Namespace:    namespace-1547250322-17807
I0111 23:45:24.264] Selector:     app=guestbook,tier=frontend
I0111 23:45:24.264] Labels:       app=guestbook
I0111 23:45:24.264]               tier=frontend
I0111 23:45:24.264] Annotations:  <none>
I0111 23:45:24.264] Replicas:     3 current / 3 desired
I0111 23:45:24.264] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:24.264] Pod Template:
I0111 23:45:24.264]   Labels:  app=guestbook
I0111 23:45:24.264]            tier=frontend
I0111 23:45:24.265]   Containers:
I0111 23:45:24.265]    php-redis:
I0111 23:45:24.265]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 23:45:24.373] Namespace:    namespace-1547250322-17807
I0111 23:45:24.373] Selector:     app=guestbook,tier=frontend
I0111 23:45:24.373] Labels:       app=guestbook
I0111 23:45:24.373]               tier=frontend
I0111 23:45:24.374] Annotations:  <none>
I0111 23:45:24.374] Replicas:     3 current / 3 desired
I0111 23:45:24.374] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:24.374] Pod Template:
I0111 23:45:24.374]   Labels:  app=guestbook
I0111 23:45:24.374]            tier=frontend
I0111 23:45:24.374]   Containers:
I0111 23:45:24.374]    php-redis:
I0111 23:45:24.374]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 23:45:24.481] Namespace:    namespace-1547250322-17807
I0111 23:45:24.481] Selector:     app=guestbook,tier=frontend
I0111 23:45:24.481] Labels:       app=guestbook
I0111 23:45:24.482]               tier=frontend
I0111 23:45:24.482] Annotations:  <none>
I0111 23:45:24.482] Replicas:     3 current / 3 desired
I0111 23:45:24.482] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:24.482] Pod Template:
I0111 23:45:24.482]   Labels:  app=guestbook
I0111 23:45:24.482]            tier=frontend
I0111 23:45:24.482]   Containers:
I0111 23:45:24.483]    php-redis:
I0111 23:45:24.483]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 23:45:25.363] test-cmd-util.sh:3065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 23:45:25.456] (Btest-cmd-util.sh:3069: Successful get rc frontend {{.spec.replicas}}: 3
I0111 23:45:25.554] (Breplicationcontroller/frontend scaled
I0111 23:45:25.657] test-cmd-util.sh:3073: Successful get rc frontend {{.spec.replicas}}: 2
I0111 23:45:25.743] (Breplicationcontroller "frontend" deleted
W0111 23:45:25.844] I0111 23:45:24.677018   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250322-17807", Name:"frontend", UID:"f68483c0-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-29rcs
W0111 23:45:25.844] error: Expected replicas to be 3, was 2
W0111 23:45:25.844] I0111 23:45:25.267414   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250322-17807", Name:"frontend", UID:"f68483c0-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h9pbk
W0111 23:45:25.845] I0111 23:45:25.558552   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250322-17807", Name:"frontend", UID:"f68483c0-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1443", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-h9pbk
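The scale failure above ("Expected replicas to be 3, was 2") is kubectl's client-side precondition: when --current-replicas is given, the resize is rejected unless it matches the live spec. A sketch against the frontend RC from this block (flag values inferred from the assertions, not copied from the harness):

  kubectl scale rc frontend --replicas=2 --current-replicas=3   # accepted while spec.replicas is 3
  kubectl scale rc frontend --replicas=3 --current-replicas=3   # rejected once replicas is already 2
  kubectl scale rc frontend --replicas=3                        # unconditional resize succeeds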
W0111 23:45:25.932] I0111 23:45:25.932096   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250322-17807", Name:"redis-master", UID:"f80f4e18-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-fmxdz
I0111 23:45:26.033] replicationcontroller/redis-master created
I0111 23:45:26.090] replicationcontroller/redis-slave created
W0111 23:45:26.191] I0111 23:45:26.093176   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250322-17807", Name:"redis-slave", UID:"f827e20c-15fa-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"1460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-vqvdb
... skipping 56 lines ...
I0111 23:45:30.287] service "frontend" deleted
I0111 23:45:30.293] service "frontend-2" deleted
I0111 23:45:30.298] service "frontend-3" deleted
I0111 23:45:30.303] service "frontend-4" deleted
I0111 23:45:30.308] service "frontend-5" deleted
I0111 23:45:30.416] Successful
I0111 23:45:30.417] message:error: cannot expose a { Node}
I0111 23:45:30.417] has:cannot expose
I0111 23:45:30.543] Successful
I0111 23:45:30.543] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 23:45:30.543] has:metadata.name: Invalid value
I0111 23:45:30.648] Successful
I0111 23:45:30.648] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
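Service names must be valid DNS labels of at most 63 characters, which is why the long name above fails validation while the 63-character one is accepted. A sketch of the pair of expose calls (the exposed target and port are assumptions):

  kubectl expose rc frontend --port=80 \
    --name=invalid-large-service-name-that-has-more-than-sixty-three-characters  # rejected: metadata.name too long
  kubectl expose rc frontend --port=80 \
    --name=kubernetes-serve-hostname-testing-sixty-three-characters-in-len       # exactly 63 characters, accepted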
... skipping 30 lines ...
I0111 23:45:32.848] (Bhorizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 23:45:32.954] test-cmd-util.sh:3209: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0111 23:45:33.044] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
I0111 23:45:33.149] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 23:45:33.255] test-cmd-util.sh:3213: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 23:45:33.345] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
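The two hpa assertions pin minReplicas/maxReplicas/targetCPUUtilizationPercentage to 1/2/70 and then 2/3/80, and the usage dump that follows is kubectl refusing to autoscale without --max. A sketch of the likely invocations (the target rc is inferred from this block):

  kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
  kubectl delete hpa frontend
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
  kubectl delete hpa frontend
  kubectl autoscale rc frontend --cpu-percent=80    # Error: required flag(s) "max" not set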
W0111 23:45:33.446] Error: required flag(s) "max" not set
W0111 23:45:33.446] 
W0111 23:45:33.446] 
W0111 23:45:33.446] Examples:
W0111 23:45:33.447]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 23:45:33.447]   kubectl autoscale deployment foo --min=2 --max=10
W0111 23:45:33.447]   
... skipping 69 lines ...
I0111 23:45:33.754]       dnsPolicy: ClusterFirst
I0111 23:45:33.754]       restartPolicy: Always
I0111 23:45:33.754]       schedulerName: default-scheduler
I0111 23:45:33.754]       securityContext: {}
I0111 23:45:33.754]       terminationGracePeriodSeconds: 0
I0111 23:45:33.754] status: {}
W0111 23:45:33.855] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I0111 23:45:34.017] deployment.extensions/nginx-deployment-resources created
W0111 23:45:34.118] I0111 23:45:34.020337   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1682", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-57c6b5597b to 3
W0111 23:45:34.118] I0111 23:45:34.025837   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-l58hf
W0111 23:45:34.119] I0111 23:45:34.029683   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-xtvxc
W0111 23:45:34.119] I0111 23:45:34.030192   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-2jzd2
I0111 23:45:34.219] test-cmd-util.sh:3228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0111 23:45:34.252] (Btest-cmd-util.sh:3229: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:34.363] (Btest-cmd-util.sh:3230: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 23:45:34.470] (Bdeployment.extensions/nginx-deployment-resources resource requirements updated
W0111 23:45:34.571] I0111 23:45:34.474018   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79bfbb6584 to 1
W0111 23:45:34.571] I0111 23:45:34.477306   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-79bfbb6584", UID:"fd27086d-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79bfbb6584-gwp4t
W0111 23:45:34.571] I0111 23:45:34.480790   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-57c6b5597b to 2
W0111 23:45:34.572] I0111 23:45:34.486537   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-l58hf
W0111 23:45:34.572] E0111 23:45:34.488062   72790 replica_set.go:450] Sync "namespace-1547250322-17807/nginx-deployment-resources-79bfbb6584" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-79bfbb6584": the object has been modified; please apply your changes to the latest version and try again
W0111 23:45:34.572] I0111 23:45:34.488527   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79bfbb6584 to 2
W0111 23:45:34.573] I0111 23:45:34.492931   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-79bfbb6584", UID:"fd27086d-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79bfbb6584-sjj46
I0111 23:45:34.673] test-cmd-util.sh:3233: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0111 23:45:34.698] (Btest-cmd-util.sh:3234: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0111 23:45:34.896] (Bdeployment.extensions/nginx-deployment-resources resource requirements updated
W0111 23:45:34.997] error: unable to find container named redis
W0111 23:45:34.998] I0111 23:45:34.906828   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-57c6b5597b to 0
W0111 23:45:34.998] I0111 23:45:34.911430   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-xtvxc
W0111 23:45:34.998] I0111 23:45:34.912742   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources", UID:"fce14189-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-775fc4497d to 2
W0111 23:45:34.999] I0111 23:45:34.914474   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-57c6b5597b", UID:"fce1e35b-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-2jzd2
W0111 23:45:34.999] I0111 23:45:34.916573   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-775fc4497d", UID:"fd68173c-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1730", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-775fc4497d-p2m5g
W0111 23:45:34.999] I0111 23:45:34.919781   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250322-17807", Name:"nginx-deployment-resources-775fc4497d", UID:"fd68173c-15fa-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1730", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-775fc4497d-nm6w4
... skipping 82 lines ...
I0111 23:45:35.659]     reason: ReplicaSetUpdated
I0111 23:45:35.659]     status: "True"
I0111 23:45:35.659]     type: Progressing
I0111 23:45:35.659]   observedGeneration: 4
I0111 23:45:35.659]   replicas: 2
I0111 23:45:35.659]   unavailableReplicas: 4
W0111 23:45:35.760] error: you must specify resources by --filename when --local is set.
W0111 23:45:35.760] Example resource specifications include:
W0111 23:45:35.760]    '-f rsrc.yaml'
W0111 23:45:35.760]    '--filename=rsrc.json'
I0111 23:45:35.861] test-cmd-util.sh:3249: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 23:45:35.947] (Btest-cmd-util.sh:3250: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 23:45:36.071] (Btest-cmd-util.sh:3251: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
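The "--filename when --local is set" error above is kubectl set resources running purely client-side: with --local it only rewrites objects passed via -f and prints the result, so omitting -f is fatal. A sketch matching the cpu assertions (the second container's name and the local file are assumptions):

  kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m   # all containers
  kubectl set resources deployment nginx-deployment-resources -c=perl \
    --limits=cpu=300m --requests=cpu=300m                                         # a single container
  kubectl set resources -f deploy.yaml --local --limits=cpu=200m -o yaml          # client-side form that needs -f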
... skipping 44 lines ...
I0111 23:45:37.750]                 pod-template-hash=1594316396
I0111 23:45:37.750] Annotations:    deployment.kubernetes.io/desired-replicas=1
I0111 23:45:37.750]                 deployment.kubernetes.io/max-replicas=2
I0111 23:45:37.750]                 deployment.kubernetes.io/revision=1
I0111 23:45:37.750] Controlled By:  Deployment/test-nginx-apps
I0111 23:45:37.750] Replicas:       1 current / 1 desired
I0111 23:45:37.750] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:37.750] Pod Template:
I0111 23:45:37.750]   Labels:  app=test-nginx-apps
I0111 23:45:37.750]            pod-template-hash=1594316396
I0111 23:45:37.751]   Containers:
I0111 23:45:37.751]    nginx:
I0111 23:45:37.751]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 96 lines ...
W0111 23:45:43.646] I0111 23:45:43.454234   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx", UID:"01180830-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1948", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackRevisionNotFound' Unable to find the revision to rollback to.
I0111 23:45:43.747] test-cmd-util.sh:3377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:43.847] (Bdeployment.extensions/nginx
W0111 23:45:43.948] I0111 23:45:43.771292   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx", UID:"01180830-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1952", FieldPath:""}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment "nginx" to revision 2
I0111 23:45:44.959] test-cmd-util.sh:3381: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 23:45:45.064] (Bdeployment.extensions/nginx paused
W0111 23:45:45.166] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 23:45:45.270] deployment.extensions/nginx resumed
W0111 23:45:45.384] I0111 23:45:45.384207   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx", UID:"01180830-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment "nginx" to revision 3
I0111 23:45:45.485] deployment.extensions/nginx
I0111 23:45:45.659]     deployment.kubernetes.io/revision-history: 1,3
W0111 23:45:45.760] error: desired revision (3) is different from the running revision (5)
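This stretch checks the rollback guard rails: undo is refused while the deployment is paused, and rollout status with an explicit --revision fails once that revision is no longer the running one. A sketch of the sequence, with revision numbers as reported above:

  kubectl rollout pause deployment nginx
  kubectl rollout undo deployment nginx                   # refused: resume it first
  kubectl rollout resume deployment nginx
  kubectl rollout undo deployment nginx --to-revision=3   # emits DeploymentRollback to revision 3
  kubectl rollout status deployment nginx --revision=3    # error: desired revision (3) differs from running (5)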
I0111 23:45:45.926] deployment.extensions/nginx2 created
W0111 23:45:46.027] I0111 23:45:45.935131   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx2", UID:"03fa8457-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1972", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-5d58d7d8d4 to 3
W0111 23:45:46.028] I0111 23:45:45.938148   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx2-5d58d7d8d4", UID:"03fbce09-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-6pgmg
W0111 23:45:46.028] I0111 23:45:45.940960   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx2-5d58d7d8d4", UID:"03fbce09-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-76thr
W0111 23:45:46.028] I0111 23:45:45.941548   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx2-5d58d7d8d4", UID:"03fbce09-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-2ft7q
I0111 23:45:46.129] deployment.extensions "nginx2" deleted
... skipping 10 lines ...
I0111 23:45:46.839] (Bdeployment.extensions/nginx-deployment image updated
W0111 23:45:46.939] I0111 23:45:46.842059   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment", UID:"0446f196-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-78d7b4bff9 to 1
W0111 23:45:46.940] I0111 23:45:46.845033   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment-78d7b4bff9", UID:"0486522e-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-78d7b4bff9-vg92b
W0111 23:45:46.940] I0111 23:45:46.847556   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment", UID:"0446f196-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-84765bf7f9 to 2
W0111 23:45:46.941] I0111 23:45:46.852129   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment-84765bf7f9", UID:"044790d2-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-84765bf7f9-v9crp
W0111 23:45:46.941] I0111 23:45:46.853579   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment", UID:"0446f196-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-78d7b4bff9 to 2
W0111 23:45:46.941] E0111 23:45:46.856497   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-78d7b4bff9" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-78d7b4bff9": the object has been modified; please apply your changes to the latest version and try again
W0111 23:45:46.942] I0111 23:45:46.858817   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment-78d7b4bff9", UID:"0486522e-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-78d7b4bff9-qmrb2
I0111 23:45:47.042] test-cmd-util.sh:3411: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 23:45:47.060] test-cmd-util.sh:3412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 23:45:47.271] deployment.extensions/nginx-deployment image updated
W0111 23:45:47.371] error: unable to find container named "redis"
I0111 23:45:47.472] test-cmd-util.sh:3417: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 23:45:47.481] test-cmd-util.sh:3418: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 23:45:47.582] deployment.extensions/nginx-deployment image updated
I0111 23:45:47.692] test-cmd-util.sh:3421: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 23:45:47.795] test-cmd-util.sh:3422: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0111 23:45:47.987] test-cmd-util.sh:3425: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
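The image assertions above read container images back through kubectl's go-template output, and the 'unable to find container named "redis"' error is kubectl set image rejecting a container name that is not in the pod spec. A minimal sketch of both patterns (the exact invocations live in test-cmd-util.sh; the flags shown here are illustrative):
  # read the first container's image of every deployment -- the template the assertions use
  kubectl get deployment -o go-template='{{range .items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'
  # update one container's image; naming a container absent from the spec fails as seen above
  kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9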
... skipping 51 lines ...
I0111 23:45:50.809] deployment.extensions/nginx-deployment env updated
I0111 23:45:50.809] deployment.extensions/nginx-deployment env updated
I0111 23:45:50.812] deployment.extensions/nginx-deployment env updated
W0111 23:45:50.913] I0111 23:45:50.828205   72790 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547250336-20800", Name:"nginx-deployment", UID:"05d9aff5-15fb-11e9-b157-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-844b494674 to 2
I0111 23:45:51.014] deployment.extensions/nginx-deployment env updated
I0111 23:45:51.019] deployment.extensions "nginx-deployment" deleted
W0111 23:45:51.120] E0111 23:45:51.028500   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-67c9c8994" failed with replicasets.apps "nginx-deployment-67c9c8994" not found
W0111 23:45:51.120] E0111 23:45:51.079908   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-cdbc49cff" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-cdbc49cff": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547250336-20800/nginx-deployment-cdbc49cff, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 065d0dd0-15fb-11e9-b157-0242ac110002, UID in object meta: 
W0111 23:45:51.129] E0111 23:45:51.129111   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-5fcdc7cb99" failed with replicasets.apps "nginx-deployment-5fcdc7cb99" not found
W0111 23:45:51.180] E0111 23:45:51.179713   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-844b494674" failed with replicasets.apps "nginx-deployment-844b494674" not found
W0111 23:45:51.280] E0111 23:45:51.279397   72790 replica_set.go:450] Sync "namespace-1547250336-20800/nginx-deployment-7b6cf544d6" failed with replicasets.apps "nginx-deployment-7b6cf544d6" not found
I0111 23:45:51.380] configmap "test-set-env-config" deleted
I0111 23:45:51.380] secret "test-set-env-secret" deleted
I0111 23:45:51.381] +++ exit code: 0
I0111 23:45:51.381] Recording: run_rs_tests
I0111 23:45:51.381] Running command: run_rs_tests
I0111 23:45:51.381] 
... skipping 37 lines ...
I0111 23:45:53.472] Namespace:    namespace-1547250351-15665
I0111 23:45:53.472] Selector:     app=guestbook,tier=frontend
I0111 23:45:53.472] Labels:       app=guestbook
I0111 23:45:53.473]               tier=frontend
I0111 23:45:53.473] Annotations:  <none>
I0111 23:45:53.473] Replicas:     3 current / 3 desired
I0111 23:45:53.473] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:53.473] Pod Template:
I0111 23:45:53.473]   Labels:  app=guestbook
I0111 23:45:53.473]            tier=frontend
I0111 23:45:53.473]   Containers:
I0111 23:45:53.473]    php-redis:
I0111 23:45:53.474]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 23:45:53.599] Namespace:    namespace-1547250351-15665
I0111 23:45:53.599] Selector:     app=guestbook,tier=frontend
I0111 23:45:53.599] Labels:       app=guestbook
I0111 23:45:53.599]               tier=frontend
I0111 23:45:53.599] Annotations:  <none>
I0111 23:45:53.599] Replicas:     3 current / 3 desired
I0111 23:45:53.599] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:53.599] Pod Template:
I0111 23:45:53.599]   Labels:  app=guestbook
I0111 23:45:53.600]            tier=frontend
I0111 23:45:53.600]   Containers:
I0111 23:45:53.600]    php-redis:
I0111 23:45:53.600]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 23:45:53.725] Namespace:    namespace-1547250351-15665
I0111 23:45:53.725] Selector:     app=guestbook,tier=frontend
I0111 23:45:53.725] Labels:       app=guestbook
I0111 23:45:53.725]               tier=frontend
I0111 23:45:53.725] Annotations:  <none>
I0111 23:45:53.725] Replicas:     3 current / 3 desired
I0111 23:45:53.726] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:53.726] Pod Template:
I0111 23:45:53.726]   Labels:  app=guestbook
I0111 23:45:53.726]            tier=frontend
I0111 23:45:53.726]   Containers:
I0111 23:45:53.726]    php-redis:
I0111 23:45:53.726]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 23:45:53.855] Namespace:    namespace-1547250351-15665
I0111 23:45:53.855] Selector:     app=guestbook,tier=frontend
I0111 23:45:53.855] Labels:       app=guestbook
I0111 23:45:53.855]               tier=frontend
I0111 23:45:53.855] Annotations:  <none>
I0111 23:45:53.855] Replicas:     3 current / 3 desired
I0111 23:45:53.855] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:53.855] Pod Template:
I0111 23:45:53.855]   Labels:  app=guestbook
I0111 23:45:53.855]            tier=frontend
I0111 23:45:53.856]   Containers:
I0111 23:45:53.856]    php-redis:
I0111 23:45:53.856]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 23:45:53.999] Namespace:    namespace-1547250351-15665
I0111 23:45:53.999] Selector:     app=guestbook,tier=frontend
I0111 23:45:53.999] Labels:       app=guestbook
I0111 23:45:53.999]               tier=frontend
I0111 23:45:53.999] Annotations:  <none>
I0111 23:45:53.999] Replicas:     3 current / 3 desired
I0111 23:45:54.000] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:54.000] Pod Template:
I0111 23:45:54.000]   Labels:  app=guestbook
I0111 23:45:54.000]            tier=frontend
I0111 23:45:54.000]   Containers:
I0111 23:45:54.000]    php-redis:
I0111 23:45:54.000]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 23:45:54.116] Namespace:    namespace-1547250351-15665
I0111 23:45:54.116] Selector:     app=guestbook,tier=frontend
I0111 23:45:54.116] Labels:       app=guestbook
I0111 23:45:54.117]               tier=frontend
I0111 23:45:54.117] Annotations:  <none>
I0111 23:45:54.117] Replicas:     3 current / 3 desired
I0111 23:45:54.117] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:54.117] Pod Template:
I0111 23:45:54.117]   Labels:  app=guestbook
I0111 23:45:54.117]            tier=frontend
I0111 23:45:54.117]   Containers:
I0111 23:45:54.117]    php-redis:
I0111 23:45:54.117]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 23:45:54.231] Namespace:    namespace-1547250351-15665
I0111 23:45:54.231] Selector:     app=guestbook,tier=frontend
I0111 23:45:54.231] Labels:       app=guestbook
I0111 23:45:54.231]               tier=frontend
I0111 23:45:54.231] Annotations:  <none>
I0111 23:45:54.231] Replicas:     3 current / 3 desired
I0111 23:45:54.231] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:54.232] Pod Template:
I0111 23:45:54.232]   Labels:  app=guestbook
I0111 23:45:54.232]            tier=frontend
I0111 23:45:54.232]   Containers:
I0111 23:45:54.232]    php-redis:
I0111 23:45:54.232]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 23:45:54.350] Namespace:    namespace-1547250351-15665
I0111 23:45:54.350] Selector:     app=guestbook,tier=frontend
I0111 23:45:54.350] Labels:       app=guestbook
I0111 23:45:54.350]               tier=frontend
I0111 23:45:54.350] Annotations:  <none>
I0111 23:45:54.351] Replicas:     3 current / 3 desired
I0111 23:45:54.351] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 23:45:54.351] Pod Template:
I0111 23:45:54.351]   Labels:  app=guestbook
I0111 23:45:54.351]            tier=frontend
I0111 23:45:54.351]   Containers:
I0111 23:45:54.351]    php-redis:
I0111 23:45:54.351]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 183 lines ...
I0111 23:46:00.031] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 23:46:00.137] test-cmd-util.sh:3625: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0111 23:46:00.227] horizontalpodautoscaler.autoscaling "frontend" deleted
I0111 23:46:00.328] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 23:46:00.435] test-cmd-util.sh:3629: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 23:46:00.522] horizontalpodautoscaler.autoscaling "frontend" deleted
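The two hpa assertions above ('1 2 70', then '2 3 80') and the 'required flag(s) "max" not set' error that follows correspond to kubectl autoscale invocations of roughly this shape (a sketch; the resource kind is assumed from the surrounding run_rs_tests block, and the real commands are in test-cmd-util.sh):
  kubectl autoscale rs frontend --max=2 --cpu-percent=70          # min defaults to 1 -> '1 2 70'
  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80  # -> '2 3 80'
  kubectl autoscale rs frontend --cpu-percent=70                  # --max is required, reproducing the error below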
W0111 23:46:00.623] Error: required flag(s) "max" not set
W0111 23:46:00.623] 
W0111 23:46:00.623] 
W0111 23:46:00.624] Examples:
W0111 23:46:00.624]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 23:46:00.624]   kubectl autoscale deployment foo --min=2 --max=10
W0111 23:46:00.624]   
... skipping 89 lines ...
I0111 23:46:04.347] test-cmd-util.sh:3750: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 23:46:04.463] test-cmd-util.sh:3751: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 23:46:04.597] statefulset.apps/nginx rolled back
I0111 23:46:04.711] test-cmd-util.sh:3754: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 23:46:04.847] test-cmd-util.sh:3755: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 23:46:04.994] Successful
I0111 23:46:04.995] message:error: unable to find specified revision 1000000 in history
I0111 23:46:04.996] has:unable to find specified revision
I0111 23:46:05.124] test-cmd-util.sh:3759: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 23:46:05.268] test-cmd-util.sh:3760: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 23:46:05.446] statefulset.apps/nginx rolled back
I0111 23:46:05.596] test-cmd-util.sh:3763: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 23:46:05.728] test-cmd-util.sh:3764: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
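The rollback sequence above flips the statefulset between two recorded revisions and probes a nonexistent one; schematically, assuming the statefulset is named nginx as in the log (revision numbers here are illustrative):
  kubectl rollout undo statefulset/nginx                        # back to the previous revision (nginx-slim:0.7, 1 container)
  kubectl rollout undo statefulset/nginx --to-revision=1000000  # error: unable to find specified revision 1000000 in history
  kubectl rollout undo statefulset/nginx --to-revision=2        # forward again (nginx-slim:0.8 plus pause:2.0)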
... skipping 58 lines ...
I0111 23:46:08.458] Name:         mock
I0111 23:46:08.458] Namespace:    namespace-1547250366-29834
I0111 23:46:08.458] Selector:     app=mock
I0111 23:46:08.458] Labels:       app=mock
I0111 23:46:08.458] Annotations:  <none>
I0111 23:46:08.458] Replicas:     1 current / 1 desired
I0111 23:46:08.459] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:46:08.459] Pod Template:
I0111 23:46:08.459]   Labels:  app=mock
I0111 23:46:08.459]   Containers:
I0111 23:46:08.459]    mock-container:
I0111 23:46:08.459]     Image:        k8s.gcr.io/pause:2.0
I0111 23:46:08.459]     Port:         9949/TCP
... skipping 57 lines ...
I0111 23:46:11.763] Name:         mock
I0111 23:46:11.763] Namespace:    namespace-1547250366-29834
I0111 23:46:11.763] Selector:     app=mock
I0111 23:46:11.763] Labels:       app=mock
I0111 23:46:11.763] Annotations:  <none>
I0111 23:46:11.763] Replicas:     1 current / 1 desired
I0111 23:46:11.763] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:46:11.763] Pod Template:
I0111 23:46:11.763]   Labels:  app=mock
I0111 23:46:11.764]   Containers:
I0111 23:46:11.764]    mock-container:
I0111 23:46:11.764]     Image:        k8s.gcr.io/pause:2.0
I0111 23:46:11.764]     Port:         9949/TCP
... skipping 56 lines ...
I0111 23:46:14.301] Name:         mock
I0111 23:46:14.301] Namespace:    namespace-1547250366-29834
I0111 23:46:14.301] Selector:     app=mock
I0111 23:46:14.301] Labels:       app=mock
I0111 23:46:14.301] Annotations:  <none>
I0111 23:46:14.301] Replicas:     1 current / 1 desired
I0111 23:46:14.301] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:46:14.301] Pod Template:
I0111 23:46:14.302]   Labels:  app=mock
I0111 23:46:14.302]   Containers:
I0111 23:46:14.302]    mock-container:
I0111 23:46:14.302]     Image:        k8s.gcr.io/pause:2.0
I0111 23:46:14.302]     Port:         9949/TCP
... skipping 42 lines ...
I0111 23:46:16.601] Namespace:    namespace-1547250366-29834
I0111 23:46:16.601] Selector:     app=mock
I0111 23:46:16.601] Labels:       app=mock
I0111 23:46:16.601]               status=replaced
I0111 23:46:16.601] Annotations:  <none>
I0111 23:46:16.601] Replicas:     1 current / 1 desired
I0111 23:46:16.601] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:46:16.601] Pod Template:
I0111 23:46:16.601]   Labels:  app=mock
I0111 23:46:16.602]   Containers:
I0111 23:46:16.602]    mock-container:
I0111 23:46:16.602]     Image:        k8s.gcr.io/pause:2.0
I0111 23:46:16.602]     Port:         9949/TCP
... skipping 11 lines ...
I0111 23:46:16.607] Namespace:    namespace-1547250366-29834
I0111 23:46:16.607] Selector:     app=mock2
I0111 23:46:16.607] Labels:       app=mock2
I0111 23:46:16.607]               status=replaced
I0111 23:46:16.608] Annotations:  <none>
I0111 23:46:16.608] Replicas:     1 current / 1 desired
I0111 23:46:16.608] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 23:46:16.608] Pod Template:
I0111 23:46:16.608]   Labels:  app=mock2
I0111 23:46:16.608]   Containers:
I0111 23:46:16.608]    mock-container:
I0111 23:46:16.608]     Image:        k8s.gcr.io/pause:2.0
I0111 23:46:16.608]     Port:         9949/TCP
... skipping 581 lines ...
I0111 23:46:33.154] yes
I0111 23:46:33.155] has:the server doesn't have a resource type
I0111 23:46:33.241] Successful
I0111 23:46:33.242] message:yes
I0111 23:46:33.242] has:yes
I0111 23:46:33.325] Successful
I0111 23:46:33.326] message:error: --subresource can not be used with NonResourceURL
I0111 23:46:33.326] has:subresource can not be used with NonResourceURL
I0111 23:46:33.418] Successful
I0111 23:46:33.512] Successful
I0111 23:46:33.512] message:yes
I0111 23:46:33.512] 0
I0111 23:46:33.512] has:0
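These checks exercise kubectl auth can-i, including the NonResourceURL/--subresource conflict reported above. Roughly (a sketch; the resources and URLs here are illustrative, not the test's exact arguments):
  kubectl auth can-i get pods                     # prints yes or no; the exit code mirrors the answer
  kubectl auth can-i get /logs                    # non-resource URL form
  kubectl auth can-i get /logs --subresource=log  # error: --subresource can not be used with NonResourceURL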
... skipping 48 lines ...
I0111 23:46:35.579] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0111 23:46:35.581] +++ working dir: /go/src/k8s.io/kubernetes
I0111 23:46:35.583] +++ command: run_kubectl_explain_tests
I0111 23:46:35.594] +++ [0111 23:46:35] Testing kubectl(v1:explain)
W0111 23:46:35.694] I0111 23:46:35.446800   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250394-9709", Name:"cassandra", UID:"213b3640-15fb-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"2801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-shl65
W0111 23:46:35.695] I0111 23:46:35.452399   72790 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547250394-9709", Name:"cassandra", UID:"213b3640-15fb-11e9-b157-0242ac110002", APIVersion:"v1", ResourceVersion:"2801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-wnxph
W0111 23:46:35.695] E0111 23:46:35.457918   72790 replica_set.go:450] Sync "namespace-1547250394-9709/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1547250394-9709/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 213b3640-15fb-11e9-b157-0242ac110002, UID in object meta: 
I0111 23:46:35.796] KIND:     Pod
I0111 23:46:35.796] VERSION:  v1
I0111 23:46:35.796] 
I0111 23:46:35.796] DESCRIPTION:
I0111 23:46:35.796]      Pod is a collection of containers that can run on a host. This resource is
I0111 23:46:35.796]      created by clients and scheduled onto hosts.
... skipping 761 lines ...
I0111 23:47:02.604] message:node/127.0.0.1 already uncordoned (dry run)
I0111 23:47:02.604] has:already uncordoned
I0111 23:47:02.702] test-cmd-util.sh:4971: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 23:47:02.787] node/127.0.0.1 labeled
I0111 23:47:02.891] test-cmd-util.sh:4976: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 23:47:02.972] Successful
I0111 23:47:02.972] message:error: cannot specify both a node name and a --selector option
I0111 23:47:02.972] See 'kubectl drain -h' for help and examples.
I0111 23:47:02.972] has:cannot specify both a node name
I0111 23:47:03.049] Successful
I0111 23:47:03.049] message:error: USAGE: cordon NODE [flags]
I0111 23:47:03.049] See 'kubectl cordon -h' for help and examples.
I0111 23:47:03.050] has:error\: USAGE\: cordon NODE
I0111 23:47:03.146] node/127.0.0.1 already uncordoned
I0111 23:47:03.231] Successful
I0111 23:47:03.232] message:error: You must provide one or more resources by argument or filename.
I0111 23:47:03.232] Example resource specifications include:
I0111 23:47:03.232]    '-f rsrc.yaml'
I0111 23:47:03.232]    '--filename=rsrc.json'
I0111 23:47:03.232]    '<resource> <name>'
I0111 23:47:03.232]    '<resource>'
I0111 23:47:03.232] has:must provide one or more resources
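Each error above comes from kubectl cordon/drain argument validation; the triggering invocations look roughly like this (a sketch based on the messages in the log, using node 127.0.0.1 as above):
  kubectl uncordon 127.0.0.1 --dry-run      # 'already uncordoned (dry run)' on a schedulable node
  kubectl drain 127.0.0.1 --selector=app=x  # error: cannot specify both a node name and a --selector option
  kubectl cordon                            # error: USAGE: cordon NODE [flags]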
... skipping 77 lines ...
I0111 23:47:03.776]   kubectl [flags] [options]
I0111 23:47:03.776] 
I0111 23:47:03.776] Use "kubectl <command> --help" for more information about a given command.
I0111 23:47:03.776] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0111 23:47:03.777] has:plugin\s\+Runs a command-line plugin
I0111 23:47:03.850] Successful
I0111 23:47:03.850] message:error: no plugins installed.
I0111 23:47:03.850] has:no plugins installed
I0111 23:47:03.934] Successful
I0111 23:47:03.934] message:Runs a command-line plugin. 
I0111 23:47:03.934] 
I0111 23:47:03.935] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.935] 
I0111 23:47:03.935] Available Commands:
I0111 23:47:03.935]   echo        Echoes for test-cmd
I0111 23:47:03.935]   env         The plugin envs plugin
I0111 23:47:03.935]   error       The tremendous plugin that always fails!
I0111 23:47:03.935]   get         The wonderful new plugin-based get!
I0111 23:47:03.935]   tree        Plugin with a tree of commands
I0111 23:47:03.935] 
I0111 23:47:03.935] Usage:
I0111 23:47:03.935]   kubectl plugin NAME [options]
I0111 23:47:03.935] 
... skipping 5 lines ...
I0111 23:47:03.936] 
I0111 23:47:03.937] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.937] 
I0111 23:47:03.937] Available Commands:
I0111 23:47:03.937]   echo        Echoes for test-cmd
I0111 23:47:03.937]   env         The plugin envs plugin
I0111 23:47:03.937]   error       The tremendous plugin that always fails!
I0111 23:47:03.937]   get         The wonderful new plugin-based get!
I0111 23:47:03.937]   tree        Plugin with a tree of commands
I0111 23:47:03.937] 
I0111 23:47:03.937] Usage:
I0111 23:47:03.937]   kubectl plugin NAME [options]
I0111 23:47:03.938] 
... skipping 5 lines ...
I0111 23:47:03.939] 
I0111 23:47:03.939] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.939] 
I0111 23:47:03.939] Available Commands:
I0111 23:47:03.939]   echo        Echoes for test-cmd
I0111 23:47:03.939]   env         The plugin envs plugin
I0111 23:47:03.939]   error       The tremendous plugin that always fails!
I0111 23:47:03.939]   get         The wonderful new plugin-based get!
I0111 23:47:03.939]   tree        Plugin with a tree of commands
I0111 23:47:03.939] 
I0111 23:47:03.940] Usage:
I0111 23:47:03.940]   kubectl plugin NAME [options]
I0111 23:47:03.940] 
I0111 23:47:03.940] Use "kubectl <command> --help" for more information about a given command.
I0111 23:47:03.940] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0111 23:47:03.940] has:error\s\+The tremendous plugin that always fails!
I0111 23:47:03.940] Successful
I0111 23:47:03.941] message:Runs a command-line plugin. 
I0111 23:47:03.941] 
I0111 23:47:03.941] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.941] 
I0111 23:47:03.941] Available Commands:
I0111 23:47:03.941]   echo        Echoes for test-cmd
I0111 23:47:03.941]   env         The plugin envs plugin
I0111 23:47:03.942]   error       The tremendous plugin that always fails!
I0111 23:47:03.942]   get         The wonderful new plugin-based get!
I0111 23:47:03.942]   tree        Plugin with a tree of commands
I0111 23:47:03.942] 
I0111 23:47:03.942] Usage:
I0111 23:47:03.942]   kubectl plugin NAME [options]
I0111 23:47:03.942] 
... skipping 5 lines ...
I0111 23:47:03.943] 
I0111 23:47:03.943] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.943] 
I0111 23:47:03.944] Available Commands:
I0111 23:47:03.944]   echo        Echoes for test-cmd
I0111 23:47:03.944]   env         The plugin envs plugin
I0111 23:47:03.944]   error       The tremendous plugin that always fails!
I0111 23:47:03.944]   get         The wonderful new plugin-based get!
I0111 23:47:03.944]   tree        Plugin with a tree of commands
I0111 23:47:03.944] 
I0111 23:47:03.944] Usage:
I0111 23:47:03.945]   kubectl plugin NAME [options]
I0111 23:47:03.945] 
... skipping 5 lines ...
I0111 23:47:03.945] 
I0111 23:47:03.946] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:03.946] 
I0111 23:47:03.946] Available Commands:
I0111 23:47:03.946]   echo        Echoes for test-cmd
I0111 23:47:03.946]   env         The plugin envs plugin
I0111 23:47:03.946]   error       The tremendous plugin that always fails!
I0111 23:47:03.946]   get         The wonderful new plugin-based get!
I0111 23:47:03.946]   tree        Plugin with a tree of commands
I0111 23:47:03.947] 
I0111 23:47:03.947] Usage:
I0111 23:47:03.947]   kubectl plugin NAME [options]
I0111 23:47:03.947] 
... skipping 5 lines ...
I0111 23:47:04.029] 
I0111 23:47:04.029] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:04.029] 
I0111 23:47:04.029] Available Commands:
I0111 23:47:04.030]   echo        Echoes for test-cmd
I0111 23:47:04.030]   env         The plugin envs plugin
I0111 23:47:04.030]   error       The tremendous plugin that always fails!
I0111 23:47:04.030]   get         The wonderful new plugin-based get!
I0111 23:47:04.030]   hello       The hello plugin
I0111 23:47:04.030]   tree        Plugin with a tree of commands
I0111 23:47:04.030] 
I0111 23:47:04.030] Usage:
I0111 23:47:04.031]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0111 23:47:04.031] 
I0111 23:47:04.032] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:04.032] 
I0111 23:47:04.032] Available Commands:
I0111 23:47:04.032]   echo        Echoes for test-cmd
I0111 23:47:04.032]   env         The plugin envs plugin
I0111 23:47:04.032]   error       The tremendous plugin that always fails!
I0111 23:47:04.033]   get         The wonderful new plugin-based get!
I0111 23:47:04.033]   hello       The hello plugin
I0111 23:47:04.033]   tree        Plugin with a tree of commands
I0111 23:47:04.033] 
I0111 23:47:04.033] Usage:
I0111 23:47:04.033]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0111 23:47:04.034] 
I0111 23:47:04.034] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:04.034] 
I0111 23:47:04.034] Available Commands:
I0111 23:47:04.034]   echo        Echoes for test-cmd
I0111 23:47:04.035]   env         The plugin envs plugin
I0111 23:47:04.035]   error       The tremendous plugin that always fails!
I0111 23:47:04.035]   get         The wonderful new plugin-based get!
I0111 23:47:04.035]   hello       The hello plugin
I0111 23:47:04.035]   tree        Plugin with a tree of commands
I0111 23:47:04.035] 
I0111 23:47:04.035] Usage:
I0111 23:47:04.035]   kubectl plugin NAME [options]
I0111 23:47:04.035] 
I0111 23:47:04.036] Use "kubectl <command> --help" for more information about a given command.
I0111 23:47:04.036] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0111 23:47:04.036] has:error\s\+The tremendous plugin that always fails!
I0111 23:47:04.036] Successful
I0111 23:47:04.036] message:Runs a command-line plugin. 
I0111 23:47:04.036] 
I0111 23:47:04.037] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:04.037] 
I0111 23:47:04.037] Available Commands:
I0111 23:47:04.037]   echo        Echoes for test-cmd
I0111 23:47:04.037]   env         The plugin envs plugin
I0111 23:47:04.037]   error       The tremendous plugin that always fails!
I0111 23:47:04.037]   get         The wonderful new plugin-based get!
I0111 23:47:04.037]   hello       The hello plugin
I0111 23:47:04.038]   tree        Plugin with a tree of commands
I0111 23:47:04.038] 
I0111 23:47:04.038] Usage:
I0111 23:47:04.038]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0111 23:47:04.039] 
I0111 23:47:04.039] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0111 23:47:04.039] 
I0111 23:47:04.039] Available Commands:
I0111 23:47:04.039]   echo        Echoes for test-cmd
I0111 23:47:04.039]   env         The plugin envs plugin
I0111 23:47:04.039]   error       The tremendous plugin that always fails!
I0111 23:47:04.040]   get         The wonderful new plugin-based get!
I0111 23:47:04.040]   hello       The hello plugin
I0111 23:47:04.040]   tree        Plugin with a tree of commands
I0111 23:47:04.040] 
I0111 23:47:04.040] Usage:
I0111 23:47:04.040]   kubectl plugin NAME [options]
... skipping 159 lines ...
I0111 23:47:04.288] #######
I0111 23:47:04.288] has:#hello#
I0111 23:47:04.399] Successful
I0111 23:47:04.399] message:This plugin works!
I0111 23:47:04.399] has:This plugin works!
I0111 23:47:04.473] Successful
I0111 23:47:04.473] message:error: unknown command "hello"
I0111 23:47:04.473] See 'kubectl plugin -h' for help and examples.
I0111 23:47:04.474] has:unknown command
I0111 23:47:04.583] Successful
I0111 23:47:04.583] message:error: exit status 1
I0111 23:47:04.583] has:error: exit status 1
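The plugin checks above invoke the test plugins listed in the help output, including the one literally named 'error' that always exits nonzero (a sketch of the 1.11-era alpha plugin runner exercised here):
  kubectl plugin hello   # 'This plugin works!' once the hello plugin is on the plugin path
  kubectl plugin hello   # error: unknown command "hello" when it is not installed
  kubectl plugin error   # the always-failing plugin surfaces as: error: exit status 1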
I0111 23:47:04.665] Successful
I0111 23:47:04.666] message:Plugin with a tree of commands
I0111 23:47:04.666] 
I0111 23:47:04.666] Available Commands:
I0111 23:47:04.666]   child1      The first child of a tree
I0111 23:47:04.666]   child2      The second child of a tree
... skipping 467 lines ...
I0111 23:47:05.123] 
I0111 23:47:05.125] +++ Running case: test-cmd.run_impersonation_tests 
I0111 23:47:05.127] +++ working dir: /go/src/k8s.io/kubernetes
I0111 23:47:05.129] +++ command: run_impersonation_tests
I0111 23:47:05.138] +++ [0111 23:47:05] Testing impersonation
I0111 23:47:05.214] Successful
I0111 23:47:05.214] message:error: requesting groups or user-extra for  without impersonating a user
I0111 23:47:05.215] has:without impersonating a user
I0111 23:47:05.378] certificatesigningrequest.certificates.k8s.io/foo created
I0111 23:47:05.487] test-cmd-util.sh:5101: Successful get csr/foo {{.spec.username}}: user1
I0111 23:47:05.582] test-cmd-util.sh:5102: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 23:47:05.673] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 23:47:05.844] certificatesigningrequest.certificates.k8s.io/foo created
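The impersonation checks drive kubectl's --as/--as-group flags: the leading error is what kubectl reports when group impersonation is requested without a user, and the CSR assertions read back the identity the server recorded. A sketch (csr.yaml is a hypothetical stand-in for the test's fixture):
  kubectl get pods --as-group=system:masters               # error: requesting groups or user-extra for  without impersonating a user
  kubectl create -f csr.yaml --as=user1                    # csr .spec.username records user1
  kubectl create -f csr.yaml --as=user1 --as-group=group2  # .spec.groups records the impersonated groups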
... skipping 31 lines ...
I0111 23:48:08.187] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0111 23:48:08.191] +++ [0111 23:48:08] Running integration test cases
I0111 23:48:14.757] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1beta1,apps/v1beta2,apps/v1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0111 23:48:15.634] +++ [0111 23:48:15] Running tests without code coverage
I0111 23:56:34.950] ok  	k8s.io/kubernetes/test/integration/apiserver	39.728s
I0111 23:56:34.950] ok  	k8s.io/kubernetes/test/integration/auth	87.707s
I0111 23:56:34.950] FAIL	k8s.io/kubernetes/test/integration/client	33.682s
I0111 23:56:34.950] ok  	k8s.io/kubernetes/test/integration/configmap	8.433s
I0111 23:56:34.950] ok  	k8s.io/kubernetes/test/integration/daemonset	394.118s
I0111 23:56:34.951] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	8.463s
I0111 23:56:34.951] ok  	k8s.io/kubernetes/test/integration/deployment	220.409s
I0111 23:56:34.951] [restful] 2019/01/11 23:50:14 log.go:33: [restful/swagger] listing is available at https://172.17.0.2:44743/swaggerapi
I0111 23:56:34.951] [restful] 2019/01/11 23:50:14 log.go:33: [restful/swagger] https://172.17.0.2:44743/swaggerui/ is mapped to folder /swagger-ui/
... skipping 166 lines ...
I0111 23:57:18.504] [restful] 2019/01/11 23:53:40 log.go:33: [restful/swagger] https://172.17.0.2:43353/swaggerui/ is mapped to folder /swagger-ui/
I0111 23:57:18.504] ok  	k8s.io/kubernetes/test/integration/tls	14.124s
I0111 23:57:18.504] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.693s
I0111 23:57:18.504] ok  	k8s.io/kubernetes/test/integration/volume	89.183s
I0111 23:57:18.600] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	100.693s
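Of the packages above, only test/integration/client failed; to iterate on just that package locally, the usual pattern is (a sketch, assuming the repo's standard make targets):
  make test-integration WHAT=./test/integration/client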
I0111 23:57:19.355] +++ [0111 23:57:19] Saved JUnit XML test report to /workspace/artifacts/junit_cae8d27844a37937152775ec7fb068d1755ac188_20190111-234815.xml
I0111 23:57:19.358] Makefile:184: recipe for target 'test' failed
I0111 23:57:19.371] +++ [0111 23:57:19] Cleaning up etcd
W0111 23:57:19.471] make[1]: *** [test] Error 1
W0111 23:57:19.472] !!! [0111 23:57:19] Call tree:
W0111 23:57:19.472] !!! [0111 23:57:19]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W0111 23:57:19.550] make: *** [test-integration] Error 1
I0111 23:57:19.651] +++ [0111 23:57:19] Integration test cleanup complete
I0111 23:57:19.651] Makefile:203: recipe for target 'test-integration' failed
W0111 23:57:22.292] Traceback (most recent call last):
W0111 23:57:22.293]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 23:57:22.332]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 23:57:22.333]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 23:57:22.333]     check(*cmd)
W0111 23:57:22.333]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 23:57:22.333]     subprocess.check_call(cmd)
W0111 23:57:22.333]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 23:57:22.364]     raise CalledProcessError(retcode, cmd)
W0111 23:57:22.365] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=release-1.11', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.11-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 23:57:22.371] Command failed
I0111 23:57:22.371] process 683 exited with code 1 after 25.6m
E0111 23:57:22.372] FAIL: pull-kubernetes-integration
I0111 23:57:22.372] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 23:57:23.189] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 23:57:23.268] process 122855 exited with code 0 after 0.0m
I0111 23:57:23.269] Call:  gcloud config get-value account
I0111 23:57:23.687] process 122867 exited with code 0 after 0.0m
I0111 23:57:23.688] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 23:57:23.688] Upload result and artifacts...
I0111 23:57:23.688] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41087
I0111 23:57:23.689] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41087/artifacts
W0111 23:58:24.800] INFO 0111 23:58:24.799684 retry_util.py] Retrying request, attempt #1...
W0111 23:58:26.567] CommandException: One or more URLs matched no objects.
E0111 23:58:26.716] Command failed
I0111 23:58:26.716] process 122879 exited with code 1 after 1.1m
W0111 23:58:26.716] Remote dir gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41087/artifacts not exist yet
I0111 23:58:26.717] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41087/artifacts
I0111 23:58:29.154] process 123021 exited with code 0 after 0.0m
W0111 23:58:29.155] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 23:58:29.155] metadata not found or invalid, init with empty metadata
... skipping 25 lines ...