PR: bsalamat: Automated cherry pick of #71722 upstream release 1.13
Result: FAILURE
Tests: 1 failed / 577 succeeded
Started: 2018-12-07 02:44
Elapsed: 26m58s
Version: v1.13.1-beta.0.12+d2d6ac07e4ea8e
Builder: gke-prow-default-pool-3c8994a8-tjxl
Refs: release-1.13:ad728da0, 71824:612b3c9b
pod: f0bf753d-f9c9-11e8-aa74-0a580a6c01e0
infra-commit: d6f7bb8bf
repo: k8s.io/kubernetes
repo-commit: d2d6ac07e4ea8e2e70307c473a627990d4b50c51
repos: k8s.io/kubernetes: release-1.13:ad728da0e6d419f5ce8e3b05998382bce48f46bc, 71824:612b3c9b95f35da0ff1079c8f2b49d7457e6d9aa

Test Failures


k8s.io/kubernetes/test/integration/ttlcontroller TestTTLAnnotations 7m20s

go test -v k8s.io/kubernetes/test/integration/ttlcontroller -run TestTTLAnnotations$
I1207 03:04:07.858051  122654 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I1207 03:04:07.858124  122654 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I1207 03:04:07.858151  122654 master.go:228] Using reconciler: 
W1207 03:04:08.058787  122654 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W1207 03:04:08.073710  122654 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1207 03:04:08.074437  122654 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1207 03:04:08.076964  122654 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1207 03:04:08.128898  122654 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
E1207 03:04:08.160832  122654 controller.go:155] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
E1207 03:04:08.167851  122654 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I1207 03:04:09.140549  122654 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1207 03:04:09.143279  122654 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1207 03:04:09.143432  122654 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1207 03:04:09.149921  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1207 03:04:09.152999  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1207 03:04:09.156190  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1207 03:04:09.159046  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1207 03:04:09.162399  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1207 03:04:09.174118  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1207 03:04:09.183566  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1207 03:04:09.194337  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1207 03:04:09.198803  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1207 03:04:09.202764  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1207 03:04:09.207028  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I1207 03:04:09.210913  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1207 03:04:09.214182  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1207 03:04:09.217804  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1207 03:04:09.220569  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1207 03:04:09.223341  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1207 03:04:09.226427  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1207 03:04:09.230582  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 03:04:09.235026  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1207 03:04:09.239869  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1207 03:04:09.243500  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1207 03:04:09.251438  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1207 03:04:09.255575  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 03:04:09.259445  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1207 03:04:09.263287  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1207 03:04:09.269108  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1207 03:04:09.316531  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1207 03:04:09.322564  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 03:04:09.326968  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 03:04:09.331089  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 03:04:09.334948  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 03:04:09.338177  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 03:04:09.340994  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 03:04:09.343699  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 03:04:09.347835  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 03:04:09.352311  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 03:04:09.357999  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 03:04:09.361380  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1207 03:04:09.364196  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 03:04:09.367699  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1207 03:04:09.375281  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 03:04:09.378981  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 03:04:09.382204  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 03:04:09.385681  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 03:04:09.389845  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 03:04:09.393039  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1207 03:04:09.395790  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 03:04:09.398665  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1207 03:04:09.420959  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 03:04:09.425703  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 03:04:09.430004  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 03:04:09.433734  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 03:04:09.436422  122654 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 03:04:09.458071  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1207 03:04:09.493652  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1207 03:04:09.534233  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1207 03:04:09.573575  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1207 03:04:09.614104  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1207 03:04:09.655131  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1207 03:04:09.695548  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1207 03:04:09.733941  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I1207 03:04:09.773585  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1207 03:04:09.813533  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1207 03:04:09.854215  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1207 03:04:09.893850  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1207 03:04:09.934209  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1207 03:04:09.973605  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1207 03:04:10.013443  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1207 03:04:10.054128  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1207 03:04:10.093998  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1207 03:04:10.134388  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1207 03:04:10.173645  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1207 03:04:10.219130  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1207 03:04:10.253549  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1207 03:04:10.293378  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1207 03:04:10.335084  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1207 03:04:10.374177  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1207 03:04:10.413789  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1207 03:04:10.456036  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1207 03:04:10.494018  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1207 03:04:10.535114  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1207 03:04:10.573571  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1207 03:04:10.614083  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1207 03:04:10.655970  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1207 03:04:10.693760  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1207 03:04:10.734074  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1207 03:04:10.773606  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1207 03:04:10.813806  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1207 03:04:10.854900  122654 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1207 03:04:10.894608  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1207 03:04:10.933508  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 03:04:10.975183  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 03:04:11.013950  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1207 03:04:11.054526  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 03:04:11.093514  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 03:04:11.139180  122654 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 03:04:11.174629  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1207 03:04:11.213829  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1207 03:04:11.253474  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1207 03:04:11.293889  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1207 03:04:11.333692  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1207 03:04:11.374154  122654 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
W1207 03:04:11.433364  122654 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I1207 03:04:11.434388  122654 ttl_controller.go:116] Starting TTL controller
I1207 03:04:11.434410  122654 controller_utils.go:1027] Waiting for caches to sync for TTL controller
I1207 03:04:12.842148  122654 trace.go:76] Trace[1288845802]: "Create /api/v1/nodes" (started: 2018-12-07 03:04:11.45911229 +0000 UTC m=+3.846250966) (total time: 1.382969525s):
Trace[1288845802]: [1.382859513s] [1.382789156s] Object stored in database
I1207 03:04:12.843145  122654 trace.go:76] Trace[56403981]: "Create /api/v1/nodes" (started: 2018-12-07 03:04:11.458338114 +0000 UTC m=+3.845476789) (total time: 1.384764072s):
Trace[56403981]: [1.38470265s] [1.384608277s] Object stored in database
I1207 03:04:12.843544  122654 trace.go:76] Trace[1816412079]: "Create /api/v1/nodes" (started: 2018-12-07 03:04:11.458785706 +0000 UTC m=+3.845924384) (total time: 1.384730422s):
Trace[1816412079]: [1.384644884s] [1.384574035s] Object stored in database
I1207 03:04:12.843809  122654 trace.go:76] Trace[536212066]: "Create /api/v1/nodes" (started: 2018-12-07 03:04:11.460483644 +0000 UTC m=+3.847622322) (total time: 1.383302027s):
Trace[536212066]: [1.383250527s] [1.383165954s] Object stored in database
E1207 03:04:18.173561  122654 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
E1207 03:04:28.179124  122654 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
E1207 03:04:38.184807  122654 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I1207 03:04:42.867812  122654 controller.go:170] Shutting down kubernetes service endpoint reconciler
2018/12/07 03:04:47 httptest.Server blocked in Close after 5 seconds, waiting for connections:
  *net.TCPConn 0xc0044f2468 127.0.0.1:42244 in state active
				from junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-025834.xml



Passed tests: 577. Skipped tests: 4.

Error lines from build-log.txt

... skipping 10 lines ...
I1207 02:44:51.232] process 232 exited with code 0 after 0.1m
I1207 02:44:51.232] Call:  gcloud config get-value account
I1207 02:44:51.589] process 245 exited with code 0 after 0.0m
I1207 02:44:51.589] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 02:44:51.589] Call:  kubectl get -oyaml pods/f0bf753d-f9c9-11e8-aa74-0a580a6c01e0
W1207 02:44:53.223] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 02:44:53.225] Command failed
I1207 02:44:53.225] process 258 exited with code 1 after 0.0m
E1207 02:44:53.225] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/f0bf753d-f9c9-11e8-aa74-0a580a6c01e0']' returned non-zero exit status 1
I1207 02:44:53.226] Root: /workspace
I1207 02:44:53.226] cd to /workspace
I1207 02:44:53.226] Checkout: /workspace/k8s.io/kubernetes release-1.13:ad728da0e6d419f5ce8e3b05998382bce48f46bc,71824:612b3c9b95f35da0ff1079c8f2b49d7457e6d9aa to /workspace/k8s.io/kubernetes
I1207 02:44:53.226] Call:  git init k8s.io/kubernetes
... skipping 452 lines ...
W1207 02:53:42.875] I1207 02:53:42.875034   55701 plugins.go:103] No cloud provider specified.
W1207 02:53:42.875] W1207 02:53:42.875091   55701 controllermanager.go:536] "serviceaccount-token" is disabled because there is no private key
W1207 02:53:42.881] I1207 02:53:42.881197   55701 controllermanager.go:516] Started "namespace"
W1207 02:53:42.882] I1207 02:53:42.881388   55701 namespace_controller.go:186] Starting namespace controller
W1207 02:53:42.882] I1207 02:53:42.881403   55701 controller_utils.go:1027] Waiting for caches to sync for namespace controller
W1207 02:53:42.882] I1207 02:53:42.882166   55701 controllermanager.go:516] Started "serviceaccount"
W1207 02:53:42.883] W1207 02:53:42.882624   55701 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W1207 02:53:42.883] I1207 02:53:42.883004   55701 serviceaccounts_controller.go:115] Starting service account controller
W1207 02:53:42.884] I1207 02:53:42.883032   55701 controller_utils.go:1027] Waiting for caches to sync for service account controller
W1207 02:53:42.884] I1207 02:53:42.883623   55701 garbagecollector.go:133] Starting garbage collector controller
W1207 02:53:42.884] I1207 02:53:42.883654   55701 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 02:53:42.884] I1207 02:53:42.883692   55701 controllermanager.go:516] Started "garbagecollector"
W1207 02:53:42.884] I1207 02:53:42.883702   55701 graph_builder.go:308] GraphBuilder running
W1207 02:53:42.885] I1207 02:53:42.884376   55701 controllermanager.go:516] Started "job"
W1207 02:53:42.885] I1207 02:53:42.884668   55701 job_controller.go:143] Starting job controller
W1207 02:53:42.885] I1207 02:53:42.884710   55701 controller_utils.go:1027] Waiting for caches to sync for job controller
W1207 02:53:42.885] I1207 02:53:42.884937   55701 controllermanager.go:516] Started "replicaset"
W1207 02:53:42.885] W1207 02:53:42.884967   55701 controllermanager.go:508] Skipping "csrsigning"
W1207 02:53:42.885] I1207 02:53:42.885436   55701 replica_set.go:182] Starting replicaset controller
W1207 02:53:42.885] I1207 02:53:42.885454   55701 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
W1207 02:53:42.886] E1207 02:53:42.886355   55701 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1207 02:53:42.887] W1207 02:53:42.886633   55701 controllermanager.go:508] Skipping "service"
W1207 02:53:42.893] I1207 02:53:42.892854   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W1207 02:53:42.893] I1207 02:53:42.892951   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W1207 02:53:42.893] I1207 02:53:42.892984   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W1207 02:53:42.893] I1207 02:53:42.893008   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W1207 02:53:42.894] I1207 02:53:42.893039   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
... skipping 11 lines ...
W1207 02:53:42.896] I1207 02:53:42.893663   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W1207 02:53:42.896] I1207 02:53:42.893703   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W1207 02:53:42.896] I1207 02:53:42.893745   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W1207 02:53:42.897] I1207 02:53:42.893779   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W1207 02:53:42.897] I1207 02:53:42.893815   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W1207 02:53:42.897] I1207 02:53:42.893873   55701 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
W1207 02:53:42.897] E1207 02:53:42.893907   55701 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 02:53:42.897] I1207 02:53:42.893924   55701 controllermanager.go:516] Started "resourcequota"
W1207 02:53:42.897] I1207 02:53:42.894022   55701 resource_quota_controller.go:276] Starting resource quota controller
W1207 02:53:42.898] I1207 02:53:42.894120   55701 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W1207 02:53:42.898] I1207 02:53:42.894219   55701 resource_quota_monitor.go:301] QuotaMonitor running
W1207 02:53:42.898] I1207 02:53:42.894877   55701 controllermanager.go:516] Started "persistentvolume-binder"
W1207 02:53:42.898] I1207 02:53:42.894896   55701 core.go:151] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
... skipping 76 lines ...
W1207 02:53:43.001] I1207 02:53:43.000750   55701 disruption.go:296] Sending events to api server.
W1207 02:53:43.003] I1207 02:53:43.003349   55701 controller_utils.go:1034] Caches are synced for GC controller
W1207 02:53:43.005] I1207 02:53:43.005085   55701 controller_utils.go:1034] Caches are synced for ReplicationController controller
W1207 02:53:43.005] I1207 02:53:43.005459   55701 controller_utils.go:1034] Caches are synced for PVC protection controller
W1207 02:53:43.006] I1207 02:53:43.006116   55701 controller_utils.go:1034] Caches are synced for stateful set controller
W1207 02:53:43.007] I1207 02:53:43.006899   55701 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W1207 02:53:43.014] E1207 02:53:43.013694   55701 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1207 02:53:43.014] E1207 02:53:43.014192   55701 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1207 02:53:43.016] E1207 02:53:43.016402   55701 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1207 02:53:43.020] E1207 02:53:43.020228   55701 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1207 02:53:43.096] I1207 02:53:43.095279   55701 controller_utils.go:1034] Caches are synced for persistent volume controller
W1207 02:53:43.101] I1207 02:53:43.101202   55701 controller_utils.go:1034] Caches are synced for attach detach controller
W1207 02:53:43.103] I1207 02:53:43.102805   55701 controller_utils.go:1034] Caches are synced for expand controller
W1207 02:53:43.104] I1207 02:53:43.104351   55701 controller_utils.go:1034] Caches are synced for PV protection controller
W1207 02:53:43.203] I1207 02:53:43.203220   55701 controller_utils.go:1034] Caches are synced for endpoint controller
W1207 02:53:43.204] I1207 02:53:43.204009   55701 controller_utils.go:1034] Caches are synced for HPA controller
W1207 02:53:43.295] I1207 02:53:43.294468   55701 controller_utils.go:1034] Caches are synced for resource quota controller
W1207 02:53:43.301] I1207 02:53:43.300654   55701 controller_utils.go:1034] Caches are synced for daemon sets controller
W1207 02:53:43.375] W1207 02:53:43.375028   55701 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I1207 02:53:43.476] +++ [1207 02:53:43] On try 3, controller-manager: ok
I1207 02:53:43.476] node/127.0.0.1 created
I1207 02:53:43.476] +++ [1207 02:53:43] Checking kubectl version
I1207 02:53:43.476] Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.1-beta.0.12+d2d6ac07e4ea8e", GitCommit:"d2d6ac07e4ea8e2e70307c473a627990d4b50c51", GitTreeState:"clean", BuildDate:"2018-12-07T02:51:57Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
I1207 02:53:43.477] Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.1-beta.0.12+d2d6ac07e4ea8e", GitCommit:"d2d6ac07e4ea8e2e70307c473a627990d4b50c51", GitTreeState:"clean", BuildDate:"2018-12-07T02:52:14Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
W1207 02:53:43.735] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
... skipping 23 lines ...
I1207 02:53:44.306] Successful: --output json has correct client info
I1207 02:53:44.312] Successful: --output json has correct server info
I1207 02:53:44.315] +++ [1207 02:53:44] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
W1207 02:53:44.415] I1207 02:53:44.372950   55701 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 02:53:44.416] I1207 02:53:44.383825   55701 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 02:53:44.416] I1207 02:53:44.383850   55701 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1207 02:53:44.416] E1207 02:53:44.390040   55701 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 02:53:44.473] I1207 02:53:44.473292   55701 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 02:53:44.574] Successful: --client --output json has correct client info
I1207 02:53:44.574] Successful: --client --output json has no server info
I1207 02:53:44.574] +++ [1207 02:53:44] Testing kubectl version: compare json output using additional --short flag
I1207 02:53:44.592] Successful: --short --output client json info is equal to non short result
I1207 02:53:44.597] Successful: --short --output server json info is equal to non short result
... skipping 46 lines ...
I1207 02:53:47.291] +++ working dir: /go/src/k8s.io/kubernetes
I1207 02:53:47.293] +++ command: run_RESTMapper_evaluation_tests
I1207 02:53:47.305] +++ [1207 02:53:47] Creating namespace namespace-1544151227-16539
I1207 02:53:47.374] namespace/namespace-1544151227-16539 created
I1207 02:53:47.437] Context "test" modified.
I1207 02:53:47.443] +++ [1207 02:53:47] Testing RESTMapper
I1207 02:53:47.563] +++ [1207 02:53:47] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1207 02:53:47.579] +++ exit code: 0
I1207 02:53:47.686] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1207 02:53:47.686] bindings                                                                      true         Binding
I1207 02:53:47.686] componentstatuses                 cs                                          false        ComponentStatus
I1207 02:53:47.686] configmaps                        cm                                          true         ConfigMap
I1207 02:53:47.687] endpoints                         ep                                          true         Endpoints
... skipping 591 lines ...
I1207 02:54:05.086] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:54:05.178] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:54:05.265] pod "valid-pod" force deleted
I1207 02:54:05.354] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I1207 02:54:05.445] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I1207 02:54:05.549] namespace/test-kubectl-describe-pod created
W1207 02:54:05.649] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1207 02:54:05.649] error: setting 'all' parameter but found a non empty selector. 
W1207 02:54:05.650] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 02:54:05.750] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I1207 02:54:05.779] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:05.851] secret/test-secret created
I1207 02:54:05.938] core.sh:223: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret
I1207 02:54:06.017] core.sh:224: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type
... skipping 8 lines ...
I1207 02:54:06.681] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1207 02:54:06.746] poddisruptionbudget.policy/test-pdb-4 created
I1207 02:54:06.837] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1207 02:54:06.981] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:07.139] pod/env-test-pod created
W1207 02:54:07.240] I1207 02:54:06.303737   52349 controller.go:608] quota admission added evaluator for: poddisruptionbudgets.policy
W1207 02:54:07.240] error: min-available and max-unavailable cannot be both specified
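The `min-available and max-unavailable cannot be both specified` error above comes from kubectl's `create pdb` flag validation: a PodDisruptionBudget may set at most one of the two fields. A minimal sketch of that mutual-exclusion check (hypothetical function name, not the real kubectl code):

```go
package main

import (
	"errors"
	"fmt"
)

// validatePDB is an illustrative stand-in for kubectl's flag validation:
// a PodDisruptionBudget may specify minAvailable or maxUnavailable, not both.
func validatePDB(minAvailable, maxUnavailable string) error {
	if minAvailable != "" && maxUnavailable != "" {
		return errors.New("min-available and max-unavailable cannot be both specified")
	}
	return nil
}

func main() {
	fmt.Println(validatePDB("2", ""))    // allowed: prints <nil>
	fmt.Println(validatePDB("", "50%"))  // allowed: prints <nil>
	fmt.Println(validatePDB("2", "50%")) // rejected with the error above
}
```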
I1207 02:54:07.340] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I1207 02:54:07.341] Name:               env-test-pod
I1207 02:54:07.341] Namespace:          test-kubectl-describe-pod
I1207 02:54:07.341] Priority:           0
I1207 02:54:07.341] PriorityClassName:  <none>
I1207 02:54:07.341] Node:               <none>
... skipping 145 lines ...
W1207 02:54:18.433] I1207 02:54:17.895768   55701 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W1207 02:54:18.433] I1207 02:54:18.001135   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151253-3097", Name:"modified", UID:"63a0ba9c-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-4jkst
I1207 02:54:18.581] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:18.719] pod/valid-pod created
I1207 02:54:18.814] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:54:18.958] Successful
I1207 02:54:18.958] message:Error from server: cannot restore map from string
I1207 02:54:18.958] has:cannot restore map from string
I1207 02:54:19.039] Successful
I1207 02:54:19.039] message:pod/valid-pod patched (no change)
I1207 02:54:19.040] has:patched (no change)
I1207 02:54:19.115] pod/valid-pod patched
I1207 02:54:19.201] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 4 lines ...
I1207 02:54:19.603] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 02:54:19.674] pod/valid-pod patched
I1207 02:54:19.772] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1207 02:54:19.856] pod/valid-pod patched
I1207 02:54:19.947] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1207 02:54:20.111] pod/valid-pod patched
W1207 02:54:20.212] E1207 02:54:18.951007   52349 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I1207 02:54:20.313] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 02:54:20.440] +++ [1207 02:54:20] "kubectl patch with resourceVersion 487" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I1207 02:54:20.683] pod "valid-pod" deleted
I1207 02:54:20.695] pod/valid-pod replaced
I1207 02:54:20.795] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1207 02:54:20.950] Successful
I1207 02:54:20.950] message:error: --grace-period must have --force specified
I1207 02:54:20.951] has:\-\-grace-period must have \-\-force specified
I1207 02:54:21.089] Successful
I1207 02:54:21.089] message:error: --timeout must have --force specified
I1207 02:54:21.089] has:\-\-timeout must have \-\-force specified
W1207 02:54:21.235] W1207 02:54:21.235384   55701 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1207 02:54:21.336] node/node-v1-test created
I1207 02:54:21.412] node/node-v1-test replaced
I1207 02:54:21.514] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1207 02:54:21.590] node "node-v1-test" deleted
I1207 02:54:21.707] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1207 02:54:22.089] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 17 lines ...
I1207 02:54:23.994] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1207 02:54:24.104] pod/valid-pod labeled
W1207 02:54:24.205] Edit cancelled, no changes made.
W1207 02:54:24.205] Edit cancelled, no changes made.
W1207 02:54:24.206] Edit cancelled, no changes made.
W1207 02:54:24.206] Edit cancelled, no changes made.
W1207 02:54:24.206] error: 'name' already has a value (valid-pod), and --overwrite is false
I1207 02:54:24.306] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
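The `'name' already has a value (valid-pod), and --overwrite is false` error above is `kubectl label` refusing to change an existing label without `--overwrite`; the next run (with `--overwrite`) succeeds and the check at core.sh:597 sees the new value. A minimal sketch of that guard (hypothetical function name, not kubectl's actual code):

```go
package main

import "fmt"

// setLabel is an illustrative stand-in for kubectl's label logic: changing an
// existing key to a different value is rejected unless overwrite is set.
func setLabel(labels map[string]string, key, value string, overwrite bool) error {
	if cur, ok := labels[key]; ok && cur != value && !overwrite {
		return fmt.Errorf("'%s' already has a value (%s), and --overwrite is false", key, cur)
	}
	labels[key] = value
	return nil
}

func main() {
	labels := map[string]string{"name": "valid-pod"}

	// Without --overwrite the relabel is rejected and the old value stays.
	fmt.Println(setLabel(labels, "name", "valid-pod-super-sayan", false))

	// With --overwrite it succeeds.
	fmt.Println(setLabel(labels, "name", "valid-pod-super-sayan", true)) // <nil>
	fmt.Println(labels["name"])                                         // valid-pod-super-sayan
}
```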
I1207 02:54:24.349] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:54:24.451] pod "valid-pod" force deleted
W1207 02:54:24.552] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 02:54:24.653] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:24.654] +++ [1207 02:54:24] Creating namespace namespace-1544151264-19255
... skipping 82 lines ...
I1207 02:54:30.860] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1207 02:54:30.861] +++ working dir: /go/src/k8s.io/kubernetes
I1207 02:54:30.863] +++ command: run_kubectl_create_error_tests
I1207 02:54:30.872] +++ [1207 02:54:30] Creating namespace namespace-1544151270-18637
I1207 02:54:30.932] namespace/namespace-1544151270-18637 created
I1207 02:54:30.990] Context "test" modified.
I1207 02:54:30.995] +++ [1207 02:54:30] Testing kubectl create with error
W1207 02:54:31.096] Error: required flag(s) "filename" not set
W1207 02:54:31.096] 
W1207 02:54:31.096] 
W1207 02:54:31.096] Examples:
W1207 02:54:31.096]   # Create a pod using the data in pod.json.
W1207 02:54:31.097]   kubectl create -f ./pod.json
W1207 02:54:31.097]   
... skipping 38 lines ...
W1207 02:54:31.102]   kubectl create -f FILENAME [options]
W1207 02:54:31.102] 
W1207 02:54:31.102] Use "kubectl <command> --help" for more information about a given command.
W1207 02:54:31.102] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1207 02:54:31.102] 
W1207 02:54:31.102] required flag(s) "filename" not set
I1207 02:54:31.203] +++ [1207 02:54:31] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1207 02:54:31.303] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 02:54:31.304] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 02:54:31.404] +++ exit code: 0
I1207 02:54:31.404] Recording: run_kubectl_apply_tests
I1207 02:54:31.404] Running command: run_kubectl_apply_tests
I1207 02:54:31.405] 
... skipping 13 lines ...
I1207 02:54:32.235] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I1207 02:54:33.010] deployment.extensions "test-deployment-retainkeys" deleted
I1207 02:54:33.088] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:33.223] pod/selector-test-pod created
I1207 02:54:33.305] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 02:54:33.372] Successful
I1207 02:54:33.372] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 02:54:33.372] has:pods "selector-test-pod-dont-apply" not found
I1207 02:54:33.435] pod "selector-test-pod" deleted
I1207 02:54:33.510] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:33.705] pod/test-pod created (server dry run)
I1207 02:54:33.787] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:33.924] pod/test-pod created
... skipping 7 lines ...
W1207 02:54:34.028] I1207 02:54:32.670971   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151271-21506", Name:"test-deployment-retainkeys-7495cff5f", UID:"6c5f6cdb-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-hnj6t
I1207 02:54:34.129] pod/test-pod configured (server dry run)
I1207 02:54:34.145] apply.sh:91: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I1207 02:54:34.212] (Bpod "test-pod" deleted
I1207 02:54:34.405] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1207 02:54:34.630] I1207 02:54:34.630039   52349 controller.go:608] quota admission added evaluator for: resources.mygroup.example.com
W1207 02:54:34.706] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1207 02:54:34.807] kind.mygroup.example.com/myobj created (server dry run)
I1207 02:54:34.807] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1207 02:54:34.860] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:35.001] pod/a created
I1207 02:54:36.487] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I1207 02:54:36.557] Successful
I1207 02:54:36.557] message:Error from server (NotFound): pods "b" not found
I1207 02:54:36.557] has:pods "b" not found
I1207 02:54:36.699] pod/b created
I1207 02:54:36.709] pod/a pruned
I1207 02:54:38.389] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I1207 02:54:38.465] Successful
I1207 02:54:38.486] message:Error from server (NotFound): pods "a" not found
I1207 02:54:38.487] has:pods "a" not found
I1207 02:54:38.535] pod "b" deleted
I1207 02:54:38.622] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:38.774] pod/a created
I1207 02:54:38.865] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I1207 02:54:38.945] Successful
I1207 02:54:38.945] message:Error from server (NotFound): pods "b" not found
I1207 02:54:38.945] has:pods "b" not found
I1207 02:54:39.087] pod/b created
I1207 02:54:39.173] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I1207 02:54:39.251] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I1207 02:54:39.318] pod "a" deleted
I1207 02:54:39.322] pod "b" deleted
I1207 02:54:39.472] Successful
I1207 02:54:39.472] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector.
I1207 02:54:39.472] has:all resources selected for prune without explicitly passing --all
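The `all resources selected for prune without explicitly passing --all` error above is a safety guard in `kubectl apply --prune`: without a label selector to narrow the scope, prune could delete every unmatched object, so it must be widened explicitly with `--all`. A minimal sketch of that guard (hypothetical function name, not kubectl's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// validatePruneScope is an illustrative stand-in for kubectl's prune guard:
// --prune runs only if scoped by a label selector or explicitly opened up
// with --all.
func validatePruneScope(all bool, selector string) error {
	if !all && selector == "" {
		return errors.New("all resources selected for prune without explicitly passing --all")
	}
	return nil
}

func main() {
	fmt.Println(validatePruneScope(false, ""))       // rejected
	fmt.Println(validatePruneScope(false, "name=a")) // allowed: prints <nil>
	fmt.Println(validatePruneScope(true, ""))        // allowed: prints <nil>
}
```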
I1207 02:54:39.611] pod/a created
I1207 02:54:39.617] pod/b created
I1207 02:54:39.624] service/prune-svc created
I1207 02:54:41.120] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I1207 02:54:41.198] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 139 lines ...
I1207 02:54:54.117] Context "test" modified.
I1207 02:54:54.124] +++ [1207 02:54:54] Testing kubectl create filter
I1207 02:54:54.205] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:54.351] pod/selector-test-pod created
I1207 02:54:54.440] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1207 02:54:54.519] Successful
I1207 02:54:54.519] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1207 02:54:54.519] has:pods "selector-test-pod-dont-apply" not found
I1207 02:54:54.591] pod "selector-test-pod" deleted
I1207 02:54:54.610] +++ exit code: 0
I1207 02:54:55.214] Recording: run_kubectl_apply_deployments_tests
I1207 02:54:55.214] Running command: run_kubectl_apply_deployments_tests
I1207 02:54:55.235] 
... skipping 28 lines ...
I1207 02:54:57.021] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:57.102] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:57.182] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:54:57.330] deployment.extensions/nginx created
I1207 02:54:57.419] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I1207 02:55:01.615] Successful
I1207 02:55:01.616] message:Error from server (Conflict): error when applying patch:
I1207 02:55:01.616] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544151295-21332\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1207 02:55:01.616] to:
I1207 02:55:01.616] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I1207 02:55:01.616] Name: "nginx", Namespace: "namespace-1544151295-21332"
I1207 02:55:01.617] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["generation":'\x01' "creationTimestamp":"2018-12-07T02:54:57Z" "namespace":"namespace-1544151295-21332" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1544151295-21332/deployments/nginx" "uid":"7b125349-f9cb-11e8-a1d0-0242ac110002" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1544151295-21332\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "resourceVersion":"707" "labels":map["name":"nginx"]] "spec":map["revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["schedulerName":"default-scheduler" "containers":[map["terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]] "status":map["replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2018-12-07T02:54:57Z" "lastTransitionTime":"2018-12-07T02:54:57Z" "reason":"MinimumReplicasUnavailable" 
"message":"Deployment does not have minimum availability."]] "observedGeneration":'\x01']]}
I1207 02:55:01.618] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I1207 02:55:01.618] has:Error from server (Conflict)
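The `Error from server (Conflict)` above (and the earlier resourceVersion 487 patch conflict) comes from the apiserver's optimistic concurrency control: every object carries a `resourceVersion`, and a write based on a stale version is rejected rather than silently overwriting a newer one. A minimal in-memory sketch of that compare-and-swap behavior (not the real apiserver storage code):

```go
package main

import (
	"errors"
	"fmt"
)

// object is a toy stand-in for a stored API object.
type object struct {
	resourceVersion int
	spec            string
}

type store struct {
	objects map[string]object
}

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// update succeeds only if the caller's resourceVersion matches the stored one;
// otherwise the caller is working from a stale copy and gets a conflict.
func (s *store) update(name string, expectedVersion int, newSpec string) error {
	cur, ok := s.objects[name]
	if !ok {
		return fmt.Errorf("%q not found", name)
	}
	if cur.resourceVersion != expectedVersion {
		return errConflict // another writer got in first
	}
	s.objects[name] = object{resourceVersion: cur.resourceVersion + 1, spec: newSpec}
	return nil
}

func main() {
	s := &store{objects: map[string]object{"nginx": {resourceVersion: 707, spec: "replicas=3"}}}

	// A client that read the object at version 707 updates successfully...
	fmt.Println(s.update("nginx", 707, "replicas=4")) // <nil>

	// ...but a second client still holding version 707 now conflicts.
	fmt.Println(s.update("nginx", 707, "replicas=5"))
}
```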
W1207 02:55:01.718] I1207 02:54:55.773109   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151295-21332", Name:"my-depl", UID:"7a240921-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-559b7bc95d to 1
W1207 02:55:01.719] I1207 02:54:55.779105   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"my-depl-559b7bc95d", UID:"7a248d4e-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-559b7bc95d-kcggp
W1207 02:55:01.719] I1207 02:54:56.251403   52349 controller.go:608] quota admission added evaluator for: replicasets.extensions
W1207 02:55:01.719] I1207 02:54:56.255998   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151295-21332", Name:"my-depl", UID:"7a240921-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W1207 02:55:01.719] I1207 02:54:56.258708   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"my-depl-6676598dcb", UID:"7a6e8877-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-qvjw7
W1207 02:55:01.720] I1207 02:54:57.332654   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151295-21332", Name:"nginx", UID:"7b125349-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W1207 02:55:01.720] I1207 02:54:57.334702   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-5d56d6b95f", UID:"7b12ccd6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-tzq7z
W1207 02:55:01.720] I1207 02:54:57.336557   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-5d56d6b95f", UID:"7b12ccd6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-56zg5
W1207 02:55:01.720] I1207 02:54:57.336792   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-5d56d6b95f", UID:"7b12ccd6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-29z2v
W1207 02:55:05.811] E1207 02:55:05.810748   55701 replica_set.go:450] Sync "namespace-1544151295-21332/nginx-5d56d6b95f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-5d56d6b95f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544151295-21332/nginx-5d56d6b95f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7b12ccd6-f9cb-11e8-a1d0-0242ac110002, UID in object meta: 
I1207 02:55:06.801] deployment.extensions/nginx configured
W1207 02:55:06.902] I1207 02:55:06.804307   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151295-21332", Name:"nginx", UID:"80b780f5-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
W1207 02:55:06.902] I1207 02:55:06.826873   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-7777658b9d", UID:"80b8070c-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-jn5tk
W1207 02:55:06.902] I1207 02:55:06.829873   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-7777658b9d", UID:"80b8070c-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-qg7s8
W1207 02:55:06.903] I1207 02:55:06.830636   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151295-21332", Name:"nginx-7777658b9d", UID:"80b8070c-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7777658b9d-vkdjq
I1207 02:55:07.003] Successful
... skipping 82 lines ...
I1207 02:55:13.311] +++ [1207 02:55:13] Creating namespace namespace-1544151313-25565
I1207 02:55:13.381] namespace/namespace-1544151313-25565 created
I1207 02:55:13.448] Context "test" modified.
I1207 02:55:13.454] +++ [1207 02:55:13] Testing kubectl get
I1207 02:55:13.539] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:13.623] Successful
I1207 02:55:13.623] message:Error from server (NotFound): pods "abc" not found
I1207 02:55:13.623] has:pods "abc" not found
I1207 02:55:13.708] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:13.789] Successful
I1207 02:55:13.790] message:Error from server (NotFound): pods "abc" not found
I1207 02:55:13.790] has:pods "abc" not found
I1207 02:55:13.874] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:13.953] Successful
I1207 02:55:13.953] message:{
I1207 02:55:13.953]     "apiVersion": "v1",
I1207 02:55:13.954]     "items": [],
... skipping 23 lines ...
I1207 02:55:14.261] has not:No resources found
I1207 02:55:14.338] Successful
I1207 02:55:14.339] message:NAME
I1207 02:55:14.339] has not:No resources found
I1207 02:55:14.421] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:14.527] Successful
I1207 02:55:14.527] message:error: the server doesn't have a resource type "foobar"
I1207 02:55:14.527] has not:No resources found
I1207 02:55:14.604] Successful
I1207 02:55:14.604] message:No resources found.
I1207 02:55:14.604] has:No resources found
I1207 02:55:14.681] Successful
I1207 02:55:14.681] message:
I1207 02:55:14.682] has not:No resources found
I1207 02:55:14.758] Successful
I1207 02:55:14.758] message:No resources found.
I1207 02:55:14.758] has:No resources found
I1207 02:55:14.840] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:14.921] Successful
I1207 02:55:14.922] message:Error from server (NotFound): pods "abc" not found
I1207 02:55:14.922] has:pods "abc" not found
I1207 02:55:14.923] FAIL!
I1207 02:55:14.924] message:Error from server (NotFound): pods "abc" not found
I1207 02:55:14.924] has not:List
I1207 02:55:14.924] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1207 02:55:15.033] Successful
I1207 02:55:15.033] message:I1207 02:55:14.984738   67794 loader.go:359] Config loaded from file /tmp/tmp.Jh3ChvAN7y/.kube/config
I1207 02:55:15.033] I1207 02:55:14.985205   67794 loader.go:359] Config loaded from file /tmp/tmp.Jh3ChvAN7y/.kube/config
I1207 02:55:15.033] I1207 02:55:14.986413   67794 round_trippers.go:405] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I1207 02:55:18.384] }
I1207 02:55:18.463] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:55:18.676] <no value>Successful
I1207 02:55:18.676] message:valid-pod:
I1207 02:55:18.676] has:valid-pod:
I1207 02:55:18.749] Successful
I1207 02:55:18.749] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1207 02:55:18.749] 	template was:
I1207 02:55:18.749] 		{.missing}
I1207 02:55:18.749] 	object given to jsonpath engine was:
I1207 02:55:18.750] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/namespace-1544151317-22930/pods/valid-pod", "uid":"8793156c-f9cb-11e8-a1d0-0242ac110002", "resourceVersion":"797", "creationTimestamp":"2018-12-07T02:55:18Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1544151317-22930"}, "spec":map[string]interface {}{"dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1207 02:55:18.750] has:missing is not found
I1207 02:55:18.821] Successful
I1207 02:55:18.821] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1207 02:55:18.821] 	template was:
I1207 02:55:18.821] 		{{.missing}}
I1207 02:55:18.821] 	raw data was:
I1207 02:55:18.822] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2018-12-07T02:55:18Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1544151317-22930","resourceVersion":"797","selfLink":"/api/v1/namespaces/namespace-1544151317-22930/pods/valid-pod","uid":"8793156c-f9cb-11e8-a1d0-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1207 02:55:18.822] 	object given to template engine was:
I1207 02:55:18.823] 		map[status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod metadata:map[resourceVersion:797 selfLink:/api/v1/namespaces/namespace-1544151317-22930/pods/valid-pod uid:8793156c-f9cb-11e8-a1d0-0242ac110002 creationTimestamp:2018-12-07T02:55:18Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1544151317-22930] spec:map[containers:[map[terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]]]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30]]
I1207 02:55:18.823] has:map has no entry for key "missing"
W1207 02:55:18.923] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
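The template errors above are standard Go `text/template` behavior, which kubectl's `--template` output path uses: a missing map key renders as `<no value>` by default (as seen a few lines earlier), and with the `missingkey=error` option execution fails with `map has no entry for key "missing"`. A small self-contained sketch reproducing both modes (the pod data here is a trimmed-down assumption, not the full object from the log):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes {{.missing}} against a map that lacks that key; strict
// toggles text/template's missingkey=error option.
func render(strict bool) (string, error) {
	t := template.Must(template.New("output").Parse("{{.missing}}"))
	if strict {
		t = t.Option("missingkey=error")
	}
	var buf bytes.Buffer
	err := t.Execute(&buf, map[string]interface{}{"kind": "Pod", "apiVersion": "v1"})
	return buf.String(), err
}

func main() {
	// Lenient mode: the missing key silently renders as "<no value>".
	out, _ := render(false)
	fmt.Println(out) // <no value>

	// Strict mode: execution fails, as in the kubectl error above.
	_, err := render(true)
	fmt.Println(err)
}
```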
W1207 02:55:19.889] E1207 02:55:19.889269   68183 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I1207 02:55:19.990] Successful
I1207 02:55:19.990] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 02:55:19.990] valid-pod   0/1     Pending   0          0s
I1207 02:55:19.991] has:STATUS
I1207 02:55:19.991] Successful
... skipping 80 lines ...
I1207 02:55:22.146]   terminationGracePeriodSeconds: 30
I1207 02:55:22.146] status:
I1207 02:55:22.146]   phase: Pending
I1207 02:55:22.146]   qosClass: Guaranteed
I1207 02:55:22.146] has:name: valid-pod
I1207 02:55:22.146] Successful
I1207 02:55:22.146] message:Error from server (NotFound): pods "invalid-pod" not found
I1207 02:55:22.146] has:"invalid-pod" not found
I1207 02:55:22.205] pod "valid-pod" deleted
I1207 02:55:22.287] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:22.428] pod/redis-master created
I1207 02:55:22.432] pod/valid-pod created
I1207 02:55:22.517] Successful
... skipping 305 lines ...
I1207 02:55:26.316] Running command: run_create_secret_tests
I1207 02:55:26.336] 
I1207 02:55:26.338] +++ Running case: test-cmd.run_create_secret_tests 
I1207 02:55:26.340] +++ working dir: /go/src/k8s.io/kubernetes
I1207 02:55:26.342] +++ command: run_create_secret_tests
I1207 02:55:26.427] Successful
I1207 02:55:26.427] message:Error from server (NotFound): secrets "mysecret" not found
I1207 02:55:26.427] has:secrets "mysecret" not found
W1207 02:55:26.528] No resources found.
W1207 02:55:26.528] No resources found.
I1207 02:55:26.628] Successful
I1207 02:55:26.629] message:Error from server (NotFound): secrets "mysecret" not found
I1207 02:55:26.629] has:secrets "mysecret" not found
I1207 02:55:26.629] Successful
I1207 02:55:26.629] message:user-specified
I1207 02:55:26.629] has:user-specified
I1207 02:55:26.633] Successful
I1207 02:55:26.699] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"8c937c37-f9cb-11e8-a1d0-0242ac110002","resourceVersion":"871","creationTimestamp":"2018-12-07T02:55:26Z"}}
... skipping 80 lines ...
I1207 02:55:28.534] has:Timeout exceeded while reading body
I1207 02:55:28.608] Successful
I1207 02:55:28.608] message:NAME        READY   STATUS    RESTARTS   AGE
I1207 02:55:28.608] valid-pod   0/1     Pending   0          1s
I1207 02:55:28.608] has:valid-pod
I1207 02:55:28.669] Successful
I1207 02:55:28.670] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1207 02:55:28.670] has:Invalid timeout value
I1207 02:55:28.742] pod "valid-pod" deleted
I1207 02:55:28.761] +++ exit code: 0
I1207 02:55:28.792] Recording: run_crd_tests
I1207 02:55:28.793] Running command: run_crd_tests
I1207 02:55:28.812] 
... skipping 151 lines ...
I1207 02:55:32.771] foo.company.com/test patched
I1207 02:55:32.854] crd.sh:237: Successful get foos/test {{.patched}}: value1
I1207 02:55:32.928] foo.company.com/test patched
I1207 02:55:33.008] crd.sh:239: Successful get foos/test {{.patched}}: value2
I1207 02:55:33.079] foo.company.com/test patched
I1207 02:55:33.164] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I1207 02:55:33.298] +++ [1207 02:55:33] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1207 02:55:33.353] {
I1207 02:55:33.353]     "apiVersion": "company.com/v1",
I1207 02:55:33.354]     "kind": "Foo",
I1207 02:55:33.354]     "metadata": {
I1207 02:55:33.354]         "annotations": {
I1207 02:55:33.354]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W1207 02:55:34.768] I1207 02:55:31.244559   52349 controller.go:608] quota admission added evaluator for: foos.company.com
W1207 02:55:34.768] I1207 02:55:34.429274   52349 controller.go:608] quota admission added evaluator for: bars.company.com
W1207 02:55:34.769] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 70708 Killed                  while [ ${tries} -lt 10 ]; do
W1207 02:55:34.769]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W1207 02:55:34.769] done
W1207 02:55:34.769] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 70707 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W1207 02:55:44.512] E1207 02:55:44.510937   55701 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W1207 02:55:44.697] I1207 02:55:44.696939   55701 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 02:55:44.798] I1207 02:55:44.797315   55701 controller_utils.go:1034] Caches are synced for garbage collector controller
I1207 02:55:44.898] crd.sh:321: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:45.026] foo.company.com/test created
I1207 02:55:45.113] crd.sh:327: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I1207 02:55:45.193] crd.sh:330: Successful get foos/test {{.someField}}: field1
... skipping 76 lines ...
I1207 02:55:56.171] +++ [1207 02:55:56] Testing cmd with image
I1207 02:55:56.253] Successful
I1207 02:55:56.254] message:deployment.apps/test1 created
I1207 02:55:56.254] has:deployment.apps/test1 created
I1207 02:55:56.326] deployment.extensions "test1" deleted
I1207 02:55:56.395] Successful
I1207 02:55:56.396] message:error: Invalid image name "InvalidImageName": invalid reference format
I1207 02:55:56.396] has:error: Invalid image name "InvalidImageName": invalid reference format
I1207 02:55:56.409] +++ exit code: 0
I1207 02:55:56.440] Recording: run_recursive_resources_tests
I1207 02:55:56.440] Running command: run_recursive_resources_tests
I1207 02:55:56.458] 
I1207 02:55:56.460] +++ Running case: test-cmd.run_recursive_resources_tests 
I1207 02:55:56.462] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I1207 02:55:56.607] Context "test" modified.
I1207 02:55:56.690] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:56.926] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:56.928] Successful
I1207 02:55:56.928] message:pod/busybox0 created
I1207 02:55:56.928] pod/busybox1 created
I1207 02:55:56.929] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 02:55:56.929] has:error validating data: kind not set
I1207 02:55:57.009] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:57.168] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1207 02:55:57.170] Successful
I1207 02:55:57.171] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:57.171] has:Object 'Kind' is missing
I1207 02:55:57.251] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:57.484] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 02:55:57.486] Successful
I1207 02:55:57.486] message:pod/busybox0 replaced
I1207 02:55:57.486] pod/busybox1 replaced
I1207 02:55:57.486] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 02:55:57.486] has:error validating data: kind not set
I1207 02:55:57.566] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:57.652] Successful
I1207 02:55:57.653] message:Name:               busybox0
I1207 02:55:57.653] Namespace:          namespace-1544151356-30106
I1207 02:55:57.653] Priority:           0
I1207 02:55:57.653] PriorityClassName:  <none>
... skipping 159 lines ...
I1207 02:55:57.664] has:Object 'Kind' is missing
I1207 02:55:57.737] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:57.897] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1207 02:55:57.899] Successful
I1207 02:55:57.899] message:pod/busybox0 annotated
I1207 02:55:57.899] pod/busybox1 annotated
I1207 02:55:57.899] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:57.899] has:Object 'Kind' is missing
I1207 02:55:57.982] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:58.224] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1207 02:55:58.226] Successful
I1207 02:55:58.227] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 02:55:58.227] pod/busybox0 configured
I1207 02:55:58.227] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1207 02:55:58.227] pod/busybox1 configured
I1207 02:55:58.227] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1207 02:55:58.227] has:error validating data: kind not set
I1207 02:55:58.307] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:58.449] deployment.extensions/nginx created
I1207 02:55:58.537] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1207 02:55:58.616] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 02:55:58.762] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I1207 02:55:58.764] Successful
... skipping 42 lines ...
I1207 02:55:58.833] deployment.extensions "nginx" deleted
I1207 02:55:58.922] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.075] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.077] Successful
I1207 02:55:59.077] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1207 02:55:59.078] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1207 02:55:59.078] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.078] has:Object 'Kind' is missing
I1207 02:55:59.159] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.232] Successful
I1207 02:55:59.233] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.233] has:busybox0:busybox1:
I1207 02:55:59.235] Successful
I1207 02:55:59.235] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.235] has:Object 'Kind' is missing
I1207 02:55:59.317] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.396] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.481] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1207 02:55:59.483] Successful
I1207 02:55:59.483] message:pod/busybox0 labeled
I1207 02:55:59.484] pod/busybox1 labeled
I1207 02:55:59.484] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.484] has:Object 'Kind' is missing
I1207 02:55:59.563] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.641] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.725] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1207 02:55:59.727] Successful
I1207 02:55:59.727] message:pod/busybox0 patched
I1207 02:55:59.727] pod/busybox1 patched
I1207 02:55:59.727] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.727] has:Object 'Kind' is missing
I1207 02:55:59.806] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:55:59.966] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:55:59.968] Successful
I1207 02:55:59.968] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 02:55:59.968] pod "busybox0" force deleted
I1207 02:55:59.968] pod "busybox1" force deleted
I1207 02:55:59.969] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1207 02:55:59.969] has:Object 'Kind' is missing
I1207 02:56:00.047] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:00.181] replicationcontroller/busybox0 created
I1207 02:56:00.190] replicationcontroller/busybox1 created
I1207 02:56:00.281] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:56:00.362] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:56:00.438] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 02:56:00.525] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 02:56:00.682] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 02:56:00.763] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1207 02:56:00.765] Successful
I1207 02:56:00.765] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1207 02:56:00.766] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1207 02:56:00.766] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:00.766] has:Object 'Kind' is missing
I1207 02:56:00.835] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1207 02:56:00.908] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1207 02:56:00.994] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:56:01.076] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 02:56:01.156] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 02:56:01.324] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 02:56:01.403] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1207 02:56:01.405] Successful
I1207 02:56:01.406] message:service/busybox0 exposed
I1207 02:56:01.406] service/busybox1 exposed
I1207 02:56:01.406] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:01.406] has:Object 'Kind' is missing
I1207 02:56:01.487] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:56:01.566] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I1207 02:56:01.650] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I1207 02:56:01.826] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I1207 02:56:01.909] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I1207 02:56:01.911] Successful
I1207 02:56:01.912] message:replicationcontroller/busybox0 scaled
I1207 02:56:01.912] replicationcontroller/busybox1 scaled
I1207 02:56:01.912] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:01.912] has:Object 'Kind' is missing
I1207 02:56:01.993] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1207 02:56:02.153] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:02.155] Successful
I1207 02:56:02.155] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1207 02:56:02.155] replicationcontroller "busybox0" force deleted
I1207 02:56:02.156] replicationcontroller "busybox1" force deleted
I1207 02:56:02.156] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:02.156] has:Object 'Kind' is missing
I1207 02:56:02.237] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:02.376] deployment.extensions/nginx1-deployment created
I1207 02:56:02.382] deployment.extensions/nginx0-deployment created
I1207 02:56:02.476] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1207 02:56:02.558] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 02:56:02.738] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1207 02:56:02.740] Successful
I1207 02:56:02.740] message:deployment.extensions/nginx1-deployment skipped rollback (current template already matches revision 1)
I1207 02:56:02.740] deployment.extensions/nginx0-deployment skipped rollback (current template already matches revision 1)
I1207 02:56:02.741] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:02.741] has:Object 'Kind' is missing
I1207 02:56:02.820] deployment.extensions/nginx1-deployment paused
I1207 02:56:02.823] deployment.extensions/nginx0-deployment paused
I1207 02:56:02.920] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1207 02:56:02.922] Successful
I1207 02:56:02.923] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:02.923] has:Object 'Kind' is missing
I1207 02:56:03.004] deployment.extensions/nginx1-deployment resumed
I1207 02:56:03.008] deployment.extensions/nginx0-deployment resumed
I1207 02:56:03.102] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1207 02:56:03.104] Successful
I1207 02:56:03.105] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:03.105] has:Object 'Kind' is missing
W1207 02:56:03.205] Error from server (NotFound): namespaces "non-native-resources" not found
W1207 02:56:03.206] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1207 02:56:03.206] I1207 02:55:56.246133   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151356-29554", Name:"test1", UID:"9e2f983a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W1207 02:56:03.206] I1207 02:55:56.250659   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-29554", Name:"test1-fb488bd5d", UID:"9e302ec8-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-pq527
W1207 02:56:03.206] I1207 02:55:58.451493   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151356-30106", Name:"nginx", UID:"9f80426a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W1207 02:56:03.207] I1207 02:55:58.454015   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx-6f6bb85d9c", UID:"9f80c76e-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-lq97d
W1207 02:56:03.207] I1207 02:55:58.455784   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx-6f6bb85d9c", UID:"9f80c76e-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-js9m8
W1207 02:56:03.207] I1207 02:55:58.456277   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx-6f6bb85d9c", UID:"9f80c76e-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-8zb2n
W1207 02:56:03.207] kubectl convert is DEPRECATED and will be removed in a future version.
W1207 02:56:03.207] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1207 02:56:03.208] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 02:56:03.208] I1207 02:56:00.190204   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox0", UID:"a088b4f1-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1037", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-zqlgq
W1207 02:56:03.208] I1207 02:56:00.192657   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox1", UID:"a0899b12-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2l762
W1207 02:56:03.208] I1207 02:56:00.452395   55701 namespace_controller.go:171] Namespace has been deleted non-native-resources
W1207 02:56:03.208] I1207 02:56:01.734633   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox0", UID:"a088b4f1-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1058", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-72xxf
W1207 02:56:03.209] I1207 02:56:01.742317   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox1", UID:"a0899b12-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-zr4n5
W1207 02:56:03.209] I1207 02:56:02.378926   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151356-30106", Name:"nginx1-deployment", UID:"a1d76c1a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1078", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W1207 02:56:03.209] I1207 02:56:02.381364   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx1-deployment-75f6fc6747", UID:"a1d80853-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-wx9hd
W1207 02:56:03.209] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 02:56:03.210] I1207 02:56:02.383963   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx1-deployment-75f6fc6747", UID:"a1d80853-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-5td85
W1207 02:56:03.210] I1207 02:56:02.385543   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151356-30106", Name:"nginx0-deployment", UID:"a1d88906-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W1207 02:56:03.210] I1207 02:56:02.389881   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx0-deployment-b6bb4ccbb", UID:"a1d8f561-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-ch4pw
W1207 02:56:03.210] I1207 02:56:02.395493   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151356-30106", Name:"nginx0-deployment-b6bb4ccbb", UID:"a1d8f561-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-rl2vk
W1207 02:56:03.272] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 02:56:03.286] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:03.387] Successful
I1207 02:56:03.387] message:deployment.extensions/nginx1-deployment 
I1207 02:56:03.387] REVISION  CHANGE-CAUSE
I1207 02:56:03.387] 1         <none>
I1207 02:56:03.387] 
I1207 02:56:03.387] deployment.extensions/nginx0-deployment 
I1207 02:56:03.387] REVISION  CHANGE-CAUSE
I1207 02:56:03.388] 1         <none>
I1207 02:56:03.388] 
I1207 02:56:03.388] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:03.388] has:nginx0-deployment
I1207 02:56:03.388] Successful
I1207 02:56:03.388] message:deployment.extensions/nginx1-deployment 
I1207 02:56:03.388] REVISION  CHANGE-CAUSE
I1207 02:56:03.388] 1         <none>
I1207 02:56:03.389] 
I1207 02:56:03.389] deployment.extensions/nginx0-deployment 
I1207 02:56:03.389] REVISION  CHANGE-CAUSE
I1207 02:56:03.389] 1         <none>
I1207 02:56:03.389] 
I1207 02:56:03.389] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:03.389] has:nginx1-deployment
I1207 02:56:03.389] Successful
I1207 02:56:03.390] message:deployment.extensions/nginx1-deployment 
I1207 02:56:03.390] REVISION  CHANGE-CAUSE
I1207 02:56:03.390] 1         <none>
I1207 02:56:03.390] 
I1207 02:56:03.390] deployment.extensions/nginx0-deployment 
I1207 02:56:03.390] REVISION  CHANGE-CAUSE
I1207 02:56:03.390] 1         <none>
I1207 02:56:03.390] 
I1207 02:56:03.390] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1207 02:56:03.391] has:Object 'Kind' is missing
I1207 02:56:03.391] deployment.extensions "nginx1-deployment" force deleted
I1207 02:56:03.391] deployment.extensions "nginx0-deployment" force deleted
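The repeated `Object 'Kind' is missing` failures above come from the broken test fixture, whose manifest deliberately misspells `kind` as `ind`. A minimal sketch of why such an object cannot be decoded (this mimics the check the real typed decoder performs; `decode_manifest` is a hypothetical helper, not kubectl code):

```python
import json

def decode_manifest(raw: str) -> dict:
    # A typed decode needs both apiVersion and kind to resolve the target
    # type; without "kind" the object is rejected outright.
    obj = json.loads(raw)
    if "kind" not in obj:
        raise ValueError(f"Object 'Kind' is missing in '{raw}'")
    return obj

# The fixture's mistake in miniature: "ind" instead of "kind".
broken = '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"name":"nginx2-deployment"}}'
try:
    decode_manifest(broken)
except ValueError as err:
    print("decode failed:", err)
```

The same root cause also produces the `--validate=false` hint later in the log: client-side validation flags the missing `kind` before the object is ever sent to the server.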
I1207 02:56:04.373] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:04.508] replicationcontroller/busybox0 created
I1207 02:56:04.514] replicationcontroller/busybox1 created
... skipping 7 lines ...
I1207 02:56:04.687] message:no rollbacker has been implemented for "ReplicationController"
I1207 02:56:04.687] no rollbacker has been implemented for "ReplicationController"
I1207 02:56:04.687] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.687] has:Object 'Kind' is missing
I1207 02:56:04.767] Successful
I1207 02:56:04.768] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.768] error: replicationcontrollers "busybox0" pausing is not supported
I1207 02:56:04.768] error: replicationcontrollers "busybox1" pausing is not supported
I1207 02:56:04.768] has:Object 'Kind' is missing
I1207 02:56:04.769] Successful
I1207 02:56:04.770] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.770] error: replicationcontrollers "busybox0" pausing is not supported
I1207 02:56:04.770] error: replicationcontrollers "busybox1" pausing is not supported
I1207 02:56:04.770] has:replicationcontrollers "busybox0" pausing is not supported
I1207 02:56:04.771] Successful
I1207 02:56:04.771] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.771] error: replicationcontrollers "busybox0" pausing is not supported
I1207 02:56:04.771] error: replicationcontrollers "busybox1" pausing is not supported
I1207 02:56:04.772] has:replicationcontrollers "busybox1" pausing is not supported
I1207 02:56:04.853] Successful
I1207 02:56:04.854] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.854] error: replicationcontrollers "busybox0" resuming is not supported
I1207 02:56:04.854] error: replicationcontrollers "busybox1" resuming is not supported
I1207 02:56:04.854] has:Object 'Kind' is missing
I1207 02:56:04.855] Successful
I1207 02:56:04.855] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.856] error: replicationcontrollers "busybox0" resuming is not supported
I1207 02:56:04.856] error: replicationcontrollers "busybox1" resuming is not supported
I1207 02:56:04.856] has:replicationcontrollers "busybox0" resuming is not supported
I1207 02:56:04.857] Successful
I1207 02:56:04.857] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:04.857] error: replicationcontrollers "busybox0" resuming is not supported
I1207 02:56:04.857] error: replicationcontrollers "busybox1" resuming is not supported
I1207 02:56:04.858] has:replicationcontrollers "busybox0" resuming is not supported
I1207 02:56:04.929] replicationcontroller "busybox0" force deleted
I1207 02:56:04.932] replicationcontroller "busybox1" force deleted
W1207 02:56:05.033] I1207 02:56:04.511554   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox0", UID:"a31cda5a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1124", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-l8nnm
W1207 02:56:05.033] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1207 02:56:05.034] I1207 02:56:04.516002   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151356-30106", Name:"busybox1", UID:"a31ddff2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1128", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-w6z8r
W1207 02:56:05.034] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 02:56:05.034] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1207 02:56:05.951] +++ exit code: 0
I1207 02:56:05.980] Recording: run_namespace_tests
I1207 02:56:05.981] Running command: run_namespace_tests
I1207 02:56:05.997] 
I1207 02:56:05.999] +++ Running case: test-cmd.run_namespace_tests 
I1207 02:56:06.001] +++ working dir: /go/src/k8s.io/kubernetes
I1207 02:56:06.003] +++ command: run_namespace_tests
I1207 02:56:06.011] +++ [1207 02:56:06] Testing kubectl(v1:namespaces)
I1207 02:56:06.075] namespace/my-namespace created
I1207 02:56:06.155] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1207 02:56:06.221] namespace "my-namespace" deleted
I1207 02:56:11.318] namespace/my-namespace condition met
I1207 02:56:11.396] Successful
I1207 02:56:11.396] message:Error from server (NotFound): namespaces "my-namespace" not found
I1207 02:56:11.397] has: not found
I1207 02:56:11.499] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I1207 02:56:11.560] namespace/other created
I1207 02:56:11.640] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I1207 02:56:11.723] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:11.866] pod/valid-pod created
I1207 02:56:11.953] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:56:12.033] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:56:12.105] Successful
I1207 02:56:12.105] message:error: a resource cannot be retrieved by name across all namespaces
I1207 02:56:12.105] has:a resource cannot be retrieved by name across all namespaces
I1207 02:56:12.186] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1207 02:56:12.260] pod "valid-pod" force deleted
I1207 02:56:12.347] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:56:12.414] namespace "other" deleted
W1207 02:56:12.514] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1207 02:56:14.517] E1207 02:56:14.517261   55701 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W1207 02:56:14.820] I1207 02:56:14.819570   55701 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W1207 02:56:14.920] I1207 02:56:14.919938   55701 controller_utils.go:1034] Caches are synced for garbage collector controller
W1207 02:56:15.595] I1207 02:56:15.595328   55701 horizontal.go:309] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1544151356-30106
W1207 02:56:15.599] I1207 02:56:15.599105   55701 horizontal.go:309] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1544151356-30106
W1207 02:56:16.315] I1207 02:56:16.315211   55701 namespace_controller.go:171] Namespace has been deleted my-namespace
I1207 02:56:17.516] +++ exit code: 0
... skipping 113 lines ...
I1207 02:56:32.222] +++ command: run_client_config_tests
I1207 02:56:32.234] +++ [1207 02:56:32] Creating namespace namespace-1544151392-28188
I1207 02:56:32.297] namespace/namespace-1544151392-28188 created
I1207 02:56:32.362] Context "test" modified.
I1207 02:56:32.368] +++ [1207 02:56:32] Testing client config
I1207 02:56:32.432] Successful
I1207 02:56:32.432] message:error: stat missing: no such file or directory
I1207 02:56:32.432] has:missing: no such file or directory
I1207 02:56:32.494] Successful
I1207 02:56:32.494] message:error: stat missing: no such file or directory
I1207 02:56:32.494] has:missing: no such file or directory
I1207 02:56:32.557] Successful
I1207 02:56:32.557] message:error: stat missing: no such file or directory
I1207 02:56:32.557] has:missing: no such file or directory
I1207 02:56:32.620] Successful
I1207 02:56:32.621] message:Error in configuration: context was not found for specified context: missing-context
I1207 02:56:32.621] has:context was not found for specified context: missing-context
I1207 02:56:32.685] Successful
I1207 02:56:32.685] message:error: no server found for cluster "missing-cluster"
I1207 02:56:32.685] has:no server found for cluster "missing-cluster"
I1207 02:56:32.752] Successful
I1207 02:56:32.752] message:error: auth info "missing-user" does not exist
I1207 02:56:32.752] has:auth info "missing-user" does not exist
I1207 02:56:32.887] Successful
I1207 02:56:32.888] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1207 02:56:32.888] has:Error loading config file
I1207 02:56:32.953] Successful
I1207 02:56:32.953] message:error: stat missing-config: no such file or directory
I1207 02:56:32.954] has:no such file or directory
I1207 02:56:32.966] +++ exit code: 0
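The client-config assertions above exercise a clear failure order: a missing kubeconfig file fails first (`stat ... no such file or directory`), then a missing context, then a cluster without a server, then a missing auth-info entry. A hedged sketch of that lookup order (the structure and error strings echo the log; `load_context` is a hypothetical helper, not client-go's actual loader):

```python
import os

def load_context(path: str, config: dict, context_name: str) -> dict:
    # Failure order suggested by the test output: file, context, cluster, user.
    if not os.path.exists(path):
        raise FileNotFoundError(f"stat {path}: no such file or directory")
    ctx = config.get("contexts", {}).get(context_name)
    if ctx is None:
        raise KeyError(
            f"context was not found for specified context: {context_name}")
    cluster = config.get("clusters", {}).get(ctx["cluster"])
    if cluster is None or not cluster.get("server"):
        raise KeyError(f'no server found for cluster "{ctx["cluster"]}"')
    user = config.get("users", {}).get(ctx["user"])
    if user is None:
        raise KeyError(f'auth info "{ctx["user"]}" does not exist')
    return {"server": cluster["server"], "user": user}
```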
I1207 02:56:33.002] Recording: run_service_accounts_tests
I1207 02:56:33.002] Running command: run_service_accounts_tests
I1207 02:56:33.023] 
I1207 02:56:33.025] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I1207 02:56:40.084]                 job-name=test-job
I1207 02:56:40.084]                 run=pi
I1207 02:56:40.085] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1207 02:56:40.085] Parallelism:    1
I1207 02:56:40.085] Completions:    1
I1207 02:56:40.085] Start Time:     Fri, 07 Dec 2018 02:56:39 +0000
I1207 02:56:40.085] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1207 02:56:40.085] Pod Template:
I1207 02:56:40.085]   Labels:  controller-uid=b82d2627-f9cb-11e8-a1d0-0242ac110002
I1207 02:56:40.085]            job-name=test-job
I1207 02:56:40.085]            run=pi
I1207 02:56:40.085]   Containers:
I1207 02:56:40.086]    pi:
... skipping 329 lines ...
I1207 02:56:49.118]   selector:
I1207 02:56:49.118]     role: padawan
I1207 02:56:49.118]   sessionAffinity: None
I1207 02:56:49.118]   type: ClusterIP
I1207 02:56:49.118] status:
I1207 02:56:49.118]   loadBalancer: {}
W1207 02:56:49.219] error: you must specify resources by --filename when --local is set.
W1207 02:56:49.219] Example resource specifications include:
W1207 02:56:49.219]    '-f rsrc.yaml'
W1207 02:56:49.219]    '--filename=rsrc.json'
I1207 02:56:49.320] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1207 02:56:49.405] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1207 02:56:49.478] service "redis-master" deleted
... skipping 93 lines ...
I1207 02:56:54.790] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 02:56:54.874] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 02:56:54.969] daemonset.extensions/bind rolled back
I1207 02:56:55.056] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 02:56:55.139] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 02:56:55.235] Successful
I1207 02:56:55.236] message:error: unable to find specified revision 1000000 in history
I1207 02:56:55.236] has:unable to find specified revision
I1207 02:56:55.316] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 02:56:55.398] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 02:56:55.490] daemonset.extensions/bind rolled back
I1207 02:56:55.575] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1207 02:56:55.655] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I1207 02:56:56.819] Namespace:    namespace-1544151415-23524
I1207 02:56:56.819] Selector:     app=guestbook,tier=frontend
I1207 02:56:56.819] Labels:       app=guestbook
I1207 02:56:56.819]               tier=frontend
I1207 02:56:56.819] Annotations:  <none>
I1207 02:56:56.819] Replicas:     3 current / 3 desired
I1207 02:56:56.819] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:56.819] Pod Template:
I1207 02:56:56.820]   Labels:  app=guestbook
I1207 02:56:56.820]            tier=frontend
I1207 02:56:56.820]   Containers:
I1207 02:56:56.820]    php-redis:
I1207 02:56:56.820]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 02:56:56.918] Namespace:    namespace-1544151415-23524
I1207 02:56:56.919] Selector:     app=guestbook,tier=frontend
I1207 02:56:56.919] Labels:       app=guestbook
I1207 02:56:56.919]               tier=frontend
I1207 02:56:56.919] Annotations:  <none>
I1207 02:56:56.919] Replicas:     3 current / 3 desired
I1207 02:56:56.919] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:56.919] Pod Template:
I1207 02:56:56.919]   Labels:  app=guestbook
I1207 02:56:56.919]            tier=frontend
I1207 02:56:56.919]   Containers:
I1207 02:56:56.919]    php-redis:
I1207 02:56:56.919]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1207 02:56:57.015] Namespace:    namespace-1544151415-23524
I1207 02:56:57.015] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.015] Labels:       app=guestbook
I1207 02:56:57.016]               tier=frontend
I1207 02:56:57.016] Annotations:  <none>
I1207 02:56:57.016] Replicas:     3 current / 3 desired
I1207 02:56:57.016] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.016] Pod Template:
I1207 02:56:57.016]   Labels:  app=guestbook
I1207 02:56:57.016]            tier=frontend
I1207 02:56:57.016]   Containers:
I1207 02:56:57.016]    php-redis:
I1207 02:56:57.016]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1207 02:56:57.117] Namespace:    namespace-1544151415-23524
I1207 02:56:57.117] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.117] Labels:       app=guestbook
I1207 02:56:57.117]               tier=frontend
I1207 02:56:57.117] Annotations:  <none>
I1207 02:56:57.117] Replicas:     3 current / 3 desired
I1207 02:56:57.117] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.117] Pod Template:
I1207 02:56:57.118]   Labels:  app=guestbook
I1207 02:56:57.118]            tier=frontend
I1207 02:56:57.118]   Containers:
I1207 02:56:57.118]    php-redis:
I1207 02:56:57.118]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I1207 02:56:57.119]   Type    Reason            Age   From                    Message
I1207 02:56:57.119]   ----    ------            ----  ----                    -------
I1207 02:56:57.119]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-ntc2v
I1207 02:56:57.119]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-rzdzj
I1207 02:56:57.119]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-9gsm6
I1207 02:56:57.119] 
W1207 02:56:57.222] E1207 02:56:54.977218   55701 daemon_controller.go:304] namespace-1544151413-24188/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1544151413-24188", SelfLink:"/apis/apps/v1/namespaces/namespace-1544151413-24188/daemonsets/bind", UID:"c0704364-f9cb-11e8-a1d0-0242ac110002", ResourceVersion:"1339", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63679748213, loc:(*time.Location)(0x66f47a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1544151413-24188\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), 
Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0044ef5e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004582318), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004540240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0044ef620), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc003936290)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc004582390)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1207 02:56:57.222] I1207 02:56:56.235018   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c1f0dbdb-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d4q5w
W1207 02:56:57.223] I1207 02:56:56.237392   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c1f0dbdb-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-szn57
W1207 02:56:57.223] I1207 02:56:56.237430   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c1f0dbdb-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cc5xx
W1207 02:56:57.223] I1207 02:56:56.607314   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ntc2v
W1207 02:56:57.224] I1207 02:56:56.609388   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rzdzj
W1207 02:56:57.224] I1207 02:56:56.609927   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9gsm6
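The daemon_controller error above ends with the classic optimistic-concurrency message: `Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again`. Writes carry the `resourceVersion` they were read at, and a stale version is rejected; callers are expected to re-fetch and retry. A toy sketch of that pattern (an in-memory stand-in for the API server; `retry_on_conflict` imitates the shape of client-go's RetryOnConflict helper, not its actual code):

```python
class Conflict(Exception):
    pass

class Store:
    """Toy API server: an update must carry the current resourceVersion."""
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def retry_on_conflict(store, mutate, attempts=3):
    # Re-fetch the latest copy and re-apply the change on each conflict.
    for _ in range(attempts):
        latest = store.get()
        mutate(latest)
        try:
            store.update(latest)
            return latest
        except Conflict:
            continue
    raise Conflict("retries exhausted")
```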
... skipping 2 lines ...
I1207 02:56:57.325] Namespace:    namespace-1544151415-23524
I1207 02:56:57.325] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.325] Labels:       app=guestbook
I1207 02:56:57.325]               tier=frontend
I1207 02:56:57.325] Annotations:  <none>
I1207 02:56:57.325] Replicas:     3 current / 3 desired
I1207 02:56:57.325] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.325] Pod Template:
I1207 02:56:57.326]   Labels:  app=guestbook
I1207 02:56:57.326]            tier=frontend
I1207 02:56:57.326]   Containers:
I1207 02:56:57.326]    php-redis:
I1207 02:56:57.326]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 02:56:57.346] Namespace:    namespace-1544151415-23524
I1207 02:56:57.346] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.346] Labels:       app=guestbook
I1207 02:56:57.346]               tier=frontend
I1207 02:56:57.346] Annotations:  <none>
I1207 02:56:57.346] Replicas:     3 current / 3 desired
I1207 02:56:57.346] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.346] Pod Template:
I1207 02:56:57.346]   Labels:  app=guestbook
I1207 02:56:57.346]            tier=frontend
I1207 02:56:57.346]   Containers:
I1207 02:56:57.347]    php-redis:
I1207 02:56:57.347]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1207 02:56:57.443] Namespace:    namespace-1544151415-23524
I1207 02:56:57.443] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.443] Labels:       app=guestbook
I1207 02:56:57.443]               tier=frontend
I1207 02:56:57.443] Annotations:  <none>
I1207 02:56:57.443] Replicas:     3 current / 3 desired
I1207 02:56:57.443] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.443] Pod Template:
I1207 02:56:57.443]   Labels:  app=guestbook
I1207 02:56:57.444]            tier=frontend
I1207 02:56:57.444]   Containers:
I1207 02:56:57.444]    php-redis:
I1207 02:56:57.444]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1207 02:56:57.539] Namespace:    namespace-1544151415-23524
I1207 02:56:57.539] Selector:     app=guestbook,tier=frontend
I1207 02:56:57.539] Labels:       app=guestbook
I1207 02:56:57.539]               tier=frontend
I1207 02:56:57.539] Annotations:  <none>
I1207 02:56:57.539] Replicas:     3 current / 3 desired
I1207 02:56:57.540] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:56:57.540] Pod Template:
I1207 02:56:57.540]   Labels:  app=guestbook
I1207 02:56:57.540]            tier=frontend
I1207 02:56:57.540]   Containers:
I1207 02:56:57.540]    php-redis:
I1207 02:56:57.540]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1207 02:56:58.289] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I1207 02:56:58.368] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I1207 02:56:58.444] replicationcontroller/frontend scaled
I1207 02:56:58.526] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I1207 02:56:58.594] replicationcontroller "frontend" deleted
W1207 02:56:58.694] I1207 02:56:57.707142   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-ntc2v
W1207 02:56:58.695] error: Expected replicas to be 3, was 2
W1207 02:56:58.695] I1207 02:56:58.205964   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7m6sz
W1207 02:56:58.695] I1207 02:56:58.448685   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c22a2ad0-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7m6sz
W1207 02:56:58.736] I1207 02:56:58.735717   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"redis-master", UID:"c36ef1a7-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-m7p95
I1207 02:56:58.836] replicationcontroller/redis-master created
I1207 02:56:58.879] replicationcontroller/redis-slave created
I1207 02:56:58.973] replicationcontroller/redis-master scaled
... skipping 29 lines ...
I1207 02:57:00.312] service "expose-test-deployment" deleted
I1207 02:57:00.401] Successful
I1207 02:57:00.401] message:service/expose-test-deployment exposed
I1207 02:57:00.402] has:service/expose-test-deployment exposed
I1207 02:57:00.476] service "expose-test-deployment" deleted
I1207 02:57:00.562] Successful
I1207 02:57:00.562] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 02:57:00.562] See 'kubectl expose -h' for help and examples.
I1207 02:57:00.562] has:invalid deployment: no selectors
I1207 02:57:00.637] Successful
I1207 02:57:00.638] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1207 02:57:00.638] See 'kubectl expose -h' for help and examples.
I1207 02:57:00.638] has:invalid deployment: no selectors
W1207 02:57:00.738] I1207 02:56:59.743671   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment", UID:"c408bdc1-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1454", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W1207 02:57:00.739] I1207 02:56:59.745994   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-659fc6fb", UID:"c40943c6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-wrqbp
W1207 02:57:00.739] I1207 02:56:59.747659   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-659fc6fb", UID:"c40943c6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-ws26p
W1207 02:57:00.740] I1207 02:56:59.748290   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-659fc6fb", UID:"c40943c6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-zzsvb
... skipping 27 lines ...
I1207 02:57:02.460] service "frontend" deleted
I1207 02:57:02.465] service "frontend-2" deleted
I1207 02:57:02.470] service "frontend-3" deleted
I1207 02:57:02.475] service "frontend-4" deleted
I1207 02:57:02.480] service "frontend-5" deleted
I1207 02:57:02.569] Successful
I1207 02:57:02.569] message:error: cannot expose a Node
I1207 02:57:02.569] has:cannot expose
I1207 02:57:02.649] Successful
I1207 02:57:02.649] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1207 02:57:02.650] has:metadata.name: Invalid value
I1207 02:57:02.731] Successful
I1207 02:57:02.731] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 33 lines ...
I1207 02:57:04.670] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 02:57:04.756] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 02:57:04.830] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
W1207 02:57:04.931] I1207 02:57:04.278135   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c6bca4ac-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8wnmv
W1207 02:57:04.931] I1207 02:57:04.280234   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c6bca4ac-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hxmlw
W1207 02:57:04.931] I1207 02:57:04.280287   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1544151415-23524", Name:"frontend", UID:"c6bca4ac-f9cb-11e8-a1d0-0242ac110002", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q8s25
W1207 02:57:04.931] Error: required flag(s) "max" not set
W1207 02:57:04.931] 
W1207 02:57:04.932] 
W1207 02:57:04.932] Examples:
W1207 02:57:04.932]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 02:57:04.932]   kubectl autoscale deployment foo --min=2 --max=10
W1207 02:57:04.932]   
... skipping 54 lines ...
I1207 02:57:05.118]           limits:
I1207 02:57:05.118]             cpu: 300m
I1207 02:57:05.118]           requests:
I1207 02:57:05.118]             cpu: 300m
I1207 02:57:05.118]       terminationGracePeriodSeconds: 0
I1207 02:57:05.118] status: {}
W1207 02:57:05.219] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I1207 02:57:05.334] deployment.extensions/nginx-deployment-resources created
I1207 02:57:05.426] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1207 02:57:05.507] (Bcore.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 02:57:05.586] (Bcore.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1207 02:57:05.678] (Bdeployment.extensions/nginx-deployment-resources resource requirements updated
I1207 02:57:05.774] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 82 lines ...
W1207 02:57:06.710] I1207 02:57:05.681315   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W1207 02:57:06.711] I1207 02:57:05.684115   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-6c5996c457", UID:"c79337f2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-b4jh9
W1207 02:57:06.711] I1207 02:57:05.686375   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W1207 02:57:06.711] I1207 02:57:05.691007   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-69c96fd869", UID:"c75edd14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-94qsd
W1207 02:57:06.711] I1207 02:57:05.694391   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 2
W1207 02:57:06.712] I1207 02:57:05.697140   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-6c5996c457", UID:"c79337f2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-wpjdk
W1207 02:57:06.712] error: unable to find container named redis
W1207 02:57:06.712] I1207 02:57:06.030976   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 0
W1207 02:57:06.712] I1207 02:57:06.034613   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-69c96fd869", UID:"c75edd14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-cw5x5
W1207 02:57:06.713] I1207 02:57:06.034970   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-69c96fd869", UID:"c75edd14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-sdsh6
W1207 02:57:06.713] I1207 02:57:06.037548   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 2
W1207 02:57:06.713] I1207 02:57:06.041674   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-5f4579485f", UID:"c7c79dac-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-l8ql8
W1207 02:57:06.713] I1207 02:57:06.044026   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-5f4579485f", UID:"c7c79dac-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-r5lgj
W1207 02:57:06.714] I1207 02:57:06.287885   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1710", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6c5996c457 to 0
W1207 02:57:06.714] I1207 02:57:06.292032   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-6c5996c457", UID:"c79337f2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-wpjdk
W1207 02:57:06.714] I1207 02:57:06.292083   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-6c5996c457", UID:"c79337f2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6c5996c457-b4jh9
W1207 02:57:06.714] I1207 02:57:06.293477   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources", UID:"c75e3e32-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1712", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 2
W1207 02:57:06.715] I1207 02:57:06.296644   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-ff8d89cb6", UID:"c7eed253-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-75pkr
W1207 02:57:06.715] I1207 02:57:06.391450   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151415-23524", Name:"nginx-deployment-resources-ff8d89cb6", UID:"c7eed253-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-hl5xk
W1207 02:57:06.715] error: you must specify resources by --filename when --local is set.
W1207 02:57:06.715] Example resource specifications include:
W1207 02:57:06.715]    '-f rsrc.yaml'
W1207 02:57:06.715]    '--filename=rsrc.json'
I1207 02:57:06.816] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1207 02:57:06.839] (Bcore.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1207 02:57:06.918] (Bcore.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I1207 02:57:08.248]                 pod-template-hash=55c9b846cc
I1207 02:57:08.248] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1207 02:57:08.249]                 deployment.kubernetes.io/max-replicas: 2
I1207 02:57:08.249]                 deployment.kubernetes.io/revision: 1
I1207 02:57:08.249] Controlled By:  Deployment/test-nginx-apps
I1207 02:57:08.249] Replicas:       1 current / 1 desired
I1207 02:57:08.249] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:08.249] Pod Template:
I1207 02:57:08.249]   Labels:  app=test-nginx-apps
I1207 02:57:08.249]            pod-template-hash=55c9b846cc
I1207 02:57:08.249]   Containers:
I1207 02:57:08.249]    nginx:
I1207 02:57:08.250]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 95 lines ...
W1207 02:57:12.112] I1207 02:57:11.660315   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-6f6bb85d9c", UID:"cad61d30-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1895", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6f6bb85d9c-gz4xf
W1207 02:57:12.112] I1207 02:57:11.662960   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx", UID:"cad58caa-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1890", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 2
W1207 02:57:12.112] I1207 02:57:11.667217   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-9486b7cb7", UID:"cb21c799-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1902", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-kf8nj
I1207 02:57:13.098] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 02:57:13.270] (Bapps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1207 02:57:13.360] (Bdeployment.extensions/nginx rolled back
W1207 02:57:13.461] error: unable to find specified revision 1000000 in history
I1207 02:57:14.452] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1207 02:57:14.534] (Bdeployment.extensions/nginx paused
W1207 02:57:14.634] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I1207 02:57:14.735] deployment.extensions/nginx resumed
I1207 02:57:14.819] deployment.extensions/nginx rolled back
I1207 02:57:14.985]     deployment.kubernetes.io/revision-history: 1,3
W1207 02:57:15.166] error: desired revision (3) is different from the running revision (5)
I1207 02:57:15.306] deployment.extensions/nginx2 created
I1207 02:57:15.384] deployment.extensions "nginx2" deleted
I1207 02:57:15.458] deployment.extensions "nginx" deleted
I1207 02:57:15.540] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:57:15.680] (Bdeployment.extensions/nginx-deployment created
I1207 02:57:15.774] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 28 lines ...
W1207 02:57:17.918] I1207 02:57:15.687574   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cd8966e2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-xbf2l
W1207 02:57:17.918] I1207 02:57:16.026895   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cd88d35a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W1207 02:57:17.918] I1207 02:57:16.029745   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-85db47bbdb", UID:"cdbdccf6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-72xfg
W1207 02:57:17.919] I1207 02:57:16.032563   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cd88d35a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 02:57:17.919] I1207 02:57:16.037074   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cd8966e2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-x8v52
W1207 02:57:17.919] I1207 02:57:16.037299   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cd88d35a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 2
W1207 02:57:17.920] E1207 02:57:16.037621   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-85db47bbdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85db47bbdb": the object has been modified; please apply your changes to the latest version and try again
W1207 02:57:17.920] I1207 02:57:16.039937   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-85db47bbdb", UID:"cdbdccf6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-v4tdm
W1207 02:57:17.920] error: unable to find container named "redis"
W1207 02:57:17.920] I1207 02:57:17.113050   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cd88d35a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2009", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 02:57:17.921] I1207 02:57:17.116482   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cd8966e2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-zs7gn
W1207 02:57:17.921] I1207 02:57:17.116673   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cd8966e2-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-xbf2l
W1207 02:57:17.921] I1207 02:57:17.118862   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cd88d35a-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 2
W1207 02:57:17.922] I1207 02:57:17.121224   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-dc756cc6", UID:"ce628f5b-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-lr4qv
W1207 02:57:17.922] I1207 02:57:17.123488   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-dc756cc6", UID:"ce628f5b-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-qrkwq
... skipping 55 lines ...
I1207 02:57:21.309] Namespace:    namespace-1544151439-31329
I1207 02:57:21.309] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.310] Labels:       app=guestbook
I1207 02:57:21.310]               tier=frontend
I1207 02:57:21.310] Annotations:  <none>
I1207 02:57:21.310] Replicas:     3 current / 3 desired
I1207 02:57:21.310] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.310] Pod Template:
I1207 02:57:21.310]   Labels:  app=guestbook
I1207 02:57:21.310]            tier=frontend
I1207 02:57:21.310]   Containers:
I1207 02:57:21.310]    php-redis:
I1207 02:57:21.310]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 02:57:21.413] Namespace:    namespace-1544151439-31329
I1207 02:57:21.413] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.413] Labels:       app=guestbook
I1207 02:57:21.414]               tier=frontend
I1207 02:57:21.414] Annotations:  <none>
I1207 02:57:21.414] Replicas:     3 current / 3 desired
I1207 02:57:21.414] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.414] Pod Template:
I1207 02:57:21.414]   Labels:  app=guestbook
I1207 02:57:21.414]            tier=frontend
I1207 02:57:21.414]   Containers:
I1207 02:57:21.414]    php-redis:
I1207 02:57:21.415]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I1207 02:57:21.509] Namespace:    namespace-1544151439-31329
I1207 02:57:21.509] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.509] Labels:       app=guestbook
I1207 02:57:21.509]               tier=frontend
I1207 02:57:21.509] Annotations:  <none>
I1207 02:57:21.509] Replicas:     3 current / 3 desired
I1207 02:57:21.509] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.509] Pod Template:
I1207 02:57:21.509]   Labels:  app=guestbook
I1207 02:57:21.509]            tier=frontend
I1207 02:57:21.509]   Containers:
I1207 02:57:21.509]    php-redis:
I1207 02:57:21.509]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1207 02:57:21.604] Namespace:    namespace-1544151439-31329
I1207 02:57:21.604] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.604] Labels:       app=guestbook
I1207 02:57:21.604]               tier=frontend
I1207 02:57:21.604] Annotations:  <none>
I1207 02:57:21.604] Replicas:     3 current / 3 desired
I1207 02:57:21.605] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.605] Pod Template:
I1207 02:57:21.605]   Labels:  app=guestbook
I1207 02:57:21.605]            tier=frontend
I1207 02:57:21.605]   Containers:
I1207 02:57:21.605]    php-redis:
I1207 02:57:21.605]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 14 lines ...
I1207 02:57:21.606]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-8vdqv
I1207 02:57:21.606] (B
W1207 02:57:21.707] I1207 02:57:18.429675   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 1
W1207 02:57:21.707] I1207 02:57:18.432026   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-5b795689cd", UID:"cf2c6b08-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-tnnwt
W1207 02:57:21.708] I1207 02:57:18.435689   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W1207 02:57:21.708] I1207 02:57:18.438879   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cecf1a14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-7krnp
W1207 02:57:21.708] E1207 02:57:18.441144   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-5b795689cd" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b795689cd": the object has been modified; please apply your changes to the latest version and try again
W1207 02:57:21.708] I1207 02:57:18.441445   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2067", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5b795689cd to 2
W1207 02:57:21.709] I1207 02:57:18.444222   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-5b795689cd", UID:"cf2c6b08-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5b795689cd-zcdn7
W1207 02:57:21.709] I1207 02:57:18.687486   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 0
W1207 02:57:21.709] I1207 02:57:18.691378   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cecf1a14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-tbv9z
W1207 02:57:21.709] I1207 02:57:18.692097   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-646d4f779d", UID:"cecf1a14-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-j6fgg
W1207 02:57:21.710] I1207 02:57:18.694370   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2091", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5766b7c95b to 2
... skipping 4 lines ...
W1207 02:57:21.711] I1207 02:57:18.940477   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2118", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5b795689cd to 0
W1207 02:57:21.711] I1207 02:57:18.945582   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2120", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-65b869c68c to 2
W1207 02:57:21.711] I1207 02:57:19.080344   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2128", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-65b869c68c to 0
W1207 02:57:21.712] I1207 02:57:19.214200   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-5b795689cd", UID:"cf2c6b08-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-tnnwt
W1207 02:57:21.712] I1207 02:57:19.230675   55701 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1544151427-67", Name:"nginx-deployment", UID:"cece8fb6-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7b8f7659b7 to 2
W1207 02:57:21.712] I1207 02:57:19.264807   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151427-67", Name:"nginx-deployment-5b795689cd", UID:"cf2c6b08-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5b795689cd-zcdn7
W1207 02:57:21.712] E1207 02:57:19.311083   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-794dcdf6bb" failed with replicasets.apps "nginx-deployment-794dcdf6bb" not found
W1207 02:57:21.713] E1207 02:57:19.412027   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-5766b7c95b" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5766b7c95b": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1544151427-67/nginx-deployment-5766b7c95b, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: cf52d0b8-f9cb-11e8-a1d0-0242ac110002, UID in object meta: 
W1207 02:57:21.713] I1207 02:57:19.436373   55701 horizontal.go:309] Horizontal Pod Autoscaler frontend has been deleted in namespace-1544151415-23524
W1207 02:57:21.713] E1207 02:57:19.461626   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-65b869c68c" failed with replicasets.apps "nginx-deployment-65b869c68c" not found
W1207 02:57:21.713] E1207 02:57:19.611447   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W1207 02:57:21.713] E1207 02:57:19.761878   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-5b795689cd" failed with replicasets.apps "nginx-deployment-5b795689cd" not found
W1207 02:57:21.713] E1207 02:57:19.811773   55701 replica_set.go:450] Sync "namespace-1544151427-67/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W1207 02:57:21.713] I1207 02:57:19.890139   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d00a7c5f-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d4mbq
W1207 02:57:21.714] I1207 02:57:19.912638   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d00a7c5f-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b4rc6
W1207 02:57:21.714] I1207 02:57:19.963384   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d00a7c5f-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r5plj
W1207 02:57:21.714] E1207 02:57:20.162016   55701 replica_set.go:450] Sync "namespace-1544151439-31329/frontend" failed with replicasets.apps "frontend" not found
W1207 02:57:21.714] I1207 02:57:20.277707   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend-no-cascade", UID:"d045fd1b-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2177", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-zkfgs
W1207 02:57:21.714] I1207 02:57:20.312981   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend-no-cascade", UID:"d045fd1b-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2177", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-spdln
W1207 02:57:21.715] I1207 02:57:20.413107   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend-no-cascade", UID:"d045fd1b-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2177", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-g9bh5
W1207 02:57:21.715] E1207 02:57:20.661806   55701 replica_set.go:450] Sync "namespace-1544151439-31329/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W1207 02:57:21.715] I1207 02:57:21.091903   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d0c24747-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2196", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lzxg6
W1207 02:57:21.715] I1207 02:57:21.094030   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d0c24747-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2196", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mcjkl
W1207 02:57:21.716] I1207 02:57:21.094287   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d0c24747-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2196", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8vdqv
I1207 02:57:21.816] Successful describe rs:
I1207 02:57:21.816] Name:         frontend
I1207 02:57:21.816] Namespace:    namespace-1544151439-31329
I1207 02:57:21.816] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.816] Labels:       app=guestbook
I1207 02:57:21.816]               tier=frontend
I1207 02:57:21.816] Annotations:  <none>
I1207 02:57:21.817] Replicas:     3 current / 3 desired
I1207 02:57:21.817] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.817] Pod Template:
I1207 02:57:21.817]   Labels:  app=guestbook
I1207 02:57:21.817]            tier=frontend
I1207 02:57:21.817]   Containers:
I1207 02:57:21.817]    php-redis:
I1207 02:57:21.817]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 02:57:21.824] Namespace:    namespace-1544151439-31329
I1207 02:57:21.824] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.824] Labels:       app=guestbook
I1207 02:57:21.824]               tier=frontend
I1207 02:57:21.825] Annotations:  <none>
I1207 02:57:21.825] Replicas:     3 current / 3 desired
I1207 02:57:21.825] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.825] Pod Template:
I1207 02:57:21.825]   Labels:  app=guestbook
I1207 02:57:21.825]            tier=frontend
I1207 02:57:21.825]   Containers:
I1207 02:57:21.825]    php-redis:
I1207 02:57:21.825]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1207 02:57:21.919] Namespace:    namespace-1544151439-31329
I1207 02:57:21.919] Selector:     app=guestbook,tier=frontend
I1207 02:57:21.919] Labels:       app=guestbook
I1207 02:57:21.919]               tier=frontend
I1207 02:57:21.919] Annotations:  <none>
I1207 02:57:21.919] Replicas:     3 current / 3 desired
I1207 02:57:21.919] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:21.919] Pod Template:
I1207 02:57:21.919]   Labels:  app=guestbook
I1207 02:57:21.919]            tier=frontend
I1207 02:57:21.919]   Containers:
I1207 02:57:21.919]    php-redis:
I1207 02:57:21.919]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1207 02:57:22.015] Namespace:    namespace-1544151439-31329
I1207 02:57:22.015] Selector:     app=guestbook,tier=frontend
I1207 02:57:22.015] Labels:       app=guestbook
I1207 02:57:22.015]               tier=frontend
I1207 02:57:22.015] Annotations:  <none>
I1207 02:57:22.015] Replicas:     3 current / 3 desired
I1207 02:57:22.015] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:22.015] Pod Template:
I1207 02:57:22.015]   Labels:  app=guestbook
I1207 02:57:22.015]            tier=frontend
I1207 02:57:22.015]   Containers:
I1207 02:57:22.015]    php-redis:
I1207 02:57:22.015]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I1207 02:57:26.746] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1207 02:57:26.832] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1207 02:57:26.901] horizontalpodautoscaler.autoscaling "frontend" deleted
W1207 02:57:27.002] I1207 02:57:26.333327   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d3e1f431-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fl2sp
W1207 02:57:27.002] I1207 02:57:26.335611   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d3e1f431-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7gb5z
W1207 02:57:27.002] I1207 02:57:26.335647   55701 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1544151439-31329", Name:"frontend", UID:"d3e1f431-f9cb-11e8-a1d0-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kc4f7
W1207 02:57:27.002] Error: required flag(s) "max" not set
W1207 02:57:27.002] 
W1207 02:57:27.003] 
W1207 02:57:27.003] Examples:
W1207 02:57:27.003]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1207 02:57:27.003]   kubectl autoscale deployment foo --min=2 --max=10
W1207 02:57:27.003]   
... skipping 85 lines ...
I1207 02:57:29.729] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1207 02:57:29.811] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1207 02:57:29.905] statefulset.apps/nginx rolled back
I1207 02:57:29.989] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 02:57:30.073] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 02:57:30.168] Successful
I1207 02:57:30.168] message:error: unable to find specified revision 1000000 in history
I1207 02:57:30.168] has:unable to find specified revision
I1207 02:57:30.253] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I1207 02:57:30.336] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1207 02:57:30.428] statefulset.apps/nginx rolled back
I1207 02:57:30.516] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I1207 02:57:30.599] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I1207 02:57:32.301] Name:         mock
I1207 02:57:32.301] Namespace:    namespace-1544151451-30258
I1207 02:57:32.301] Selector:     app=mock
I1207 02:57:32.301] Labels:       app=mock
I1207 02:57:32.301] Annotations:  <none>
I1207 02:57:32.301] Replicas:     1 current / 1 desired
I1207 02:57:32.302] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:32.302] Pod Template:
I1207 02:57:32.302]   Labels:  app=mock
I1207 02:57:32.302]   Containers:
I1207 02:57:32.302]    mock-container:
I1207 02:57:32.302]     Image:        k8s.gcr.io/pause:2.0
I1207 02:57:32.302]     Port:         9949/TCP
... skipping 56 lines ...
I1207 02:57:34.282] Name:         mock
I1207 02:57:34.282] Namespace:    namespace-1544151451-30258
I1207 02:57:34.282] Selector:     app=mock
I1207 02:57:34.282] Labels:       app=mock
I1207 02:57:34.282] Annotations:  <none>
I1207 02:57:34.282] Replicas:     1 current / 1 desired
I1207 02:57:34.282] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:34.282] Pod Template:
I1207 02:57:34.282]   Labels:  app=mock
I1207 02:57:34.282]   Containers:
I1207 02:57:34.283]    mock-container:
I1207 02:57:34.283]     Image:        k8s.gcr.io/pause:2.0
I1207 02:57:34.283]     Port:         9949/TCP
... skipping 56 lines ...
I1207 02:57:36.217] Name:         mock
I1207 02:57:36.217] Namespace:    namespace-1544151451-30258
I1207 02:57:36.217] Selector:     app=mock
I1207 02:57:36.217] Labels:       app=mock
I1207 02:57:36.217] Annotations:  <none>
I1207 02:57:36.217] Replicas:     1 current / 1 desired
I1207 02:57:36.218] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:36.218] Pod Template:
I1207 02:57:36.218]   Labels:  app=mock
I1207 02:57:36.218]   Containers:
I1207 02:57:36.218]    mock-container:
I1207 02:57:36.218]     Image:        k8s.gcr.io/pause:2.0
I1207 02:57:36.218]     Port:         9949/TCP
... skipping 42 lines ...
I1207 02:57:38.062] Namespace:    namespace-1544151451-30258
I1207 02:57:38.062] Selector:     app=mock
I1207 02:57:38.062] Labels:       app=mock
I1207 02:57:38.063]               status=replaced
I1207 02:57:38.063] Annotations:  <none>
I1207 02:57:38.063] Replicas:     1 current / 1 desired
I1207 02:57:38.063] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:38.063] Pod Template:
I1207 02:57:38.063]   Labels:  app=mock
I1207 02:57:38.063]   Containers:
I1207 02:57:38.063]    mock-container:
I1207 02:57:38.063]     Image:        k8s.gcr.io/pause:2.0
I1207 02:57:38.063]     Port:         9949/TCP
... skipping 11 lines ...
I1207 02:57:38.065] Namespace:    namespace-1544151451-30258
I1207 02:57:38.065] Selector:     app=mock2
I1207 02:57:38.065] Labels:       app=mock2
I1207 02:57:38.065]               status=replaced
I1207 02:57:38.065] Annotations:  <none>
I1207 02:57:38.065] Replicas:     1 current / 1 desired
I1207 02:57:38.065] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1207 02:57:38.065] Pod Template:
I1207 02:57:38.065]   Labels:  app=mock2
I1207 02:57:38.066]   Containers:
I1207 02:57:38.066]    mock-container:
I1207 02:57:38.066]     Image:        k8s.gcr.io/pause:2.0
I1207 02:57:38.066]     Port:         9949/TCP
... skipping 107 lines ...
I1207 02:57:42.461] Context "test" modified.
I1207 02:57:42.467] +++ [1207 02:57:42] Testing persistent volumes
I1207 02:57:42.548] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I1207 02:57:42.689] persistentvolume/pv0001 created
I1207 02:57:42.778] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I1207 02:57:42.849] persistentvolume "pv0001" deleted
W1207 02:57:42.949] E1207 02:57:42.694712   55701 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I1207 02:57:43.050] persistentvolume/pv0002 created
I1207 02:57:43.084] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I1207 02:57:43.154] persistentvolume "pv0002" deleted
I1207 02:57:43.308] persistentvolume/pv0003 created
I1207 02:57:43.397] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I1207 02:57:43.471] persistentvolume "pv0003" deleted
... skipping 478 lines ...
I1207 02:57:48.008] yes
I1207 02:57:48.008] has:the server doesn't have a resource type
I1207 02:57:48.084] Successful
I1207 02:57:48.084] message:yes
I1207 02:57:48.084] has:yes
I1207 02:57:48.158] Successful
I1207 02:57:48.159] message:error: --subresource can not be used with NonResourceURL
I1207 02:57:48.159] has:subresource can not be used with NonResourceURL
I1207 02:57:48.232] Successful
I1207 02:57:48.308] Successful
I1207 02:57:48.308] message:yes
I1207 02:57:48.308] 0
I1207 02:57:48.308] has:0
... skipping 6 lines ...
I1207 02:57:48.489] role.rbac.authorization.k8s.io/testing-R reconciled
I1207 02:57:48.577] legacy-script.sh:736: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I1207 02:57:48.661] legacy-script.sh:737: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I1207 02:57:48.749] legacy-script.sh:738: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I1207 02:57:48.838] legacy-script.sh:739: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I1207 02:57:48.917] Successful
I1207 02:57:48.918] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I1207 02:57:48.918] has:only rbac.authorization.k8s.io/v1 is supported
I1207 02:57:49.012] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I1207 02:57:49.017] role.rbac.authorization.k8s.io "testing-R" deleted
I1207 02:57:49.026] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I1207 02:57:49.033] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I1207 02:57:49.043] Recording: run_retrieve_multiple_tests
... skipping 893 lines ...
I1207 02:58:15.289] message:node/127.0.0.1 already uncordoned (dry run)
I1207 02:58:15.289] has:already uncordoned
I1207 02:58:15.370] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I1207 02:58:15.440] node/127.0.0.1 labeled
I1207 02:58:15.523] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I1207 02:58:15.585] Successful
I1207 02:58:15.585] message:error: cannot specify both a node name and a --selector option
I1207 02:58:15.586] See 'kubectl drain -h' for help and examples.
I1207 02:58:15.586] has:cannot specify both a node name
I1207 02:58:15.648] Successful
I1207 02:58:15.648] message:error: USAGE: cordon NODE [flags]
I1207 02:58:15.648] See 'kubectl cordon -h' for help and examples.
I1207 02:58:15.648] has:error\: USAGE\: cordon NODE
I1207 02:58:15.716] node/127.0.0.1 already uncordoned
I1207 02:58:15.783] Successful
I1207 02:58:15.783] message:error: You must provide one or more resources by argument or filename.
I1207 02:58:15.783] Example resource specifications include:
I1207 02:58:15.783]    '-f rsrc.yaml'
I1207 02:58:15.783]    '--filename=rsrc.json'
I1207 02:58:15.783]    '<resource> <name>'
I1207 02:58:15.783]    '<resource>'
I1207 02:58:15.783] has:must provide one or more resources
... skipping 15 lines ...
I1207 02:58:16.177] Successful
I1207 02:58:16.178] message:The following kubectl-compatible plugins are available:
I1207 02:58:16.178] 
I1207 02:58:16.178] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I1207 02:58:16.178]   - warning: kubectl-version overwrites existing command: "kubectl version"
I1207 02:58:16.178] 
I1207 02:58:16.178] error: one plugin warning was found
I1207 02:58:16.178] has:kubectl-version overwrites existing command: "kubectl version"
I1207 02:58:16.242] Successful
I1207 02:58:16.242] message:The following kubectl-compatible plugins are available:
I1207 02:58:16.243] 
I1207 02:58:16.243] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 02:58:16.243] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I1207 02:58:16.243]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 02:58:16.243] 
I1207 02:58:16.243] error: one plugin warning was found
I1207 02:58:16.243] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I1207 02:58:16.306] Successful
I1207 02:58:16.307] message:The following kubectl-compatible plugins are available:
I1207 02:58:16.307] 
I1207 02:58:16.307] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I1207 02:58:16.307] has:plugins are available
I1207 02:58:16.372] Successful
I1207 02:58:16.372] message:
I1207 02:58:16.373] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I1207 02:58:16.373] error: unable to find any kubectl plugins in your PATH
I1207 02:58:16.373] has:unable to find any kubectl plugins in your PATH
I1207 02:58:16.436] Successful
I1207 02:58:16.436] message:I am plugin foo
I1207 02:58:16.436] has:plugin foo
I1207 02:58:16.502] Successful
I1207 02:58:16.502] message:Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.1-beta.0.12+d2d6ac07e4ea8e", GitCommit:"d2d6ac07e4ea8e2e70307c473a627990d4b50c51", GitTreeState:"clean", BuildDate:"2018-12-07T02:51:57Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I1207 02:58:16.570] 
I1207 02:58:16.572] +++ Running case: test-cmd.run_impersonation_tests 
I1207 02:58:16.574] +++ working dir: /go/src/k8s.io/kubernetes
I1207 02:58:16.576] +++ command: run_impersonation_tests
I1207 02:58:16.585] +++ [1207 02:58:16] Testing impersonation
I1207 02:58:16.650] Successful
I1207 02:58:16.650] message:error: requesting groups or user-extra for  without impersonating a user
I1207 02:58:16.651] has:without impersonating a user
I1207 02:58:16.796] certificatesigningrequest.certificates.k8s.io/foo created
I1207 02:58:16.888] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I1207 02:58:16.972] (Bauthorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I1207 02:58:17.051] (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted
I1207 02:58:17.202] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 268 lines ...
I1207 03:11:28.656] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.976s
I1207 03:11:28.656] [restful] 2018/12/07 03:03:58 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34141/swaggerapi
I1207 03:11:28.656] [restful] 2018/12/07 03:03:58 log.go:33: [restful/swagger] https://127.0.0.1:34141/swaggerui/ is mapped to folder /swagger-ui/
I1207 03:11:28.656] [restful] 2018/12/07 03:04:01 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34141/swaggerapi
I1207 03:11:28.656] [restful] 2018/12/07 03:04:01 log.go:33: [restful/swagger] https://127.0.0.1:34141/swaggerui/ is mapped to folder /swagger-ui/
I1207 03:11:28.656] ok  	k8s.io/kubernetes/test/integration/tls	13.474s
I1207 03:11:28.656] FAIL	k8s.io/kubernetes/test/integration/ttlcontroller	442.250s
I1207 03:11:28.656] ok  	k8s.io/kubernetes/test/integration/volume	92.006s
I1207 03:11:28.657] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	147.114s
I1207 03:11:29.327] +++ [1207 03:11:29] Saved JUnit XML test report to /workspace/artifacts/junit_f5a444384056ebac4f2929ce7b7920ea9733ca19_20181207-025834.xml
I1207 03:11:29.329] Makefile:184: recipe for target 'test' failed
I1207 03:11:29.338] +++ [1207 03:11:29] Cleaning up etcd
W1207 03:11:29.439] make[1]: *** [test] Error 1
W1207 03:11:29.439] !!! [1207 03:11:29] Call tree:
W1207 03:11:29.439] !!! [1207 03:11:29]  1: hack/make-rules/test-integration.sh:105 runTests(...)
W1207 03:11:29.519] make: *** [test-integration] Error 1
I1207 03:11:29.619] +++ [1207 03:11:29] Integration test cleanup complete
I1207 03:11:29.620] Makefile:203: recipe for target 'test-integration' failed
W1207 03:11:30.527] Traceback (most recent call last):
W1207 03:11:30.527]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 167, in <module>
W1207 03:11:30.527]     main(ARGS.branch, ARGS.script, ARGS.force, ARGS.prow)
W1207 03:11:30.527]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 136, in main
W1207 03:11:30.528]     check(*cmd)
W1207 03:11:30.528]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W1207 03:11:30.528]     subprocess.check_call(cmd)
W1207 03:11:30.528]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 03:11:30.551]     raise CalledProcessError(retcode, cmd)
W1207 03:11:30.552] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=release-1.13', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181105-ceed87206', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E1207 03:11:30.558] Command failed
I1207 03:11:30.558] process 712 exited with code 1 after 25.3m
E1207 03:11:30.558] FAIL: pull-kubernetes-integration
I1207 03:11:30.559] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 03:11:31.008] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 03:11:31.050] process 123984 exited with code 0 after 0.0m
I1207 03:11:31.050] Call:  gcloud config get-value account
I1207 03:11:31.293] process 123997 exited with code 0 after 0.0m
I1207 03:11:31.293] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 03:11:31.293] Upload result and artifacts...
I1207 03:11:31.293] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71824/pull-kubernetes-integration/37816
I1207 03:11:31.294] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71824/pull-kubernetes-integration/37816/artifacts
W1207 03:11:32.946] CommandException: One or more URLs matched no objects.
E1207 03:11:33.096] Command failed
I1207 03:11:33.096] process 124010 exited with code 1 after 0.0m
W1207 03:11:33.096] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71824/pull-kubernetes-integration/37816/artifacts not exist yet
I1207 03:11:33.097] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71824/pull-kubernetes-integration/37816/artifacts
I1207 03:11:36.182] process 124155 exited with code 0 after 0.1m
W1207 03:11:36.182] metadata path /workspace/_artifacts/metadata.json does not exist
W1207 03:11:36.182] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...