Result: FAILURE
Tests: 1 failed / 2916 succeeded / 6 skipped
Started: 2020-06-30 13:28
Elapsed: 30m20s
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/3a8ba008-6886-49b9-a8e9-64b2348859da/targets/test

Test Failures


k8s.io/kubernetes/test/integration/client TestCertRotationContinuousRequests 56s

go test -v k8s.io/kubernetes/test/integration/client -run TestCertRotationContinuousRequests$
=== RUN   TestCertRotationContinuousRequests
I0630 13:48:03.296445  108287 controller.go:123] Shutting down OpenAPI controller
I0630 13:48:03.296462  108287 establishing_controller.go:87] Shutting down EstablishingController
I0630 13:48:03.296486  108287 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0630 13:48:03.296532  108287 dynamic_cafile_content.go:182] Shutting down request-header::/tmp/kubernetes-kube-apiserver344466617/proxy-ca.crt
I0630 13:48:03.296545  108287 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/tmp/ca.crt
I0630 13:48:03.296566  108287 controller.go:87] Shutting down OpenAPI AggregationController
I0630 13:48:03.296590  108287 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/tmp/ca.crt
I0630 13:48:03.296668  108287 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0630 13:48:03.296696  108287 naming_controller.go:302] Shutting down NamingConditionController
I0630 13:48:03.296708  108287 crd_finalizer.go:278] Shutting down CRDFinalizer
I0630 13:48:03.296723  108287 available_controller.go:404] Shutting down AvailableConditionController
I0630 13:48:03.296737  108287 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0630 13:48:03.296747  108287 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0630 13:48:03.296750  108287 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/kubernetes-kube-apiserver344466617/apiserver.crt::/tmp/kubernetes-kube-apiserver344466617/apiserver.key
I0630 13:48:03.296762  108287 apiservice_controller.go:128] Shutting down APIServiceRegistrationController
I0630 13:48:03.296777  108287 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0630 13:48:03.296807  108287 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0630 13:48:03.296821  108287 autoregister_controller.go:165] Shutting down autoregister controller
I0630 13:48:03.296839  108287 secure_serving.go:231] Stopped listening on 127.0.0.1:43593
W0630 13:48:03.297008  108287 reflector.go:423] k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:117: watch of *v1.APIService ended with: very short watch: k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297061  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.IngressClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297089  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.ResourceQuota ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297103  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297121  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.LimitRange ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297145  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297234  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W0630 13:48:03.297173  108287 reflector.go:423] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
E0630 13:48:03.298252  108287 controller.go:184] Get "https://127.0.0.1:43593/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:43593: connect: connection refused
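The refused connection above is shutdown noise: the previous test server had just stopped listening on 127.0.0.1:43593 (logged above), so the endpoint reconciler's final GET could not connect. A minimal Go sketch of the same failure mode (standalone illustration, assuming nothing is listening on that port):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dialing a TCP port with no listener fails the same way the
	// reconciler's request does once the server has shut down.
	_, err := net.Dial("tcp", "127.0.0.1:43593")
	fmt.Println(err) // dial tcp 127.0.0.1:43593: connect: connection refused
}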
    TestCertRotationContinuousRequests: testserver.go:312: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
I0630 13:48:05.028805  108287 serving.go:325] Generated self-signed cert (/tmp/kubernetes-kube-apiserver470517444/apiserver.crt, /tmp/kubernetes-kube-apiserver470517444/apiserver.key)
I0630 13:48:05.028889  108287 server.go:621] external host was not specified, using 127.0.0.1
W0630 13:48:05.028942  108287 authentication.go:484] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
    TestCertRotationContinuousRequests: testserver.go:183: runtime-config=map[api/all:true]
    TestCertRotationContinuousRequests: testserver.go:184: Starting kube-apiserver on port 46379...
W0630 13:48:06.637408  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.637440  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.637451  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.637668  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638777  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638828  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638867  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638891  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638925  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.638983  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.639243  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.639466  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:48:06.639550  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0630 13:48:06.639573  108287 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0630 13:48:06.639583  108287 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0630 13:48:06.641029  108287 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0630 13:48:06.641054  108287 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0630 13:48:06.642881  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.642926  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.645599  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.645635  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0630 13:48:06.681651  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0630 13:48:06.682878  108287 master.go:270] Using reconciler: lease
I0630 13:48:06.683309  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.683350  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.686490  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.686527  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.687876  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.687909  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.689037  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.689064  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.689945  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.689981  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.690909  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.690942  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.691725  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.691753  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.693400  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.693427  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.694412  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.694451  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.695424  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.695467  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.697236  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.697266  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.698324  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.698351  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.699292  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.699324  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.700887  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.700914  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.701913  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.701938  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.703978  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.704011  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.704984  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.705012  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.705733  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.705762  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.914879  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.915003  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.916678  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.916707  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.918884  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.918919  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.920093  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.920253  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.923252  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.923299  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.924645  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.924678  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.926993  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.927043  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.929611  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.929646  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.937183  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.937219  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.939309  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.939462  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.941073  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.941110  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.942518  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.942560  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.945317  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.945357  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.948096  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.948304  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.949879  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.949916  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.952970  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.953004  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.955740  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.955774  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.957236  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.957269  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.960462  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.960495  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.962647  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.962775  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.965534  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.965997  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.968020  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.968050  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.969780  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.969799  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.973410  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.973577  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.977436  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.977471  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.979208  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.979337  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.981003  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.981034  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.982067  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.982086  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.985719  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.985750  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.989358  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.989393  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:06.990706  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:06.990735  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.014163  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.014406  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.018490  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.018531  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.024049  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.024087  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.041979  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.042037  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.045481  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.045957  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.048600  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.048642  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.051122  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.051166  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.052645  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.052685  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.054061  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.054094  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.057113  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.057185  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.061449  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.061500  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.062725  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.062773  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.064256  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.064299  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.066517  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.066559  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.069452  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.069493  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.072763  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.072812  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.082503  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.082549  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.085803  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.085847  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.090498  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.090736  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.093711  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.093737  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.095416  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.095460  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.099772  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.099915  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.103296  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.103525  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.110270  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.110397  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.114254  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.114285  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.116715  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.116746  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.117633  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.117663  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0630 13:48:07.385020  108287 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0630 13:48:07.536749  108287 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W0630 13:48:07.536785  108287 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I0630 13:48:07.552237  108287 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0630 13:48:07.552265  108287 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0630 13:48:07.553836  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0630 13:48:07.554057  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.554084  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:07.555334  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.555372  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0630 13:48:07.558929  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
    TestCertRotationContinuousRequests: testserver.go:200: Waiting for /healthz to be ok...
I0630 13:48:07.642993  108287 client.go:360] parsed scheme: "endpoint"
I0630 13:48:07.643080  108287 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:48:14.402536  108287 dynamic_cafile_content.go:167] Starting request-header::/tmp/kubernetes-kube-apiserver470517444/proxy-ca.crt
I0630 13:48:14.402550  108287 dynamic_cafile_content.go:167] Starting client-ca-bundle::/tmp/ca.crt
I0630 13:48:14.403089  108287 dynamic_serving_content.go:130] Starting serving-cert::/tmp/kubernetes-kube-apiserver470517444/apiserver.crt::/tmp/kubernetes-kube-apiserver470517444/apiserver.key
I0630 13:48:14.403734  108287 secure_serving.go:187] Serving securely on 127.0.0.1:46379
I0630 13:48:14.403803  108287 tlsconfig.go:240] Starting DynamicServingCertificateController
I0630 13:48:14.403982  108287 customresource_discovery_controller.go:209] Starting DiscoveryController
I0630 13:48:14.409248  108287 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0630 13:48:14.412967  108287 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0630 13:48:14.410580  108287 available_controller.go:392] Starting AvailableConditionController
I0630 13:48:14.413145  108287 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0630 13:48:14.410595  108287 controller.go:81] Starting OpenAPI AggregationController
I0630 13:48:14.410694  108287 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0630 13:48:14.410928  108287 autoregister_controller.go:141] Starting autoregister controller
I0630 13:48:14.410955  108287 controller.go:86] Starting OpenAPI controller
I0630 13:48:14.410997  108287 naming_controller.go:291] Starting NamingConditionController
I0630 13:48:14.411009  108287 establishing_controller.go:76] Starting EstablishingController
I0630 13:48:14.411168  108287 crdregistration_controller.go:111] Starting crd-autoregister controller
I0630 13:48:14.411657  108287 crd_finalizer.go:266] Starting CRDFinalizer
I0630 13:48:14.411670  108287 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0630 13:48:14.412067  108287 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /eef59620-a0c1-4f47-be9e-63c38d77a9ff/registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I0630 13:48:14.413501  108287 cache.go:32] Waiting for caches to sync for autoregister controller
I0630 13:48:14.413603  108287 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
W0630 13:48:14.414111  108287 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0630 13:48:14.414250  108287 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0630 13:48:14.414266  108287 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0630 13:48:14.414310  108287 dynamic_cafile_content.go:167] Starting client-ca-bundle::/tmp/ca.crt
I0630 13:48:14.414333  108287 dynamic_cafile_content.go:167] Starting request-header::/tmp/kubernetes-kube-apiserver470517444/proxy-ca.crt
W0630 13:48:14.420525  108287 warnings.go:67] node.k8s.io/v1beta1 RuntimeClass is deprecated in v1.22+, unavailable in v1.25+
W0630 13:48:14.422479  108287 warnings.go:67] node.k8s.io/v1beta1 RuntimeClass is deprecated in v1.22+, unavailable in v1.25+
I0630 13:48:14.513132  108287 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0630 13:48:14.513176  108287 cache.go:39] Caches are synced for AvailableConditionController controller
I0630 13:48:14.513586  108287 cache.go:39] Caches are synced for autoregister controller
I0630 13:48:14.513843  108287 shared_informer.go:247] Caches are synced for crd-autoregister 
I0630 13:48:14.514331  108287 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0630 13:48:15.402305  108287 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0630 13:48:15.402349  108287 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0630 13:48:15.416816  108287 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0630 13:48:15.422786  108287 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0630 13:48:15.422809  108287 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
W0630 13:48:15.456612  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:15.458087  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:25.446004  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:25.447458  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
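The lease warning and Endpoints validation error above repeat roughly every ten seconds for the life of the test server and are expected noise in this setup: the integration apiserver advertises itself on 127.0.0.1, and Endpoints validation rejects addresses in 127.0.0.0/8. That range is ordinary IPv4 loopback, as a standard-library check shows (a sketch; this is not the apiserver's validation code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// The rejected 127.0.0.0/8 range is what net.IP.IsLoopback
	// reports for IPv4 addresses.
	fmt.Println(net.ParseIP("127.0.0.1").IsLoopback()) // true
}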
E0630 13:48:30.081983  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.087650  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.100601  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.121818  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.162379  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.242565  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.402701  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.722859  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:31.081057  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
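These cert_rotation.go errors, retried with growing backoff, mark the onset of the failure: after the test rotates the serving certificate, the rotation controller keeps seeing a certificate and private key that do not belong together. The error string itself comes from crypto/tls, which refuses to pair a certificate with a non-matching key. A self-contained sketch reproducing just that TLS-layer check (not the test's own code):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// keyA signs the certificate; keyB is a different, non-matching key.
	keyA, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	keyB, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "test"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &keyA.PublicKey, keyA)
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyDER, _ := x509.MarshalECPrivateKey(keyB)
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})

	// tls.X509KeyPair checks that the private key matches the
	// certificate's public key and fails with the error logged above.
	_, err := tls.X509KeyPair(certPEM, keyPEM)
	fmt.Println(err) // tls: private key does not match public key
}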
W0630 13:48:35.446106  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:35.447438  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:45.444955  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:45.446502  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:55.450338  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:55.451879  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
    TestCertRotationContinuousRequests: cert_rotation_test.go:169: Get "https://127.0.0.1:46379/api/v1/namespaces/default/serviceaccounts": context canceled
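This is the assertion that fails the test: a request in the continuous-request loop returns "context canceled", meaning the test gave up (after roughly 30 seconds of the key-mismatch errors above) before the request completed. Go's HTTP client surfaces cancellation in exactly this shape; a minimal sketch (cancellation forced before the round trip):

package main

import (
	"context"
	"fmt"
	"net/http"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://127.0.0.1:46379/api/v1/namespaces/default/serviceaccounts", nil)
	cancel() // cancel before the request is sent
	_, err := http.DefaultClient.Do(req)
	fmt.Println(err) // Get "https://...": context canceled
}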
W0630 13:49:00.083537  108287 cacher.go:148] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
E0630 13:49:00.083945  108287 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
W0630 13:49:00.084189  108287 cacher.go:148] Terminating all watchers from cacher *core.LimitRange
W0630 13:49:00.084339  108287 cacher.go:148] Terminating all watchers from cacher *core.ResourceQuota
W0630 13:49:00.084390  108287 cacher.go:148] Terminating all watchers from cacher *core.Secret
W0630 13:49:00.084624  108287 cacher.go:148] Terminating all watchers from cacher *core.ConfigMap
W0630 13:49:00.084713  108287 cacher.go:148] Terminating all watchers from cacher *core.Namespace
W0630 13:49:00.084830  108287 cacher.go:148] Terminating all watchers from cacher *core.Endpoints
W0630 13:49:00.085116  108287 cacher.go:148] Terminating all watchers from cacher *core.Pod
W0630 13:49:00.085248  108287 cacher.go:148] Terminating all watchers from cacher *core.ServiceAccount
W0630 13:49:00.085559  108287 cacher.go:148] Terminating all watchers from cacher *core.Service
W0630 13:49:00.087128  108287 cacher.go:148] Terminating all watchers from cacher *networking.IngressClass
W0630 13:49:00.087800  108287 cacher.go:148] Terminating all watchers from cacher *node.RuntimeClass
W0630 13:49:00.089728  108287 cacher.go:148] Terminating all watchers from cacher *scheduling.PriorityClass
W0630 13:49:00.090483  108287 cacher.go:148] Terminating all watchers from cacher *storage.StorageClass
W0630 13:49:00.091899  108287 cacher.go:148] Terminating all watchers from cacher *admissionregistration.ValidatingWebhookConfiguration
W0630 13:49:00.092142  108287 cacher.go:148] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration
W0630 13:49:00.092768  108287 cacher.go:148] Terminating all watchers from cacher *apiregistration.APIService
I0630 13:49:00.093262  108287 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0630 13:49:00.093313  108287 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0630 13:49:00.093306  108287 naming_controller.go:302] Shutting down NamingConditionController
I0630 13:49:00.093288  108287 controller.go:123] Shutting down OpenAPI controller
I0630 13:49:00.093342  108287 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0630 13:49:00.093358  108287 crdregistration_controller.go:142] Shutting down crd-autoregister controller
--- FAIL: TestCertRotationContinuousRequests (56.80s)
I0630 13:49:00.093269  108287 dynamic_cafile_content.go:182] Shutting down request-header::/tmp/kubernetes-kube-apiserver470517444/proxy-ca.crt
I0630 13:49:00.093384  108287 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0630 13:49:00.093376  108287 autoregister_controller.go:165] Shutting down autoregister controller
I0630 13:49:00.093398  108287 establishing_controller.go:87] Shutting down EstablishingController

				from junit_20200630-134520.xml


Error lines from build-log.txt

... skipping 62 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 154: bogus-expected-to-fail: command not found
!!! [0630 13:33:04] Call tree:
!!! [0630 13:33:04]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0630 13:33:04]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0630 13:33:04]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:130 juLog(...)
!!! [0630 13:33:04]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:158 record_command(...)
!!! [0630 13:33:04]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0630 13:33:04] Running kubeadm tests
+++ [0630 13:33:10] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0630 13:33:57] Running tests without code coverage
{"Time":"2020-06-30T13:35:26.857682476Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t52.227s\n"}
✓  cmd/kubeadm/test/cmd (52.232s)
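The JSON line above is a "go test -json" event emitted by test2json; the check-mark line is the human-readable summary of the same event. A minimal sketch decoding such a line into the documented event schema (field set from "go doc cmd/test2json"):

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// TestEvent mirrors the go test -json / test2json event schema.
type TestEvent struct {
	Time    time.Time
	Action  string
	Package string
	Test    string
	Elapsed float64
	Output  string
}

func main() {
	line := `{"Time":"2020-06-30T13:35:26.857682476Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t52.227s\n"}`
	var ev TestEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s: %s", ev.Action, ev.Package, ev.Output)
}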
... skipping 319 lines ...
I0630 13:37:46.653634   53853 client.go:360] parsed scheme: "passthrough"
I0630 13:37:46.653703   53853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0630 13:37:46.653716   53853 clientconn.go:948] ClientConn switching balancer to "pick_first"
+++ [0630 13:37:48] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0630 13:37:48.891474   57355 serving.go:331] Generated self-signed cert in-memory
W0630 13:37:49.521097   57355 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0630 13:37:49.521149   57355 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0630 13:37:49.521164   57355 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0630 13:37:49.521178   57355 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0630 13:37:49.521194   57355 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0630 13:37:49.521215   57355 controllermanager.go:175] Version: v1.19.0-beta.2.568+908847c01e9640
I0630 13:37:49.524083   57355 secure_serving.go:187] Serving securely on [::]:10257
I0630 13:37:49.524195   57355 tlsconfig.go:240] Starting DynamicServingCertificateController
I0630 13:37:49.524870   57355 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0630 13:37:49.524937   57355 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...
+++ [0630 13:37:49] On try 2, controller-manager: ok
I0630 13:37:49.535957   53853 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0630 13:37:49.538978   57355 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0630 13:37:49.539128   57355 event.go:291] "Event occurred" [object kube-system/kube-controller-manager kind Endpoints apiVersion v1 type Normal reason LeaderElection message 3a98ed8f-bad5-11ea-aff6-1681b180c6aa_29006a0e-4a35-4b69-b629-fd2445e1adae became leader]="(MISSING)"
I0630 13:37:49.539632   57355 event.go:291] "Event occurred" [object kube-system/kube-controller-manager kind Lease apiVersion coordination.k8s.io/v1 type Normal reason LeaderElection message 3a98ed8f-bad5-11ea-aff6-1681b180c6aa_29006a0e-4a35-4b69-b629-fd2445e1adae became leader]="(MISSING)"
W0630 13:37:49.842813   57355 controllermanager.go:567] "serviceaccount-token" is disabled because there is no private key
I0630 13:37:49.843179   57355 node_lifecycle_controller.go:77] Sending events to api server
E0630 13:37:49.843227   57355 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W0630 13:37:49.843242   57355 controllermanager.go:539] Skipping "cloud-node-lifecycle"
W0630 13:37:49.843809   57355 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:37:49.843876   57355 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:37:49.844682   57355 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:37:49.844711   57355 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0630 13:37:49.844738   57355 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 73 lines ...
I0630 13:37:49.857334   57355 controllermanager.go:547] Started "job"
I0630 13:37:49.857467   57355 job_controller.go:148] Starting job controller
I0630 13:37:49.857480   57355 shared_informer.go:240] Waiting for caches to sync for job
I0630 13:37:49.857693   57355 controllermanager.go:547] Started "deployment"
I0630 13:37:49.857812   57355 deployment_controller.go:153] Starting deployment controller
I0630 13:37:49.857821   57355 shared_informer.go:240] Waiting for caches to sync for deployment
E0630 13:37:49.857949   57355 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0630 13:37:49.857959   57355 controllermanager.go:539] Skipping "service"
I0630 13:37:49.858207   57355 controllermanager.go:547] Started "csrapproving"
W0630 13:37:49.858216   57355 controllermanager.go:539] Skipping "ttl-after-finished"
I0630 13:37:49.858315   57355 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0630 13:37:49.858327   57355 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0630 13:37:49.858468   57355 controllermanager.go:547] Started "podgc"
... skipping 57 lines ...
I0630 13:37:50.171024   57355 controllermanager.go:547] Started "cronjob"
W0630 13:37:50.171062   57355 controllermanager.go:539] Skipping "csrsigning"
I0630 13:37:50.171412   57355 controllermanager.go:547] Started "ttl"
I0630 13:37:50.173583   57355 cronjob_controller.go:96] Starting CronJob Manager
I0630 13:37:50.173804   57355 ttl_controller.go:118] Starting TTL controller
I0630 13:37:50.173811   57355 shared_informer.go:240] Waiting for caches to sync for TTL
W0630 13:37:50.216981   57355 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0630 13:37:50.246805   57355 shared_informer.go:247] Caches are synced for expand 
I0630 13:37:50.248456   57355 shared_informer.go:247] Caches are synced for persistent volume 
I0630 13:37:50.248876   57355 shared_informer.go:247] Caches are synced for daemon sets 
I0630 13:37:50.249517   57355 shared_informer.go:247] Caches are synced for ReplicaSet 
I0630 13:37:50.250411   57355 shared_informer.go:247] Caches are synced for PVC protection 
I0630 13:37:50.250848   57355 shared_informer.go:247] Caches are synced for PV protection 
... skipping 143 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0630 13:37:56] Creating namespace namespace-1593524276-7054
namespace/namespace-1593524276-7054 created
Context "test" modified.
+++ [0630 13:37:56] Testing RESTMapper
+++ [0630 13:37:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 59 lines ...
namespace/namespace-1593524281-17402 created
Context "test" modified.
+++ [0630 13:38:02] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 29 lines ...
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1593524291-19058 namespace.
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1593524291-19058 namespace.
Error: 1 warning received
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1593524291-19058 namespace.
Error: 1 warning received
has:Error: 1 warning received
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:163: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:164: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:165: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 461 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:210: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:215: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 22 lines ...
I0630 13:38:27.461306   53853 client.go:360] parsed scheme: "passthrough"
I0630 13:38:27.461375   53853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0630 13:38:27.461391   53853 clientconn.go:948] ClientConn switching balancer to "pick_first"
core.sh:265: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:269: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:275: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 221 lines ...
core.sh:534: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:554: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0630 13:38:47] "kubectl patch with resourceVersion 560" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:578: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-create kubectl-patch kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0630 13:38:49.100099   57355 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:606: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:631: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
I0630 13:38:50.257415   57355 event.go:291] "Event occurred" [object node-v1-test kind Node apiVersion v1 type Normal reason RegisteredNode message Node node-v1-test event: Registered Node node-v1-test in Controller]="(MISSING)"
... skipping 30 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:683: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:687: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:699: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0630 13:39:02] Creating namespace namespace-1593524342-27071
namespace/namespace-1593524342-27071 created
Context "test" modified.
+++ [0630 13:39:03] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 42 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0630 13:39:03] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 34 lines ...
I0630 13:39:07.550491   57355 event.go:291] "Event occurred" [object namespace-1593524343-22170/test-deployment-retainkeys-54b7586755 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: test-deployment-retainkeys-54b7586755-ddfrj]="(MISSING)"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0630 13:39:09.051794   65836 helpers.go:567] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
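Note: the helpers.go warning above marks the dry-run flag migration; the boolean form is deprecated in favor of an explicit strategy:
  $ kubectl create -f pod.yaml --dry-run=client   # replaces the deprecated --dry-run / --dry-run=true
  $ kubectl create -f pod.yaml --dry-run=server   # server runs admission/validation but persists nothing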
... skipping 11 lines ...
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
I0630 13:39:12.505711   53853 client.go:360] parsed scheme: "endpoint"
I0630 13:39:12.505755   53853 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0630 13:39:12.512610   53853 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj created (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
namespace/nsb created
apply.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:161: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
pod/a pruned
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
apply.sh:165: Successful get pods b -n nsb {{.metadata.name}}: b
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
pod "b" deleted
apply.sh:175: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:180: Successful get pods a {{.metadata.name}}: a
Successful
message:Error from server (NotFound): pods "b" not found
has:pods "b" not found
pod/b created
apply.sh:188: Successful get pods a {{.metadata.name}}: a
apply.sh:189: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
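Note: the prune error above is the safety check that --prune never runs unscoped: it must be narrowed by a label selector or explicitly widened with --all (paths illustrative):
  $ kubectl apply --prune -f manifests/              # rejected, nothing scopes the prune
  $ kubectl apply --prune -l app=mine -f manifests/  # prune only objects matching the selector
  $ kubectl apply --prune --all -f manifests/        # prune everything previously applied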
I0630 13:39:17.194788   57355 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1593524339-669
pod/a created
pod/b created
service/prune-svc created
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
... skipping 40 lines ...
pod/b created
apply.sh:242: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
apply.sh:249: Successful get pods b -n nsb {{.metadata.name}}: b
(Bnamespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
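Note: the error above is kubectl's namespace cross-check: an object whose metadata.namespace (nsb) disagrees with the requested --namespace (foo) is rejected rather than silently re-homed. Sketch (file name illustrative):
  $ kubectl apply --namespace=foo -f pod-in-nsb.yaml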
apply.sh:260: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:264: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIP: Invalid value: "10.0.0.12": field is immutable
has:field is immutable
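Note: spec.clusterIP is immutable once a Service is allocated, so an apply carrying a different IP is rejected with the message above; the later "service/a configured" and the 10.0.0.12 assertion suggest the suite then recreates the Service rather than mutating it in place. Sketch (file name illustrative):
  $ kubectl apply -f service-a-changed-ip.yaml             # rejected: field is immutable
  $ kubectl replace --force -f service-a-changed-ip.yaml   # delete-and-recreate path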
I0630 13:39:44.636653   57355 event.go:291] "Event occurred" [object namespace-1593524343-22170/a kind Endpoints apiVersion v1 type Warning reason FailedToUpdateEndpoint message Failed to update endpoint namespace-1593524343-22170/a: Operation cannot be fulfilled on endpoints "a": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/namespace-1593524343-22170/a, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f5524e31-d762-4885-a7c3-b12fe0defcf6, UID in object meta: ]="(MISSING)"
service/a configured
apply.sh:271: Successful get services a {{.spec.clusterIP}}: 10.0.0.12
(Bservice "a" deleted
configmap/test-the-map created
service/test-the-service created
deployment.apps/test-the-deployment created
... skipping 18 lines ...
apply.sh:286: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:287: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:295: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0630 13:39:47.724868   53853 client.go:360] parsed scheme: "passthrough"
I0630 13:39:47.724929   53853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0630 13:39:47.724943   53853 clientconn.go:948] ClientConn switching balancer to "pick_first"
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
I0630 13:39:48.351931   57355 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:303: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:309: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:315: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:321: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 6 lines ...
pod "pod-c" deleted
apply.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:333: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0630 13:39:56.148490   53853 client.go:360] parsed scheme: "endpoint"
I0630 13:39:56.148531   53853 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:339: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0630 13:39:56.514962   53853 controller.go:606] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 37 lines ...
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
I0630 13:40:00.716707   53853 client.go:360] parsed scheme: "endpoint"
I0630 13:40:00.716748   53853 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0630 13:40:02] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0630 13:40:06.563677   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-954fb5f79 to 3]="(MISSING)"
I0630 13:40:06.567837   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-954fb5f79 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-954fb5f79-bpkrv]="(MISSING)"
I0630 13:40:06.571362   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-954fb5f79 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-954fb5f79-45t89]="(MISSING)"
I0630 13:40:06.573252   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-954fb5f79 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-954fb5f79-zgjqt]="(MISSING)"
apps.sh:152: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1593524403-17979\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1593524403-17979"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
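Note: the Conflict above is expected: the applied fixture pins "resourceVersion":"99" in its body, so the server refuses the patch once the live Deployment has advanced. Dropping resourceVersion from the manifest (or re-fetching before applying) avoids this; the following "deployment.apps/nginx configured" is the successful retry.
  $ kubectl apply -f hack/testdata/deployment-label-change2.yaml   # carries the stale resourceVersion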
deployment.apps/nginx configured
I0630 13:40:16.319911   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-6bc449d9b6 to 3]="(MISSING)"
I0630 13:40:16.325774   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-6bc449d9b6 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-6bc449d9b6-9jtst]="(MISSING)"
I0630 13:40:16.328467   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-6bc449d9b6 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-6bc449d9b6-5mm5s]="(MISSING)"
I0630 13:40:16.329420   57355 event.go:291] "Event occurred" [object namespace-1593524403-17979/nginx-6bc449d9b6 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-6bc449d9b6-zqwjk]="(MISSING)"
Successful
... skipping 314 lines ...
+++ [0630 13:40:27] Creating namespace namespace-1593524427-4635
namespace/namespace-1593524427-4635 created
Context "test" modified.
+++ [0630 13:40:27] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1593524427-4635 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1593524427-4635 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0630 13:40:30.114591   69200 loader.go:375] Config loaded from file:  /tmp/tmp.LWNw8MGqwq/.kube/config
I0630 13:40:30.116004   69200 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0630 13:40:30.150651   69200 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0630 13:40:30.152603   69200 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 624 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-06-30T13:40:38Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2020-06-30T13:40:38Z"}}, "name":"valid-pod", "namespace":"namespace-1593524437-21511", "resourceVersion":"1069", "selfLink":"/api/v1/namespaces/namespace-1593524437-21511/pods/valid-pod", "uid":"58544af9-e689-4d3b-b1f5-483c203772dc"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-06-30T13:40:38Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2020-06-30T13:40:38Z"}],"name":"valid-pod","namespace":"namespace-1593524437-21511","resourceVersion":"1069","selfLink":"/api/v1/namespaces/namespace-1593524437-21511/pods/valid-pod","uid":"58544af9-e689-4d3b-b1f5-483c203772dc"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-06-30T13:40:38Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2020-06-30T13:40:38Z]] name:valid-pod namespace:namespace-1593524437-21511 resourceVersion:1069 selfLink:/api/v1/namespaces/namespace-1593524437-21511/pods/valid-pod uid:58544af9-e689-4d3b-b1f5-483c203772dc] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
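Note: the two failures above contrast the output engines on a key the object lacks: jsonpath reports "missing is not found" and go-template reports "map has no entry for key", each dumping the object for debugging. Equivalent queries:
  $ kubectl get pod valid-pod -o jsonpath='{.missing}'
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'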
... skipping 158 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 36 lines ...
+++ [0630 13:40:44] Creating namespace namespace-1593524444-19981
namespace/namespace-1593524444-19981 created
Context "test" modified.
+++ [0630 13:40:44] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
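Note: every exec above first prints the deprecation notice for the bare argument form; commands should be separated with --:
  $ kubectl exec test-pod date      # deprecated form
  $ kubectl exec test-pod -- date   # preferred form
The BadRequest responses are expected here: these integration pods are never scheduled, so there is no node to exec on.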
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0630 13:40:45] Creating namespace namespace-1593524445-18443
namespace/namespace-1593524445-18443 created
Context "test" modified.
+++ [0630 13:40:45] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0630 13:40:46.775908   57355 event.go:291] "Event occurred" [object namespace-1593524445-18443/frontend kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: frontend-fwwl6]="(MISSING)"
I0630 13:40:46.780300   57355 event.go:291] "Event occurred" [object namespace-1593524445-18443/frontend kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: frontend-ckxw5]="(MISSING)"
I0630 13:40:46.780367   57355 event.go:291] "Event occurred" [object namespace-1593524445-18443/frontend kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: frontend-vxhfq]="(MISSING)"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-ckxw5 does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-ckxw5 does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"13f3d232-7b75-4157-9eff-e0b5556613b6","resourceVersion":"1152","creationTimestamp":"2020-06-30T13:40:48Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"13f3d232-7b75-4157-9eff-e0b5556613b6","resourceVersion":"1153","creationTimestamp":"2020-06-30T13:40:48Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"13f3d232-7b75-4157-9eff-e0b5556613b6","resourceVersion":"1153","creationTimestamp":"2020-06-30T13:40:48Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"13f3d232-7b75-4157-9eff-e0b5556613b6"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 173 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
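Note: the timeout error above is client-side validation of a timeout flag value, presumably the global --request-timeout given the wording; it must be a bare integer (seconds) or an integer with a unit:
  $ kubectl get pod valid-pod --request-timeout=1m     # accepted
  $ kubectl get pod valid-pod --request-timeout=bogus  # rejected with the message above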
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 244 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0630 13:41:03] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
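Note: strategic merge patch requires schema information that custom resources do not carry, hence the expected --local failure above; JSON merge patch works. Sketch (file name illustrative):
  $ kubectl patch --local -f foo-cr.json -p '{"patched":"value3"}' -o json              # fails for company.com/v1, Kind=Foo
  $ kubectl patch --local -f foo-cr.json -p '{"patched":"value3"}' --type merge -o json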
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 376 lines ...
crd.sh:455: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
I0630 13:41:35.423916   53853 client.go:360] parsed scheme: "passthrough"
I0630 13:41:35.423979   53853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0630 13:41:35.423993   53853 clientconn.go:948] ClientConn switching balancer to "pick_first"
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0630 13:41:38] Testing recursive resources
+++ [0630 13:41:38] Creating namespace namespace-1593524498-23742
namespace/namespace-1593524498-23742 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0630 13:41:38.796870   53853 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0630 13:41:38.798196   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0630 13:41:38.922925   53853 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0630 13:41:38.924013   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0630 13:41:39.054550   53853 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0630 13:41:39.055872   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0630 13:41:39.202657   53853 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0630 13:41:39.203752   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
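Note: the recursive-resource tests that follow repeatedly feed hack/testdata/recursive/..., whose deliberately broken fixture spells the kind field as "ind" (visible in the dumped JSON), hence the recurring Object 'Kind' is missing decode error. The behavior under test is that -R keeps processing the valid siblings (busybox0, busybox1) while still reporting the broken file, e.g.:
  $ kubectl apply -R -f hack/testdata/recursive/pod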
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:39.794513   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E0630 13:41:39.974105   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0630 13:41:40.034489   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1593524498-23742
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:40.723032   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0630 13:41:41.372464   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-5dc775846f to 3]="(MISSING)"
I0630 13:41:41.375223   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx-5dc775846f kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-5dc775846f-4czrv]="(MISSING)"
I0630 13:41:41.379596   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx-5dc775846f kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-5dc775846f-c6jlt]="(MISSING)"
I0630 13:41:41.379995   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx-5dc775846f kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-5dc775846f-gmmqn]="(MISSING)"
... skipping 44 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:42.224444   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0630 13:41:42.448361   57355 namespace_controller.go:185] Namespace has been deleted non-native-resources
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:42.711023   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:43.048929   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0630 13:41:43.695731   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
I0630 13:41:43.954799   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox0 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox0-w9wz5]="(MISSING)"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0630 13:41:43.959696   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox1 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox1-h4dxl]="(MISSING)"
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0630 13:41:46.081959   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox0 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox0-txrvg]="(MISSING)"
I0630 13:41:46.096609   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox1 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox1-jznks]="(MISSING)"
E0630 13:41:46.202679   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0630 13:41:46.528695   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0630 13:41:47.103876   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx1-deployment kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx1-deployment-6977ddb7 to 2]="(MISSING)"
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0630 13:41:47.109703   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx1-deployment-6977ddb7 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx1-deployment-6977ddb7-vz9fb]="(MISSING)"
I0630 13:41:47.112724   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx0-deployment kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx0-deployment-d6f89cc8 to 2]="(MISSING)"
I0630 13:41:47.113225   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx1-deployment-6977ddb7 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx1-deployment-6977ddb7-bk6z2]="(MISSING)"
I0630 13:41:47.117106   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx0-deployment-d6f89cc8 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx0-deployment-d6f89cc8-5r6hr]="(MISSING)"
I0630 13:41:47.120673   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/nginx0-deployment-d6f89cc8 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx0-deployment-d6f89cc8-wwcmw]="(MISSING)"
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
E0630 13:41:47.294561   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0630 13:41:47.758375   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
... skipping 9 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0630 13:41:49.844268   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox0 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox0-vdzrn]="(MISSING)"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0630 13:41:49.851442   57355 event.go:291] "Event occurred" [object namespace-1593524498-23742/busybox1 kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: busybox1-wmtkv]="(MISSING)"
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0630 13:41:51] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
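The two lines above exercise both dry-run modes of kubectl create: --dry-run=client renders the object locally without contacting the server, while --dry-run=server submits it so admission and validation run but nothing is persisted, which is why the lookup that follows still reports NotFound. A sketch:

    kubectl create namespace my-namespace --dry-run=client -o yaml   # created (dry run)
    kubectl create namespace my-namespace --dry-run=server           # created (server dry run)
    kubectl get namespace my-namespace                               # Error from server (NotFound)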
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1459: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
I0630 13:41:53.994108   57355 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0630 13:41:53.994167   57355 shared_informer.go:247] Caches are synced for garbage collector 
E0630 13:41:54.033725   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0630 13:41:54.473211   57355 shared_informer.go:240] Waiting for caches to sync for resource quota
I0630 13:41:54.473256   57355 shared_informer.go:247] Caches are synced for resource quota 
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
E0630 13:41:57.797938   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1468: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1593524271-9188" deleted
... skipping 29 lines ...
namespace "namespace-1593524450-18037" deleted
namespace "namespace-1593524450-4947" deleted
namespace "namespace-1593524452-18611" deleted
namespace "namespace-1593524455-28921" deleted
namespace "namespace-1593524456-29118" deleted
namespace "namespace-1593524498-23742" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1593524271-9188" deleted
... skipping 29 lines ...
namespace "namespace-1593524450-18037" deleted
namespace "namespace-1593524450-4947" deleted
namespace "namespace-1593524452-18611" deleted
namespace "namespace-1593524455-28921" deleted
namespace "namespace-1593524456-29118" deleted
namespace "namespace-1593524498-23742" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1475: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1476: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1480: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1483: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
I0630 13:41:59.126121   57355 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
E0630 13:41:59.157092   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "quotas" deleted
I0630 13:41:59.540428   57355 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1593524498-23742
I0630 13:41:59.544661   57355 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1593524498-23742
E0630 13:42:00.058826   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1495: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1499: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1503: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1507: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1509: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
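The check above pins a kubectl invariant: a resource type may be listed across namespaces, but a single object cannot be fetched by name with --all-namespaces, since the same name may exist in several namespaces at once. A sketch:

    kubectl get pods --all-namespaces             # OK: listing
    kubectl get pod valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pod valid-pod --namespace=other   # OK: name lookup scoped to one namespace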
core.sh:1516: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1520: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 119 lines ...
core.sh:910: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:911: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:920: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
E0630 13:42:15.770204   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0630 13:42:15.950790   57355 namespace_controller.go:185] Namespace has been deleted other
E0630 13:42:17.841416   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0630 13:42:19.470630   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 2 lines ...
namespace/namespace-1593524540-31216 created
Context "test" modified.
+++ [0630 13:42:21] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
(Bconfigmap "test-configmap" deleted
E0630 13:42:21.576564   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
(Bnamespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
(Bcore.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
(Bcore.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
(Bconfigmap/test-configmap created (dry run)
... skipping 16 lines ...
+++ command: run_client_config_tests
+++ [0630 13:42:28] Creating namespace namespace-1593524548-19894
namespace/namespace-1593524548-19894 created
Context "test" modified.
+++ [0630 13:42:28] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
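Each client-config case above pins one failure mode of kubeconfig resolution: a missing file, a missing context, cluster, or user entry, and a config file with an unregistered Config version. A sketch of the kinds of invocations involved (flag values are the test's placeholders):

    kubectl get pods --kubeconfig=missing          # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist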
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 37 lines ...
Labels:         controller-uid=adc37449-dd5c-46ed-a28f-228d0db63b3a
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Start Time:     Tue, 30 Jun 2020 13:42:38 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=adc37449-dd5c-46ed-a28f-228d0db63b3a
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 467 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1020: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1033: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1040: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bcore.sh:1044: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bservice/redis-master created
... skipping 44 lines ...
core.sh:1152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1153: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
Successful
message:kubectl-run
has:kubectl-run
service/exposemetadata exposed
E0630 13:42:56.451984   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1162: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
Successful
message:kubectl-expose
has:kubectl-expose
service "exposemetadata" deleted
service "testmetadata" deleted
... skipping 66 lines ...
 (dry run)
daemonset.apps/bind rolled back (server dry run)
apps.sh:87: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0630 13:43:01.956496   57355 daemon_controller.go:320] namespace-1593524579-362/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1593524579-362", SelfLink:"/apis/apps/v1/namespaces/namespace-1593524579-362/daemonsets/bind", UID:"81c50f22-2fb5-4b1f-8f7f-488afa75a391", ResourceVersion:"2039", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729121379, loc:(*time.Location)(0x6ea2a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1593524579-362\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a99d60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a99e60)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a99ea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a99ec0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a99ee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a99f20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a99f40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00214fb58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004b6230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001a99f80), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00192e380)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00214fbbc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:92: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:93: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
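The revision check above shows that kubectl rollout undo validates --to-revision against the stored history before mutating anything, so an out-of-range revision fails cleanly. A sketch:

    kubectl rollout history daemonset/bind                     # list recorded revisions
    kubectl rollout undo daemonset/bind --to-revision=1000000  # error: unable to find specified revision 1000000 in history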
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0630 13:43:02.496918   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:98: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0630 13:43:02.667730   57355 daemon_controller.go:320] namespace-1593524579-362/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1593524579-362", SelfLink:"/apis/apps/v1/namespaces/namespace-1593524579-362/daemonsets/bind", UID:"81c50f22-2fb5-4b1f-8f7f-488afa75a391", ResourceVersion:"2042", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729121379, loc:(*time.Location)(0x6ea2a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1593524579-362\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019e87a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019e87c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019e87e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019e8800)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019e8820), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019e8840)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019e8880), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a8e458), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004b7c70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0019e88c0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00192e6b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a8e4ac)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:101: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:102: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:103: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1593524583-6742
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1224: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0630 13:43:05.777023   57355 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1593524583-6742 /api/v1/namespaces/namespace-1593524583-6742/replicationcontrollers/frontend aeb86b32-c0fa-43d2-8ecc-ce1559a25cf8 2080 2 2020-06-30 13:43:04 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-06-30 13:43:04 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl-create Update v1 2020-06-30 13:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028b2a88 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0630 13:43:05.782710   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/frontend kind ReplicationController apiVersion v1 type Normal reason SuccessfulDelete message Deleted pod: frontend-5stnh]="(MISSING)"
core.sh:1228: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1232: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1236: Successful get rc frontend {{.spec.replicas}}: 2
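The "Expected replicas to be 3, was 2" error above is the precondition form of kubectl scale: with --current-replicas set, the resize is applied only if the live replica count matches, otherwise nothing changes, which is why the follow-up check still sees 2. A sketch:

    kubectl scale rc frontend --current-replicas=3 --replicas=2   # fails: live count is 2
    kubectl scale rc frontend --current-replicas=2 --replicas=3   # succeeds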
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0630 13:43:06.453471   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/frontend kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: frontend-tdhtj]="(MISSING)"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 15 lines ...
I0630 13:43:07.666967   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/redis-master kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: redis-master-2k59q]="(MISSING)"
I0630 13:43:07.668251   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/redis-master kind ReplicationController apiVersion v1 type Normal reason SuccessfulCreate message Created pod: redis-master-b8z7q]="(MISSING)"
core.sh:1262: Successful get rc redis-master {{.spec.replicas}}: 4
core.sh:1263: Successful get rc redis-slave {{.spec.replicas}}: 4
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
E0630 13:43:08.291750   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0630 13:43:08.303056   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-6c8578df44 to 3]="(MISSING)"
I0630 13:43:08.307267   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-6c8578df44-n9cnm]="(MISSING)"
I0630 13:43:08.311052   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-6c8578df44-66gd9]="(MISSING)"
I0630 13:43:08.311092   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-6c8578df44-zd4wj]="(MISSING)"
deployment.apps/nginx-deployment scaled
... skipping 2 lines ...
I0630 13:43:08.450085   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulDelete message Deleted pod: nginx-deployment-6c8578df44-n9cnm]="(MISSING)"
core.sh:1272: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
(Bdeployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
E0630 13:43:08.864919   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0630 13:43:09.263444   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-6c8578df44 to 3]="(MISSING)"
I0630 13:43:09.266487   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-6c8578df44-fqdm4]="(MISSING)"
I0630 13:43:09.271010   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-6c8578df44 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-6c8578df44-zwhxt]="(MISSING)"
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1391: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1395: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1404: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0630 13:43:15.799634   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-resources-988f5b655 to 3]="(MISSING)"
I0630 13:43:15.802303   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-988f5b655 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-resources-988f5b655-s77kg]="(MISSING)"
I0630 13:43:15.807183   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-988f5b655 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-resources-988f5b655-7lmmz]="(MISSING)"
I0630 13:43:15.807221   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-988f5b655 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-resources-988f5b655-qm7x7]="(MISSING)"
core.sh:1410: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1411: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0630 13:43:16.257116   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-resources-7d47d89d55 to 1]="(MISSING)"
I0630 13:43:16.260987   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-7d47d89d55 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-resources-7d47d89d55-fbh6p]="(MISSING)"
core.sh:1415: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1416: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0630 13:43:16.716883   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled down replica set nginx-deployment-resources-988f5b655 to 2]="(MISSING)"
I0630 13:43:16.725450   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-988f5b655 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulDelete message Deleted pod: nginx-deployment-resources-988f5b655-s77kg]="(MISSING)"
I0630 13:43:16.730058   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-resources-884bc7d65 to 1]="(MISSING)"
I0630 13:43:16.735250   57355 event.go:291] "Event occurred" [object namespace-1593524583-6742/nginx-deployment-resources-884bc7d65 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-resources-884bc7d65-n88vl]="(MISSING)"
core.sh:1421: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 387 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1432: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1434: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=7c69fb5cbf
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=7c69fb5cbf
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 105 lines ...
apps.sh:304: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:308: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:315: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
I0630 13:43:29.548382   57355 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1593524583-6742
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0630 13:43:30.220854   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled down replica set nginx-5dc775846f to 2]="(MISSING)"
I0630 13:43:30.228808   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx-5dc775846f kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulDelete message Deleted pod: nginx-5dc775846f-hfx99]="(MISSING)"
I0630 13:43:30.231146   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-6899f569dc to 1]="(MISSING)"
I0630 13:43:30.236237   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx-6899f569dc kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-6899f569dc-k6h8d]="(MISSING)"
Successful
... skipping 149 lines ...
apps.sh:363: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0630 13:43:33.567058   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx-deployment kind Deployment apiVersion apps/v1 type Normal reason ScalingReplicaSet message Scaled up replica set nginx-deployment-5b75fb96c8 to 1]="(MISSING)"
I0630 13:43:33.572674   57355 event.go:291] "Event occurred" [object namespace-1593524598-6243/nginx-deployment-5b75fb96c8 kind ReplicaSet apiVersion apps/v1 type Normal reason SuccessfulCreate message Created pod: nginx-deployment-5b75fb96c8-k5fgj]="(MISSING)"
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:367: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:372: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:376: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 47 lines ...
deployment.apps/nginx-deployment env updated
I0630 13:43:38.772365   57355 event.go:291] "Event occurred" object="namespace-1593524598-6243/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5bdd466bb to 0"
I0630 13:43:38.791826   57355 event.go:291] "Event occurred" object="namespace-1593524598-6243/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6887bd5699 to 1"
deployment.apps/nginx-deployment env updated
I0630 13:43:38.927168   57355 event.go:291] "Event occurred" object="namespace-1593524598-6243/nginx-deployment-5bdd466bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-5bdd466bb-n7vbp"
deployment.apps/nginx-deployment env updated
E0630 13:43:39.070459   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
configmap "test-set-env-config" deleted
E0630 13:43:39.269656   57355 replica_set.go:532] sync "namespace-1593524598-6243/nginx-deployment-c5c5cb694" failed with replicasets.apps "nginx-deployment-c5c5cb694" not found
E0630 13:43:39.320852   57355 replica_set.go:532] sync "namespace-1593524598-6243/nginx-deployment-6887bd5699" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-6887bd5699": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1593524598-6243/nginx-deployment-6887bd5699, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f45047a5-67c3-4ab2-a75a-65d0a44528f6, UID in object meta: 
I0630 13:43:39.340934   57355 horizontal.go:354] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1593524598-6243
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

E0630 13:43:39.489203   57355 replica_set.go:532] sync "namespace-1593524598-6243/nginx-deployment-5b8bf4dfb7" failed with replicasets.apps "nginx-deployment-5b8bf4dfb7" not found
+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0630 13:43:39] Creating namespace namespace-1593524619-11716
E0630 13:43:39.520026   57355 replica_set.go:532] sync "namespace-1593524598-6243/nginx-deployment-6b6d6866bf" failed with replicasets.apps "nginx-deployment-6b6d6866bf" not found
E0630 13:43:39.570192   57355 replica_set.go:532] sync "namespace-1593524598-6243/nginx-deployment-5bdd466bb" failed with replicasets.apps "nginx-deployment-5bdd466bb" not found
namespace/namespace-1593524619-11716 created
Context "test" modified.
+++ [0630 13:43:39] Testing kubectl(v1:replicasets)
apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0630 13:43:39.965606   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0630 13:43:40.103133   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-59m8q"
I0630 13:43:40.106866   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9gf52"
I0630 13:43:40.108581   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-rqzcv"
+++ [0630 13:43:40] Deleting rs
replicaset.apps "frontend" deleted
E0630 13:43:40.232794   57355 replica_set.go:532] sync "namespace-1593524619-11716/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1593524619-11716/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 4e76575a-fdb5-4be8-99fd-f30c9f70a7e5, UID in object meta: 
apps.sh:546: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:550: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0630 13:43:40.714224   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-r2lgp"
I0630 13:43:40.718082   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-vz9zf"
I0630 13:43:40.718128   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-w7cl9"
apps.sh:554: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0630 13:43:40] Deleting rs
replicaset.apps "frontend" deleted
E0630 13:43:41.019880   57355 replica_set.go:532] sync "namespace-1593524619-11716/frontend" failed with replicasets.apps "frontend" not found
apps.sh:558: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:560: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-r2lgp" deleted
pod "frontend-vz9zf" deleted
pod "frontend-w7cl9" deleted
apps.sh:563: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1593524619-11716
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 182 lines ...
apps.sh:656: Successful get rs frontend {{.metadata.generation}}: 4
replicaset.apps/frontend serviceaccount updated (dry run)
replicaset.apps/frontend serviceaccount updated (server dry run)
apps.sh:659: Successful get rs frontend {{.metadata.generation}}: 4
replicaset.apps/frontend serviceaccount updated
apps.sh:661: Successful get rs frontend {{.metadata.generation}}: 5
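The generation checks around here assert that dry-run writes are not persisted: .metadata.generation stays at 4 through both dry runs and only moves to 5 on the real update. A sketch, assuming the replicaset from the log; the service account name is illustrative:

  kubectl set serviceaccount rs/frontend sa-1 --dry-run=client   # no API write at all
  kubectl set serviceaccount rs/frontend sa-1 --dry-run=server   # validated server-side, not persisted
  kubectl set serviceaccount rs/frontend sa-1                    # persisted; generation is bumped
  kubectl get rs frontend -o jsonpath='{.metadata.generation}'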
E0630 13:43:51.038936   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:kubectl-create kube-controller-manager kubectl-set
has:kubectl-set
apps.sh:669: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
replicaset.apps "frontend" deleted
E0630 13:43:51.282159   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:673: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:677: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0630 13:43:51.728266   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-kzx2g"
I0630 13:43:51.733475   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-rrkb5"
I0630 13:43:51.734908   57355 event.go:291] "Event occurred" object="namespace-1593524619-11716/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-p69t4"
... skipping 20 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:705: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
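The hpa assertion and the Error line above pin down kubectl autoscale's flag handling: --max is mandatory, while --min and --cpu-percent are optional. A sketch, assuming the replicaset from the log:

  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # creates the hpa asserted above (2 3 80)
  kubectl autoscale rs frontend --min=2                            # rejected: required flag(s) "max" not set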
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 61 lines ...
apps.sh:465: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:466: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:469: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:470: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
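Rolling a statefulset back to a revision that was never recorded fails with the lookup error above, while the surrounding assertions show a valid undo swapping the pod template between revisions. A sketch, assuming the statefulset from the log:

  kubectl rollout history statefulset/nginx                      # shows which revisions actually exist
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # fails: revision not found in history
  kubectl rollout undo statefulset/nginx                         # valid: returns to the previous revision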
apps.sh:474: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:475: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:478: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
Name:         mock
Namespace:    namespace-1593524640-18575
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1593524640-18575
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1593524640-18575
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1593524640-18575
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1593524640-18575
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 106 lines ...
+++ [0630 13:44:17] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E0630 13:44:18.070581   57355 pv_protection_controller.go:118] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0630 13:44:18.596375   57355 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
persistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
... skipping 72 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 38 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 29 lines ...
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Tue, 30 Jun 2020 13:37:49 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Tue, 30 Jun 2020 13:37:49 +0000   Tue, 30 Jun 2020 13:38:50 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 128 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
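kubectl auth can-i takes either a resource (optionally narrowed by --subresource) or a non-resource URL, and the error above asserts that the two forms cannot be combined. A sketch:

  kubectl auth can-i get pods --subresource=log    # resource + subresource: accepted
  kubectl auth can-i get /logs                     # non-resource URL: accepted
  kubectl auth can-i get /logs --subresource=log   # rejected, as in the message above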
Successful
Successful
message:yes
0
has:0
Successful
message:0
has:0
Successful
message:yes
has not:Warning
E0630 13:44:26.255137   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Warning: the server doesn't have a resource type 'foo'
yes
has:Warning: the server doesn't have a resource type 'foo'
Successful
message:Warning: the server doesn't have a resource type 'foo'
... skipping 50 lines ...
I0630 13:44:27.523066   53853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0630 13:44:27.523079   53853 clientconn.go:948] ClientConn switching balancer to "pick_first"
legacy-script.sh:833: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:834: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:835: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
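kubectl auth reconcile only accepts rbac.authorization.k8s.io/v1 objects, so a v1beta1 ClusterRole in the input fails as above. A sketch; the file names are illustrative:

  kubectl auth reconcile -f rbac-v1.yaml        # apiVersion: rbac.authorization.k8s.io/v1 is accepted
  kubectl auth reconcile -f rbac-v1beta1.yaml   # rejected: only rbac.authorization.k8s.io/v1 is supported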
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 21 lines ...
I0630 13:44:28.892386   57355 event.go:291] "Event occurred" object="namespace-1593524668-5575/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-xdfhz"
I0630 13:44:28.905752   57355 event.go:291] "Event occurred" object="namespace-1593524668-5575/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-vlvkn"
service/cassandra created
discovery.sh:91: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
I0630 13:44:29.410740   57355 event.go:291] "Event occurred" object="namespace-1593524668-5575/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-jn85c"
pod "cassandra-vlvkn" deleted
I0630 13:44:29.412451   57355 event.go:291] "Event occurred" object="namespace-1593524668-5575/cassandra" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint namespace-1593524668-5575/cassandra: Operation cannot be fulfilled on endpoints \"cassandra\": the object has been modified; please apply your changes to the latest version and try again"
pod "cassandra-xdfhz" deleted
I0630 13:44:29.420911   57355 event.go:291] "Event occurred" object="namespace-1593524668-5575/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-pljpn"
replicationcontroller "cassandra" deleted
E0630 13:44:29.428742   57355 replica_set.go:532] sync "namespace-1593524668-5575/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1593524668-5575/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 5bd85f06-3218-487f-93ca-c4efdb28cde0, UID in object meta: 
service "cassandra" deleted
E0630 13:44:29.450603   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 217 lines ...
+++ Running case: test-cmd.run_kubectl_all_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_all_namespace_tests
+++ [0630 13:44:34] Testing kubectl --all-namespaces
get.sh:342: Successful get namespaces {{range.items}}{{if eq .metadata.name \"default\"}}{{.metadata.name}}:{{end}}{{end}}: default:
get.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
E0630 13:44:34.802962   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
get.sh:350: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
NAMESPACE                   NAME        READY   STATUS    RESTARTS   AGE
namespace-1593524668-5575   valid-pod   0/1     Pending   0          1s
namespace/all-ns-test-1 created
serviceaccount/test created
... skipping 121 lines ...
namespace-1593524657-10304   default   0         18s
namespace-1593524659-19593   default   0         16s
namespace-1593524668-5575    default   0         7s
some-other-random            default   0         8s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
E0630 13:44:38.065655   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
I0630 13:44:45.886987   57355 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:380: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 779 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:145: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:150: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
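The two usage errors above pin down the drain/cordon argument contract: a node name and --selector are mutually exclusive, and cordon requires a node argument. A sketch; the selector value is illustrative:

  kubectl drain 127.0.0.1 --selector=test=label   # rejected: node name and --selector together
  kubectl cordon                                  # rejected: USAGE: cordon NODE
  kubectl cordon 127.0.0.1                        # valid
  kubectl uncordon 127.0.0.1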
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 14 lines ...
+++ [0630 13:45:04] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
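kubectl discovers plugins as executables named kubectl-<name> on PATH; the messages above are its warnings for a plugin that shadows a built-in command, one overshadowed by an earlier PATH entry, and a PATH directory with no plugins at all. A minimal sketch of creating and invoking one; the /tmp path is illustrative:

  mkdir -p /tmp/plugins
  printf '#!/bin/sh\necho "I am plugin foo"\n' > /tmp/plugins/kubectl-foo
  chmod +x /tmp/plugins/kubectl-foo
  PATH=/tmp/plugins:$PATH kubectl plugin list   # discovers kubectl-foo
  PATH=/tmp/plugins:$PATH kubectl foo           # prints "I am plugin foo", as in the output below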
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0630 13:45:04] Testing impersonation
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
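Impersonation flags are checked as a set: asking for groups or user-extra without also impersonating a user is rejected, which is what the message above asserts. A sketch matching the csr checks that follow; the file name is illustrative:

  kubectl get csr --as-group=system:masters   # rejected: groups without --as
  kubectl create -f csr.yaml --as=user1       # accepted; csr.spec.username becomes user1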
Warning: certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
Warning: certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest
... skipping 19 lines ...
deployment.apps/test-1 created
I0630 13:45:06.604637   57355 event.go:291] "Event occurred" object="namespace-1593524706-12089/test-1-8647b7cbc9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-8647b7cbc9-bmbq7"
I0630 13:45:06.694925   57355 event.go:291] "Event occurred" object="namespace-1593524706-12089/test-2" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-549969cddb to 1"
I0630 13:45:06.699292   57355 event.go:291] "Event occurred" object="namespace-1593524706-12089/test-2-549969cddb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-549969cddb-4wp4d"
deployment.apps/test-2 created
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
E0630 13:45:08.877853   57355 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
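kubectl wait blocks until the requested condition holds on every listed object and then prints one "condition met" line per object, as above. A sketch, assuming the deployments from the log; the timeout value is illustrative:

  kubectl wait deployment/test-1 deployment/test-2 --for=condition=Available --timeout=60s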
... skipping 25 lines ...
I0630 13:45:09.234011   53853 naming_controller.go:302] Shutting down NamingConditionController
I0630 13:45:09.234016   53853 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0630 13:45:09.234379   53853 secure_serving.go:231] Stopped listening on 127.0.0.1:6443
I0630 13:45:09.234995   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.234995   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235024   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.235081   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.235138   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235224   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235273   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235287   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.235318   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0630 13:45:09.235435   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.235450   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235478   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.235509   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.235525   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235570   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235601   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235627   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235722   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235820   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235852   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.235909   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.236021   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0630 13:45:09.236025   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0630 13:45:09.236092   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0630 13:45:09.236097   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.236259   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.236317   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.236330   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236371   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236406   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236441   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236487   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.236494   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.236519   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236533   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.236562   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0630 13:45:09.236588   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.236601   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0630 13:45:09.236209   53853 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0630 13:45:09.236686   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236937   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.236998   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.237000   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0630 13:45:09.237010   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
E0630 13:45:09.237082   53853 controller.go:184] rpc error: code = Unavailable desc = transport is closing
I0630 13:45:09.237109   53853 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
junit report dir: /logs/artifacts
+++ [0630 13:45:09] Clean up complete
+ make test-integration
+++ [0630 13:45:14] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
... skipping 2 lines ...
Waiting for etcd to come up.
+++ [0630 13:45:14] On try 2, etcd: : {"health":"true"}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}+++ [0630 13:45:14] Running integration test cases
+++ [0630 13:45:20] Running tests without code coverage
{"Time":"2020-06-30T13:46:44.910312801Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/podlogs","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/podlogs\t7.507s\n"}
{"Time":"2020-06-30T13:46:52.100019275Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:46:52.1805657Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.180573997Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:46:52.214208226Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.214287466Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:46:52.234653033Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): an error on the server (\"unknown\") has prevented the request from succeeding (head nodes node1)\n"}
{"Time":"2020-06-30T13:46:52.234660014Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:46:52.24097943Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.245341496Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:46:52.253369336Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.260176157Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:46:52.261438274Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.264438708Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:46:52.26968038Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.nodes.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:46:52.960733703Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/flowcontrol","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/flowcontrol\t18.215s\n"}
{"Time":"2020-06-30T13:46:53.044380382Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create: admission_test.go:1020: verifying GET\n"}
{"Time":"2020-06-30T13:46:53.051860142Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:46:53.052010683Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create: admission_test.go:1020: verifying POST\n"}
{"Time":"2020-06-30T13:46:53.059096704Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.attach/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:46:53.257672787Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.exec/create: admission_test.go:1020: verifying GET\n"}
... skipping 2 lines ...
{"Time":"2020-06-30T13:46:53.274197604Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.exec/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:46:53.276750441Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create: admission_test.go:1020: verifying GET\n"}
{"Time":"2020-06-30T13:46:53.284803557Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:46:53.284836864Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create: admission_test.go:1020: verifying POST\n"}
{"Time":"2020-06-30T13:46:53.296781347Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.portforward/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:46:53.298097275Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:46:53.314111563Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:53.314160096Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:46:53.31697747Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:53.317032166Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:46:53.321945704Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): the server rejected our request for an unknown reason (head pods pod1)\n"}
{"Time":"2020-06-30T13:46:53.321975089Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:46:53.327671956Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:53.329673839Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:46:53.34618852Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:53.348542656Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:46:53.354757646Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:53.356983097Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:46:53.366323986Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.pods.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:46:54.754281035Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:46:54.763407978Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:54.763439872Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:46:54.768501406Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:54.768531147Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:46:54.774131323Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): the server could not find the requested resource (head services service1)\n"}
{"Time":"2020-06-30T13:46:54.77415888Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:46:54.779735574Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:54.781895305Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:46:54.78745975Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/update","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:54.789647984Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:46:54.796707122Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/patch","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:54.798773533Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:46:54.80469109Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/.v1.services.proxy/delete","Output":"    TestWebhookAdmissionWithWatchCache/.v1.services.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:46:55.208958119Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole2.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:46:55.311133833Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole2.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:46:55.521092387Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:733: waiting for other finalizers on schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} openshiftwebconsoleconfigs.webconsole2.operator.openshift.io to be removed, existing finalizers are [test/k8s.io customresourcecleanup.apiextensions.k8s.io]\n"}
{"Time":"2020-06-30T13:46:55.634603124Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:733: waiting for other finalizers on schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} openshiftwebconsoleconfigs.webconsole2.operator.openshift.io to be removed, existing finalizers are [test/k8s.io customresourcecleanup.apiextensions.k8s.io]\n"}
{"Time":"2020-06-30T13:46:56.038399392Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1beta1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:46:56.147305622Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1beta1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:46:56.287586461Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete: admission_test.go:733: waiting for other finalizers on schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1beta1\", Resource:\"customresourcedefinitions\"} openshiftwebconsoleconfigs.webconsole.operator.openshift.io to be removed, existing finalizers are [test/k8s.io customresourcecleanup.apiextensions.k8s.io]\n"}
{"Time":"2020-06-30T13:47:01.668252491Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"meout.go:228 +0xb2\\nnet/http.Error(0x7f18e1168158, 0xc004591fb0, 0xc00930c5a0, 0x60, 0x1f4)\\n\\t/usr/local/go/src/net/http/server.go:2024 +0x1f4\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.InternalError(0x7f18e1168158, 0xc004591fb0, 0xc00930f600, 0x538e440, 0xc009333360)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/errors.go:75 +0x11a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f18e1168158, 0xc004591fb0, 0xc00930f600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:69 +0x482\\nnet/http.HandlerFunc.ServeHTTP(0xc0059fe440, 0x7f18e1168158, 0xc004591fb0, 0xc00930f600)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f18e1168158, 0xc004591fb0, 0xc00930f6"}
{"Time":"2020-06-30T13:47:01.668262482Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x4ba\\nnet/http.HandlerFunc.ServeHTTP(0xc002e10ed0, 0x7f18e1168158, 0xc004591fb0, 0xc00930f600)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f18e1168158, 0xc004591fb0, 0xc00930f600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x201b\\nnet/http.HandlerFunc.ServeHTTP(0xc0059fe480, 0x7f18e1168158, 0xc004591fb0, 0xc00930f600)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f18e1168158, 0xc004591fb0, 0xc00930f500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x66d\\nnet/http.HandlerFun"}
{"Time":"2020-06-30T13:47:02.864504127Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=_rv=invalid_rvMatch=","Output":"r/pkg/server/filters/timeout.go:228 +0xb2\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc0251f9830, 0x1f4)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:455 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc006a3ea00, 0xc0251d0000, 0xc0, 0x3c79, 0x0, 0x0, 0xc024c64150)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:202 +0x1e6\\nencoding/json.(*Encoder).Encode(0xc024c64170, 0x464aec0, 0xc0251fce60, 0x0, 0x410279)\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1ca\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0xc0000cd220, 0x5312300, 0xc0251fce60, 0x5301360, 0xc006a3ea00, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go"}
{"Time":"2020-06-30T13:47:02.864513975Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=_rv=invalid_rvMatch=","Output":"/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:326 +0x2e4\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc0000cd220, 0x5312300, 0xc0251fce60, 0x5301360, 0xc006a3ea00, 0x3841b90, 0x6)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:300 +0x172\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc0251fcf00, 0x5312300, 0xc0251fce60, 0x5301360, 0xc006a3ea00, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:228 +0x32d\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc0251fcf00, 0x5312300, 0xc0251fce60, 0x5301360, 0xc006a3ea00, 0x53126c0, 0xc0000cd220)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/v"}
{"Time":"2020-06-30T13:47:02.864523509Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=_rv=invalid_rvMatch=","Output":"endor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:184 +0x178\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x470c4a0, 0x10, 0x7fe3f04ec758, 0xc0251fcf00, 0x5366060, 0xc01f23c430, 0xc025602b00, 0x1f4, 0x5312300, 0xc0251fce60)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:96 +0x127\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x5369520, 0xc01f48d5c0, 0x5369860, 0x79e3318, 0x46f006d, 0x4, 0x46ee93e, 0x2, 0x5366060, 0xc01f23c430, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:251 +0x555\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x53056a0, 0xc0251e3860, 0x5369520, 0xc01f48d5c0, 0x46f006d, 0x4, 0x46ee93e, 0x2, 0x536"}
{"Time":"2020-06-30T13:47:02.864544916Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=_rv=invalid_rvMatch=","Output":"(0xc0251f97a0, 0xc010ac0150)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:336 +0x282\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc019afaa20, 0x7fe3f04fcc70, 0xc01f23c420, 0xc025602b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xae8\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4705e0c, 0xe, 0xc019afaa20, 0xc00d5e5f10, 0x7fe3f04fcc70, 0xc01f23c420, 0xc025602b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x51a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/fi"}
{"Time":"2020-06-30T13:47:02.864569062Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=_rv=invalid_rvMatch=","Output":"g/endpoints/filters/impersonation.go:50 +0x201b\\nnet/http.HandlerFunc.ServeHTTP(0xc019b8e6c0, 0x7fe3f04fcc70, 0xc01f23c420, 0xc025602b00)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe3f04fcc70, 0xc01f23c420, 0xc025602a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x66d\\nnet/http.HandlerFunc.ServeHTTP(0xc002c8f9f0, 0x7fe3f04fcc70, 0xc01f23c420, 0xc025602a00)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0256004e0, 0xc019f3bfa0, 0x536fde0, 0xc01f23c420, 0xc025602a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:113 +0xb8\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\"}
... skipping 8 lines ...
{"Time":"2020-06-30T13:47:09.623830366Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"netes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc00c4590b0, 0x1f7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:455 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc00bced5e0, 0xc008f39800, 0xa3, 0x595, 0x0, 0x0, 0xc00b972148)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:202 +0x1e6\\nencoding/json.(*Encoder).Encode(0xc00b972168, 0x46bd7e0, 0xc00d275400, 0x0, 0x410279)\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1ca\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0xc00062a230, 0x539b420, 0xc00d275400, 0x538a4a0, 0xc00bced5e0, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/ru"}
{"Time":"2020-06-30T13:47:09.623853343Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"ntime/serializer/json/json.go:326 +0x2e4\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc00062a230, 0x539b420, 0xc00d275400, 0x538a4a0, 0xc00bced5e0, 0x3896fc9, 0x6)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:300 +0x172\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc00d2754a0, 0x539b420, 0xc00d275400, 0x538a4a0, 0xc00bced5e0, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:228 +0x32d\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc00d2754a0, 0x539b420, 0xc00d275400, 0x538a4a0, 0xc00bced5e0, 0x539b7e0, 0xc00062a230)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/version"}
{"Time":"2020-06-30T13:47:09.623861935Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"ing/versioning.go:184 +0x178\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x4780915, 0x10, 0x7f18e220b488, 0xc00d2754a0, 0x53f0440, 0xc0045914a0, 0xc00bc64b00, 0x1f7, 0x539b420, 0xc00d275400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:96 +0x127\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x53f39c0, 0xc00c972ae0, 0x53f3d00, 0x7b571b0, 0x0, 0x0, 0x4761df0, 0x2, 0x53f0440, 0xc0045914a0, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:251 +0x555\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x5389d00, 0xc00d275360, 0x53f39c0, 0xc00c972ae0, 0x0, 0x0, 0x4761df0, 0x2, 0x53f0440, 0xc0045914a0, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_out"}
{"Time":"2020-06-30T13:47:09.623872271Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"put/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:270 +0x167\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:89\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1.1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:188 +0x251\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.RecordLongRunning(0xc00bc64b00, 0xc0067bfc30, 0x476c21b, 0x9, 0xc00c9eef68)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:296 +0x27f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1(0x53f0440, 0xc0045914a0, 0xc00bc64b00)\\n\\t/home/prow/go/src/k8s.io"}
{"Time":"2020-06-30T13:47:09.623881157Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:185 +0x45a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulConnectResource.func1(0xc00c459020, 0xc006961500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1211 +0x98\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc00c459020, 0xc006961500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:336 +0x282\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc00c974000, 0x7f18e1168158, 0xc004591490, 0xc00bc64b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xae8\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io"}
{"Time":"2020-06-30T13:47:09.623901759Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x4ba\\nnet/http.HandlerFunc.ServeHTTP(0xc00c95fd70, 0x7f18e1168158, 0xc004591490, 0xc00bc64b00)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f18e1168158, 0xc004591490, 0xc00bc64b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x201b\\nnet/http.HandlerFunc.ServeHTTP(0xc00c943200, 0x7f18e1168158, 0xc004591490, 0xc00bc64b00)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f18e1168158, 0xc004591490, 0xc00bc64a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x66d\\nnet/http.HandlerFunc.ServeHTTP(0xc00c931d10, 0x7f18e1168158, 0xc004591490, 0xc00bc64a00"}
{"Time":"2020-06-30T13:47:09.623910789Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":")\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c7d2180, 0xc00c96c680, 0x53facc0, 0xc004591490, 0xc00bc64a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:113 +0xb8\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:99 +0x1c0\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"no endpoints available for service \\\\\\\\\\\\\\\"a\\\\\\\\\\\\\\\"\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"ServiceUnavailable\\\\\\\",\\\\\\\"code\\\\\\\":503}\\\\n\\\"\\n\"\n"}
{"Time":"2020-06-30T13:47:10.604696724Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_exact_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_exact_signerName: testserver.go:200: Waiting for /healthz to be ok...\n"}
{"Time":"2020-06-30T13:47:14.777546981Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apimachinery","Test":"TestWatchRestartsIfTimeoutNotReached/regular_watcher_should_fail","Output":"    TestWatchRestartsIfTimeoutNotReached/regular_watcher_should_fail: watch_restart_test.go:251: Watch duration: 52.816270218s; timeout: 2m0s\n"}
{"Time":"2020-06-30T13:47:17.580477426Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc009136450, 0x1f7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:455 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc008e46f50, 0xc0046ea000, 0xa3, 0x86b, 0x0, 0x0, 0xc00be2a148)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:202 +0x1e6\\nencoding/json.(*Encoder).Encode(0xc00be2a168, 0x46bd7e0, 0xc011c526e0, 0x0, 0x410279)\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1ca\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0xc00062a230, 0x539b420, 0xc011c526e0, 0x538a4a0, 0xc008e46f50, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachiner"}
{"Time":"2020-06-30T13:47:17.580487383Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"y/pkg/runtime/serializer/json/json.go:326 +0x2e4\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc00062a230, 0x539b420, 0xc011c526e0, 0x538a4a0, 0xc008e46f50, 0x3896fc9, 0x6)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:300 +0x172\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc011c52960, 0x539b420, 0xc011c526e0, 0x538a4a0, 0xc008e46f50, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:228 +0x32d\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc011c52960, 0x539b420, 0xc011c526e0, 0x538a4a0, 0xc008e46f50, 0x539b7e0, 0xc00062a230)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer"}
{"Time":"2020-06-30T13:47:17.580496139Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"/versioning/versioning.go:184 +0x178\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x4780915, 0x10, 0x7f18e220b488, 0xc011c52960, 0x53f0440, 0xc0045917e0, 0xc013cdc200, 0x1f7, 0x539b420, 0xc011c526e0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:96 +0x127\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x53f39c0, 0xc014d29440, 0x53f3d00, 0x7b571b0, 0x0, 0x0, 0x4761df0, 0x2, 0x53f0440, 0xc0045917e0, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:251 +0x555\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x5389d00, 0xc011c52640, 0x53f39c0, 0xc014d29440, 0x0, 0x0, 0x4761df0, 0x2, 0x53f0440, 0xc0045917e0, ...)\\n\\t/home/prow/go/src/k8s.io/kuberne"}
{"Time":"2020-06-30T13:47:17.580526786Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"tes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:270 +0x167\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:89\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1.1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:188 +0x251\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.RecordLongRunning(0xc013cdc200, 0xc008b89d90, 0x476c21b, 0x9, 0xc0158c2f68)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:296 +0x27f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1(0x53f0440, 0xc0045917e0, 0xc013cdc200)\\n\\t/home/prow/go/sr"}
{"Time":"2020-06-30T13:47:17.580537139Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"c/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:185 +0x45a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulConnectResource.func1(0xc009136360, 0xc006825420)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1211 +0x98\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc009136360, 0xc006825420)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:336 +0x282\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc014a693b0, 0x7f18e1168158, 0xc0045917d0, 0xc013cdc200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xae8\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/sr"}
{"Time":"2020-06-30T13:47:17.580555038Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x4ba\\nnet/http.HandlerFunc.ServeHTTP(0xc014d2b0b0, 0x7f18e1168158, 0xc0045917d0, 0xc013cdc200)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f18e1168158, 0xc0045917d0, 0xc013cdc200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x201b\\nnet/http.HandlerFunc.ServeHTTP(0xc014d30040, 0x7f18e1168158, 0xc0045917d0, 0xc013cdc200)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f18e1168158, 0xc0045917d0, 0xc013cdc100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x66d\\nnet/http.HandlerFunc.ServeHTTP(0xc014a53950, 0x7f18e1168158, 0xc0045917d0, 0xc0"}
{"Time":"2020-06-30T13:47:17.58056349Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"13cdc100)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008c8ba40, 0xc014d32020, 0x53facc0, 0xc0045917d0, 0xc013cdc100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:113 +0xb8\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:99 +0x1c0\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"no endpoints available for service \\\\\\\\\\\\\\\"a\\\\\\\\\\\\\\\"\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"ServiceUnavailable\\\\\\\",\\\\\\\"code\\\\\\\":503}\\\\n\\\"\\n\"\n"}
{"Time":"2020-06-30T13:47:18.836585594Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName: testserver.go:312: Resolved testserver package path to: \"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing\"\n"}
{"Time":"2020-06-30T13:47:18.836936435Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName: testserver.go:183: runtime-config=map[api/all:true]\n"}
{"Time":"2020-06-30T13:47:18.836957155Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName: testserver.go:184: Starting kube-apiserver on port 43083...\n"}
{"Time":"2020-06-30T13:47:20.426935207Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_admit_when_a_user_has_permission_for_the_wildcard-suffixed_signerName: testserver.go:200: Waiting for /healthz to be ok...\n"}
{"Time":"2020-06-30T13:47:22.674070992Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:47:22.682232798Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:22.682261674Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:47:22.688426725Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:22.688454456Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:47:22.693322104Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): an error on the server (\"unknown\") has prevented the request from succeeding (head nodes node1)\n"}
{"Time":"2020-06-30T13:47:22.69335061Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:47:22.699139518Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:22.702260938Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:47:22.709220166Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:22.710887995Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:47:22.717013235Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:22.718529007Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:47:22.730288223Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.nodes.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): no preferred addresses found; known addresses: []\n"}
{"Time":"2020-06-30T13:47:23.475121623Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create: admission_test.go:1020: verifying GET\n"}
{"Time":"2020-06-30T13:47:23.48146563Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.481492181Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create: admission_test.go:1020: verifying POST\n"}
{"Time":"2020-06-30T13:47:23.486345302Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.attach/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.569864699Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create: admission_test.go:1020: verifying GET\n"}
{"Time":"2020-06-30T13:47:23.576587989Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.57661241Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create: admission_test.go:1020: verifying POST\n"}
{"Time":"2020-06-30T13:47:23.58229472Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.exec/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.584895029Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create: admission_test.go:1020: verifying GET\n"}
{"Time":"2020-06-30T13:47:23.590031044Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.590060301Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create: admission_test.go:1020: verifying POST\n"}
{"Time":"2020-06-30T13:47:23.595140203Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.portforward/create: admission_test.go:1037: debug: result of subresource connect: pod pod1 does not have a host assigned\n"}
{"Time":"2020-06-30T13:47:23.598061616Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:47:23.605153382Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:23.605187147Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:47:23.610932965Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:23.610983522Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:47:23.617033078Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): the server rejected our request for an unknown reason (head pods pod1)\n"}
{"Time":"2020-06-30T13:47:23.617126514Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:47:23.622952131Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:23.624950419Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:47:23.630578278Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:23.632633545Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:47:23.638404109Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:23.640481147Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:47:23.645920694Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.pods.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): address not allowed\n"}
{"Time":"2020-06-30T13:47:24.909995046Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing POST\n"}
{"Time":"2020-06-30T13:47:24.915548543Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:24.915574675Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing GET\n"}
{"Time":"2020-06-30T13:47:24.922103955Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:24.922133632Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing HEAD\n"}
{"Time":"2020-06-30T13:47:24.930822478Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): the server could not find the requested resource (head services service1)\n"}
{"Time":"2020-06-30T13:47:24.930850905Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1123: testing OPTIONS\n"}
{"Time":"2020-06-30T13:47:24.935944375Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/create: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:24.940338544Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/update: admission_test.go:1123: testing PUT\n"}
{"Time":"2020-06-30T13:47:24.945431411Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/update","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/update: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:24.947368001Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/patch: admission_test.go:1123: testing PATCH\n"}
{"Time":"2020-06-30T13:47:24.953236764Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/patch","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/patch: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:24.95510524Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/delete: admission_test.go:1123: testing DELETE\n"}
{"Time":"2020-06-30T13:47:24.96220218Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/.v1.services.proxy/delete: admission_test.go:1136: debug: result of subresource proxy (error expected): services \"service1\" not found\n"}
{"Time":"2020-06-30T13:47:25.3505469Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole2.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:47:25.542145692Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete: admission_test.go:733: waiting for other finalizers on schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1\", Resource:\"customresourcedefinitions\"} openshiftwebconsoleconfigs.webconsole2.operator.openshift.io to be removed, existing finalizers are [test/k8s.io customresourcecleanup.apiextensions.k8s.io]\n"}
{"Time":"2020-06-30T13:47:25.899189991Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete: admission_test.go:687: waiting for schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1beta1\", Resource:\"customresourcedefinitions\"} to be deleted (name: openshiftwebconsoleconfigs.webconsole.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...\n"}
{"Time":"2020-06-30T13:47:26.035877167Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/admissionwebhook","Test":"TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete","Output":"    TestWebhookAdmissionWithoutWatchCache/apiextensions.k8s.io.v1beta1.customresourcedefinitions/delete: admission_test.go:733: waiting for other finalizers on schema.GroupVersionResource{Group:\"apiextensions.k8s.io\", Version:\"v1beta1\", Resource:\"customresourcedefinitions\"} openshiftwebconsoleconfigs.webconsole.operator.openshift.io to be removed, existing finalizers are [test/k8s.io customresourcecleanup.apiextensions.k8s.io]\n"}
{"Time":"2020-06-30T13:47:29.105293527Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_deny_if_a_user_does_not_have_permission_for_the_given_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_deny_if_a_user_does_not_have_permission_for_the_given_signerName: testserver.go:312: Resolved testserver package path to: \"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing\"\n"}
{"Time":"2020-06-30T13:47:29.105827368Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestCSRSignerNameApprovalPlugin/should_deny_if_a_user_does_not_have_permission_for_the_given_signerName","Output":"    TestCSRSignerNameApprovalPlugin/should_deny_if_a_user_does_not_have_permission_for_the_given_signerName: testserver.go:183: runtime-config=map[api/all:true]\n"}
... skipping 79 lines ...
{"Time":"2020-06-30T13:48:58.307942924Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client-kubelet_signerName_that_does_not_match_requirements","Output":"    TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client-kubelet_signerName_that_does_not_match_requirements: testserver.go:200: Waiting for /healthz to be ok...\n"}
{"Time":"2020-06-30T13:49:07.586410486Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/cronjob","Output":"ok  \tk8s.io/kubernetes/test/integration/cronjob\t45.102s\n"}
{"Time":"2020-06-30T13:49:09.759549678Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements","Output":"    TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements: testserver.go:312: Resolved testserver package path to: \"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing\"\n"}
{"Time":"2020-06-30T13:49:09.765711295Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements","Output":"    TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements: testserver.go:183: runtime-config=map[api/all:true]\n"}
{"Time":"2020-06-30T13:49:09.765729995Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements","Output":"    TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements: testserver.go:184: Starting kube-apiserver on port 45675...\n"}
{"Time":"2020-06-30T13:49:12.168218021Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/certificates","Test":"TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements","Output":"    TestController_AutoApproval/should_not_auto-approve_CSR_that_has_kube-apiserver-client_signerName_that_DOES_match_kubelet_CSR_requirements: testserver.go:200: Waiting for /healthz to be ok...\n"}
{"Time":"2020-06-30T13:49:13.946830669Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account","Output":"    TestServiceAccountTokenCreate/bound_to_service_account: svcaccttoken_test.go:203: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:2ef7ebc0-6e2c-4434-b507-f908dc445cda Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:13.959033034Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account","Output":"    TestServiceAccountTokenCreate/bound_to_service_account: svcaccttoken_test.go:208: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, serviceaccounts \"test-svcacct\" not found]}\n"}
{"Time":"2020-06-30T13:49:13.992408462Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account_and_pod","Output":"    TestServiceAccountTokenCreate/bound_to_service_account_and_pod: svcaccttoken_test.go:259: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:bfb391c8-7daa-40c7-94da-f159b88ac529 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[authentication.kubernetes.io/pod-name:[test-pod] authentication.kubernetes.io/pod-uid:[8d960597-2d43-4301-9bfc-a3164f85d4d6]]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.014712016Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account_and_pod","Output":"    TestServiceAccountTokenCreate/bound_to_service_account_and_pod: svcaccttoken_test.go:270: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.046059609Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account_and_secret","Output":"    TestServiceAccountTokenCreate/bound_to_service_account_and_secret: svcaccttoken_test.go:322: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:f4cf047e-623d-41e7-802d-07de769cfa5b Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.057348959Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/bound_to_service_account_and_secret","Output":"    TestServiceAccountTokenCreate/bound_to_service_account_and_secret: svcaccttoken_test.go:324: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.094397231Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/expired_token","Output":"    TestServiceAccountTokenCreate/expired_token: svcaccttoken_test.go:364: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:fcbf9b6b-c050-47bc-9caf-9616f3a972a8 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.095788522Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/expired_token","Output":"    TestServiceAccountTokenCreate/expired_token: svcaccttoken_test.go:385: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has expired.]}\n"}
{"Time":"2020-06-30T13:49:14.113882855Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/expiration_extended_token","Output":"    TestServiceAccountTokenCreate/expiration_extended_token: svcaccttoken_test.go:413: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:e0efa367-2551-4d3a-874f-05ccfd16fffc Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[authentication.kubernetes.io/pod-name:[test-pod] authentication.kubernetes.io/pod-uid:[2c6c4c8d-45ac-427d-81a5-5a05555edbc3]]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.132246301Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_without_an_api_audience_is_invalid","Output":"    TestServiceAccountTokenCreate/a_token_without_an_api_audience_is_invalid: svcaccttoken_test.go:457: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, token audiences [\"not-the-api\"] is invalid for the target audiences [\"api\"]]}\n"}
{"Time":"2020-06-30T13:49:14.146992253Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_tokenrequest_without_an_audience_is_valid_against_the_api","Output":"    TestServiceAccountTokenCreate/a_tokenrequest_without_an_audience_is_valid_against_the_api: svcaccttoken_test.go:475: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:b40c49df-a9de-4b41-a58d-4e2239aad136 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.165326431Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod: svcaccttoken_test.go:507: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:31e1324f-f76a-46e0-9557-420ad7301263 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[authentication.kubernetes.io/pod-name:[test-pod] authentication.kubernetes.io/pod-uid:[91a638a5-e379-42bd-9d96-ccd22dbec486]]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.186257478Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod: svcaccttoken_test.go:509: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.19316238Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_pod: svcaccttoken_test.go:514: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Pod UID (b3486d5f-86d5-4112-8d51-221d0ccefcf4) does not match claim (91a638a5-e379-42bd-9d96-ccd22dbec486)]}\n"}
{"Time":"2020-06-30T13:49:14.219868054Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret: svcaccttoken_test.go:548: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:5a44ec16-bd20-4db3-aca3-9973903d9b4a Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.228769794Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret: svcaccttoken_test.go:550: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.24847894Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret","Output":"    TestServiceAccountTokenCreate/a_token_should_be_invalid_after_recreating_same_name_secret: svcaccttoken_test.go:555: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Secret UID (b62e84fb-d266-42a8-819b-55afccf8f316) does not match claim (ed9bab03-a09f-42f8-898e-7c997179a485)]}\n"}
{"Time":"2020-06-30T13:49:14.274497315Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_within_expiration_time","Output":"    TestServiceAccountTokenCreate/a_token_request_within_expiration_time: svcaccttoken_test.go:592: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:237d5d73-7fe0-46b2-a548-26539a343454 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.283758651Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_within_expiration_time","Output":"    TestServiceAccountTokenCreate/a_token_request_within_expiration_time: svcaccttoken_test.go:594: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.294657295Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_within_expiration_time","Output":"    TestServiceAccountTokenCreate/a_token_request_within_expiration_time: svcaccttoken_test.go:599: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Secret UID (b3f50589-d5b5-4845-b0de-e66003a633ce) does not match claim (e4702d93-9122-4400-9a6d-eb650afd208e)]}\n"}
{"Time":"2020-06-30T13:49:14.448650109Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration","Output":"    TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration: svcaccttoken_test.go:636: status: {Authenticated:true User:{Username:system:serviceaccount:myns:test-svcacct UID:5625790c-3cea-4692-85e9-215c63570985 Groups:[system:serviceaccounts system:serviceaccounts:myns] Extra:map[]} Audiences:[api] Error:}\n"}
{"Time":"2020-06-30T13:49:14.547192647Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration","Output":"    TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration: svcaccttoken_test.go:638: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Token has been invalidated]}\n"}
{"Time":"2020-06-30T13:49:14.627245178Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration","Output":"    TestServiceAccountTokenCreate/a_token_request_with_out-of-range_expiration: svcaccttoken_test.go:643: status: {Authenticated:false User:{Username: UID: Groups:[] Extra:map[]} Audiences:[] Error:[invalid bearer token, Secret UID (c9553305-d0cf-4a29-9254-29080afea19f) does not match claim (c43e589a-d185-4477-9a52-949dbd76a48f)]}\n"}
{"Time":"2020-06-30T13:49:14.688978061Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"    TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata: svcaccttoken_test.go:650: get token\n"}
{"Time":"2020-06-30T13:49:14.7145588Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"    TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata: svcaccttoken_test.go:667: get discovery doc\n"}
{"Time":"2020-06-30T13:49:14.71460359Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"    TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata: svcaccttoken_test.go:698: raw discovery doc response:\n"}
{"Time":"2020-06-30T13:49:14.714610207Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"        ---{\"issuer\":\"https://foo.bar.example.com\",\"jwks_uri\":\"https://192.168.10.4:443/openid/v1/jwks\",\"response_types_supported\":[\"id_token\"],\"subject_types_supported\":[\"public\"],\"id_token_signing_alg_values_supported\":[\"ES256\"]}\n"}
{"Time":"2020-06-30T13:49:14.714624864Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"    TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata: svcaccttoken_test.go:729: get jwks from http://127.0.0.1:45105/openid/v1/jwks\n"}
{"Time":"2020-06-30T13:49:14.71463837Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata","Output":"    TestServiceAccountTokenCreate/a_token_is_valid_against_the_HTTP-provided_service_account_issuer_metadata: svcaccttoken_test.go:754: raw JWKS: \n"}
... skipping 574 lines ...
{"Time":"2020-06-30T13:54:58.432085966Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volume","Test":"TestPersistentVolumeBindRace","Output":"507b0, 0xc004961260)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:336 +0x282\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc01c1139e0, 0x7f2e7832b548, 0xc01a7e00c0, 0xc01c049800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xae8\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x446a8ec, 0xe, 0xc01c1139e0, 0xc01c1079d0, 0x7f2e7832b548, 0xc01a7e00c0, 0xc01c049800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x51a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.Wi"}
{"Time":"2020-06-30T13:54:58.432216936Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volume","Test":"TestPersistentVolumeBindRace","Output":"nts/filters/impersonation.go:50 +0x201b\\nnet/http.HandlerFunc.ServeHTTP(0xc01c38ac00, 0x7f2e7832b548, 0xc01a7e00c0, 0xc01c049800)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2e7832b548, 0xc01a7e00c0, 0xc01c049700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x66d\\nnet/http.HandlerFunc.ServeHTTP(0xc01c382d20, 0x7f2e7832b548, 0xc01a7e00c0, 0xc01c049700)\\n\\t/usr/local/go/src/net/http/server.go:2012 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01ae45b60, 0xc01c399c40, 0x5054680, 0xc01a7e00c0, 0xc01c049700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:113 +0xb8\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/p"}
{"Time":"2020-06-30T13:55:41.640168578Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volume","Output":"ok  \tk8s.io/kubernetes/test/integration/volume\t98.853s\n"}
{"Time":"2020-06-30T13:56:30.769229705Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Output":"ok  \tk8s.io/kubernetes/test/integration/volumescheduling\t145.085s\n"}
{"Time":"2020-06-30T13:57:27.584939262Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/no_top-level_type","Output":"    TestNonStructuralSchemaCondition/no_top-level_type: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.type: Required value: must not be empty at the root, spec.versions[1].schema.openAPIV3Schema.type: Required value: must not be empty at the root]\"\n"}
{"Time":"2020-06-30T13:57:27.695731948Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/non-object_top-level_type","Output":"    TestNonStructuralSchemaCondition/non-object_top-level_type: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.type: Invalid value: \\\"integer\\\": must be object at the root, spec.versions[1].schema.openAPIV3Schema.type: Invalid value: \\\"integer\\\": must be object at the root]\"\n"}
{"Time":"2020-06-30T13:57:27.807279454Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/forbidden_in_nested_value_validation","Output":"    TestNonStructuralSchemaCondition/forbidden_in_nested_value_validation: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.allOf[0].properties[foo].additionalProperties: Forbidden: must be undefined to be structural, spec.versions[0].schema.openAPIV3Schema.allOf[0].properties[foo].description: Forbidden: must be empty to be structural, spec.versions[0].schema.openAPIV3Schema.allOf[0].properties[foo].nullable: Forbidden: must be false to be structural, spec.versions[0].schema.openAPIV3Schema.allOf[0].properties[foo].title: Forbidden: must be empty to be structural, spec.versions[0].schema.openAPIV3Schema.allOf[0].properties[foo].type: Forbidden: must be empty to be structural, spec.versions[0].schema.openAPIV3Schema.anyOf[0].items.additionalProperties: Forbidden: must be undefined to be structural, spec.versions[0].schema.openAPIV3Schema.anyOf[0].items.description: Forbidden: must be empty to be structural, spec.versions[0].schema.openAPIV3Schema.anyOf[0].items.nullable: Forb"}
{"Time":"2020-06-30T13:57:27.91506474Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/invalid_regex_pattern","Output":"    TestNonStructuralSchemaCondition/invalid_regex_pattern: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.properties[foo].pattern: Invalid value: \\\"+\\\": must be a valid regular expression, but isn't: error parsing regexp: missing argument to repetition operator: `+`, spec.versions[1].schema.openAPIV3Schema.properties[foo].pattern: Invalid value: \\\"+\\\": must be a valid regular expression, but isn't: error parsing regexp: missing argument to repetition operator: `+`]\"\n"}
{"Time":"2020-06-30T13:57:28.031391193Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/missing_types_without_extensions","Output":"    TestNonStructuralSchemaCondition/missing_types_without_extensions: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.properties[abc].additionalProperties.properties[a].items.additionalProperties.type: Required value: must not be empty for specified object fields, spec.versions[0].schema.openAPIV3Schema.properties[abc].additionalProperties.properties[a].items.type: Required value: must not be empty for specified array items, spec.versions[0].schema.openAPIV3Schema.properties[abc].additionalProperties.properties[a].type: Required value: must not be empty for specified object fields, spec.versions[0].schema.openAPIV3Schema.properties[abc].additionalProperties.type: Required value: must not be empty for specified object fields, spec.versions[0].schema.openAPIV3Schema.properties[abc].type: Required value: must not be empty for specified object fields, spec.versions[0].schema.openAPIV3Schema.properties[bar].items.additionalProperties.items.type: Required value: must not be empty"}
{"Time":"2020-06-30T13:57:28.14146452Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/forbidden_additionalProperties_at_the_root","Output":"    TestNonStructuralSchemaCondition/forbidden_additionalProperties_at_the_root: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.additionalProperties: Forbidden: must not be used at the root, spec.versions[1].schema.openAPIV3Schema.additionalProperties: Forbidden: must not be used at the root]\"\n"}
{"Time":"2020-06-30T13:57:28.253859465Z","Action":"output","Package":"k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration","Test":"TestNonStructuralSchemaCondition/structural_incomplete","Output":"    TestNonStructuralSchemaCondition/structural_incomplete: validation_test.go:1679: Got violations: \"[spec.versions[0].schema.openAPIV3Schema.properties[a]: Required value: because it is defined in spec.versions[0].schema.openAPIV3Schema.not.properties[a], spec.versions[0].schema.openAPIV3Schema.properties[b].properties[a]: Required value: because it is defined in spec.versions[0].schema.openAPIV3Schema.not.properties[b].not.properties[a], spec.versions[0].schema.openAPIV3Schema.properties[b].properties[b].items: Required value: because it is defined in spec.versions[0].schema.openAPIV3Schema.not.properties[b].not.properties[b].items, spec.versions[0].schema.openAPIV3Schema.properties[b].properties[b].items: Required value: must be specified, spec.versions[0].schema.openAPIV3Schema.properties[c].items.items: Required value: because it is defined in spec.versions[0].schema.openAPIV3Schema.not.properties[c].items.not.items, spec.versions[0].schema.openAPIV3Schema.properties[d].items: Required value: because it"}
{"Time":"2020-06-30T13:57:28.871487429Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/scheduler","Output":"ok  \tk8s.io/kubernetes/test/integration/scheduler\t295.320s\n"}
{"Time":"2020-06-30T13:57:32.156006963Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/master","Test":"TestObjectSizeResponses/2_MB_labels","Output":"t.go:228 +0xb2\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc00d491da0, 0x1f4)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:455 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc1536dac30, 0xc11e456000, 0xc4, 0x47998e, 0x0, 0x0, 0xc052bc7dd0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:202 +0x1e6\\nencoding/json.(*Encoder).Encode(0xc052bc7df0, 0x45e54a0, 0xc0c0722a00, 0x0, 0x410279)\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1ca\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0xc00052e0a0, 0x5299ec0, 0xc0c0722a00, 0x52894e0, 0xc1536dac30, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/ve"}
{"Time":"2020-06-30T13:57:32.156021227Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/master","Test":"TestObjectSizeResponses/2_MB_labels","Output":"ndor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:326 +0x2e4\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc00052e0a0, 0x5299ec0, 0xc0c0722a00, 0x52894e0, 0xc1536dac30, 0x37ec6b6, 0x6)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:300 +0x172\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc0c0722aa0, 0x5299ec0, 0xc0c0722a00, 0x52894e0, 0xc1536dac30, 0x0, 0x0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:228 +0x32d\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc0c0722aa0, 0x5299ec0, 0xc0c0722a00, 0x52894e0, 0xc1536dac30, 0x529a280, 0xc00052e0a0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery"}
... skipping 95 lines ...
    TestUpdateNodeObjects: synthetic_master_test.go:722: UPDATE_NODE_APISERVER is not set

=== SKIP: test/integration/scheduler_perf TestSchedule100Node3KPods (0.00s)
    TestSchedule100Node3KPods: scheduler_test.go:73: Skipping because we want to run short tests


=== Failed
=== FAIL: test/integration/client TestCertRotationContinuousRequests (56.80s)
I0630 13:48:03.296445  108287 controller.go:123] Shutting down OpenAPI controller
I0630 13:48:03.296462  108287 establishing_controller.go:87] Shutting down EstablishingController
I0630 13:48:03.296486  108287 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0630 13:48:03.296532  108287 dynamic_cafile_content.go:182] Shutting down request-header::/tmp/kubernetes-kube-apiserver344466617/proxy-ca.crt
I0630 13:48:03.296545  108287 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/tmp/ca.crt
I0630 13:48:03.296566  108287 controller.go:87] Shutting down OpenAPI AggregationController
... skipping 254 lines ...
I0630 13:48:15.422786  108287 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0630 13:48:15.422809  108287 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
W0630 13:48:15.456612  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:15.458087  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:25.446004  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:25.447458  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
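
The repeating lease warning and validation error form a loop: the lease controller keeps writing the test apiserver's loopback address into the kubernetes endpoints, and endpoint validation keeps rejecting it, because loopback IPs are not legal endpoint addresses. The rejected condition is essentially net.IP.IsLoopback:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip := net.ParseIP("127.0.0.1")
        fmt.Println(ip.IsLoopback()) // true: inside 127.0.0.0/8, so rejected
    }
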
E0630 13:48:30.081983  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.087650  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.100601  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.121818  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.162379  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.242565  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.402701  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:30.722859  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
E0630 13:48:31.081057  108287 cert_rotation.go:168] key failed with : tls: private key does not match public key
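
The run of cert_rotation errors above is a retry loop whose spacing roughly doubles each attempt (about 6ms, 13ms, 21ms, 40ms, 80ms, 160ms, 320ms apart), consistent with exponential backoff while the rotated key and certificate disagree. The message itself is what crypto/tls returns for a mismatched pair; a self-contained reproduction:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/tls"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // Two independent keypairs; self-sign a certificate with the first.
        k1, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        k2, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "test"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(time.Hour),
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &k1.PublicKey, k1)
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})

        // Serialize the *wrong* key (k2) next to k1's certificate.
        keyDER, _ := x509.MarshalECPrivateKey(k2)
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})

        _, err := tls.X509KeyPair(certPEM, keyPEM)
        fmt.Println(err) // tls: private key does not match public key
    }
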
W0630 13:48:35.446106  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:35.447438  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:45.444955  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:45.446502  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0630 13:48:55.450338  108287 lease.go:229] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0630 13:48:55.451879  108287 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
    TestCertRotationContinuousRequests: cert_rotation_test.go:169: Get "https://127.0.0.1:46379/api/v1/namespaces/default/serviceaccounts": context canceled
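
This line is the test's actual failure: a request to the apiserver aborted with "context canceled", which is how net/http (and therefore client-go) reports a request whose context is canceled before it completes. The error shape reproduces with any address once the context is canceled; the URL below just echoes the log:

    package main

    import (
        "context"
        "fmt"
        "net/http"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel() // canceled before the request is even issued

        req, _ := http.NewRequestWithContext(ctx, "GET",
            "https://127.0.0.1:46379/api/v1/namespaces/default/serviceaccounts", nil)
        _, err := http.DefaultClient.Do(req)
        fmt.Println(err) // Get "https://127.0.0.1:46379/...": context canceled
    }
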
W0630 13:49:00.083537  108287 cacher.go:148] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
E0630 13:49:00.083945  108287 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
W0630 13:49:00.084189  108287 cacher.go:148] Terminating all watchers from cacher *core.LimitRange
W0630 13:49:00.084339  108287 cacher.go:148] Terminating all watchers from cacher *core.ResourceQuota
W0630 13:49:00.084390  108287 cacher.go:148] Terminating all watchers from cacher *core.Secret
W0630 13:49:00.084624  108287 cacher.go:148] Terminating all watchers from cacher *core.ConfigMap
W0630 13:49:00.084713  108287 cacher.go:148] Terminating all watchers from cacher *core.Namespace
W0630 13:49:00.084830  108287 cacher.go:148] Terminating all watchers from cacher *core.Endpoints
... skipping 18 lines ...
I0630 13:49:00.093376  108287 autoregister_controller.go:165] Shutting down autoregister controller
I0630 13:49:00.093398  108287 establishing_controller.go:87] Shutting down EstablishingController


DONE 2793 tests, 6 skipped, 1 failure in 12.413s
+++ [0630 13:58:15] Saved JUnit XML test report to /logs/artifacts/junit_20200630-134520.xml
make[1]: *** [Makefile:185: test] Error 1
!!! [0630 13:58:15] Call tree:
!!! [0630 13:58:15]  1: hack/make-rules/test-integration.sh:97 runTests(...)
+++ [0630 13:58:15] Cleaning up etcd
+++ [0630 13:58:16] Integration test cleanup complete
make: *** [Makefile:204: test-integration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...