Result: FAILURE
Tests: 18 failed / 759 succeeded
Started: 2022-07-12 11:25
Elapsed: 45m25s
Revision: master

Test Failures


Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.17\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 12 11:51:17.648: creating a new flunders resource
Unexpected error:
    <*errors.StatusError | 0xc002711d60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "flunders.wardle.example.com \"rest-flunder-1191427857\" is forbidden: not yet ready to handle request",
            Reason: "Forbidden",
            Details: {
                Name: "rest-flunder-1191427857",
                Group: "wardle.example.com",
                Kind: "flunders",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    flunders.wardle.example.com "rest-flunder-1191427857" is forbidden: not yet ready to handle request
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:415
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] 1m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sServiceAccounts\sServiceAccountIssuerDiscovery\sshould\ssupport\sOIDC\sdiscovery\sof\sservice\saccount\sissuer\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 12 11:48:52.246: Unexpected error:
    <*errors.errorString | 0xc000293b50>: {
        s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:47:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:48:18 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:48:18 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:47:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.40 PodIP:100.96.1.56 PodIPs:[{IP:100.96.1.56}] StartTime:2022-07-12 11:47:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-07-12 11:47:53 +0000 UTC,FinishedAt:2022-07-12 11:48:18 +0000 UTC,ContainerID:docker://7c9b43ef939b38368362e461c4c608de0edc5e7fa291703a6637835ce51d15e9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.39 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e ContainerID:docker://7c9b43ef939b38368362e461c4c608de0edc5e7fa291703a6637835ce51d15e9 Started:0xc003c6c7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:47:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:48:18 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:48:18 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-12 11:47:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.40 PodIP:100.96.1.56 PodIPs:[{IP:100.96.1.56}] StartTime:2022-07-12 11:47:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-07-12 11:47:53 +0000 UTC,FinishedAt:2022-07-12 11:48:18 +0000 UTC,ContainerID:docker://7c9b43ef939b38368362e461c4c608de0edc5e7fa291703a6637835ce51d15e9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.39 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e ContainerID:docker://7c9b43ef939b38368362e461c4c608de0edc5e7fa291703a6637835ce51d15e9 Started:0xc003c6c7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789
stdout/stderr from junit_17.xml



Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config 2m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:646
Jul 12 11:46:27.437: Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kops/_rundir/241b06ff-01d5-11ed-a50b-a6ae044f977a/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-u2004-k22-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2614 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:\nCommand stdout:\nI0712 11:45:17.336401     197 merged_client_builder.go:163] Using in-cluster namespace\nI0712 11:45:32.341290     197 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15004 milliseconds\nI0712 11:45:32.341358     197 cached_discovery.go:121] skipped caching discovery info due to Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:37325->100.64.0.10:53: i/o timeout\nI0712 11:45:47.344023     197 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15002 milliseconds\nI0712 11:45:47.344093     197 cached_discovery.go:121] skipped caching discovery info due to Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:52622->100.64.0.10:53: i/o timeout\nI0712 11:45:47.344106     197 shortcut.go:89] Error loading discovery information: Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:52622->100.64.0.10:53: i/o timeout\nI0712 11:46:02.352204     197 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15007 milliseconds\nI0712 11:46:02.352272     197 cached_discovery.go:121] skipped caching discovery info due to Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:32907->100.64.0.10:53: i/o timeout\nI0712 11:46:17.355208     197 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15002 milliseconds\nI0712 11:46:17.355392     197 cached_discovery.go:121] skipped caching discovery info due to Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 
100.96.4.6:48026->100.64.0.10:53: i/o timeout\nI0712 11:46:27.357430     197 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 10001 milliseconds\nI0712 11:46:27.357491     197 cached_discovery.go:121] skipped caching discovery info due to Get \"http://invalid/api?timeout=32s\": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:60040->100.64.0.10:53: read: connection refused\nI0712 11:46:27.357545     197 helpers.go:235] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.4.6:60040->100.64.0.10:53: read: connection refused\nF0712 11:46:27.357574     197 helpers.go:116] The connection to the server invalid was refused - did you specify the right host or port?\ngoroutine 1 [running]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0007d2200, 0x89, 0x1e2)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30d3380, 0xc000000003, 0x0, 0x0, 0xc000196070, 0x2, 0x27f46b8, 0xa, 0x74, 0x40e300)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30d3380, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00056c460, 0x1, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000480c0, 0x5a, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x226a860, 0xc00075bec0, 
0x20ec0f0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116\nk8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0003da000, 0xc0007c6090, 0x1, 0x3)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x159\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0003da000, 0xc0007c6060, 0x3, 0x3, 0xc0003da000, 0xc0007c6060)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000796780, 0xc000132120, 0xc000100050, 0x5)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375\nk8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897\nmain.main()\n\t_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x1f7\n\ngoroutine 18 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x30d3380)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b\ncreated by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf\n\ngoroutine 6 [select]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x20ebff8, 0x2268d20, 0xc0007c6000, 0x1, 
0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x20ebff8, 0x12a05f200, 0x0, 0x1, 0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x20ebff8, 0x12a05f200, 0xc000102b40)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96\n\nstderr:\n+ /tmp/kubectl get pods '--server=invalid' '--v=6'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Unable to connect to the server
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:757
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] 2m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\schange\sthe\stype\sfrom\sExternalName\sto\sClusterIP\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 12 11:52:32.316: Unexpected error:
    <*errors.errorString | 0xc001b202e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1336
stdout/stderr from junit_25.xml



Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols 2m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\supdate\sservice\stype\sto\sNodePort\slistening\son\ssame\sport\snumber\sbut\sdifferent\sprotocols$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1235
Jul 12 11:48:11.104: Unexpected error:
    <*errors.errorString | 0xc0006b8370>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1269
stdout/stderr from junit_24.xml



Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] 2m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\shave\ssession\saffinity\swork\sfor\sNodePort\sservice\s\[LinuxOnly\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 12 11:48:26.149: Unexpected error:
    <*errors.errorString | 0xc002ec20f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
stdout/stderr from junit_15.xml



Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance] 2m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sserve\sa\sbasic\sendpoint\sfrom\spods\s\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 12 11:49:31.447: Unexpected error:
    <*errors.errorString | 0xc0026e03e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:818
stdout/stderr from junit_09.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works 5m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\saws\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\(allowExpansion\)\]\svolume\-expand\sVerify\sif\soffline\sPVC\sexpansion\sworks$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
Jul 12 11:50:45.408: While creating pods for resizing
Unexpected error:
    <*errors.errorString | 0xc001d69590>: {
        s: "pod \"pod-128e0776-d6d3-4266-bdb7-b447c068fe7c\" is not Running: timed out waiting for the condition",
    }
    pod "pod-128e0776-d6d3-4266-bdb7-b447c068fe7c" is not Running: timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:192
stdout/stderr from junit_10.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents 5m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\saws\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(Always\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\snew\spod\sfsgroup\sapplied\sto\svolume\scontents$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
Jul 12 11:49:44.441: Unexpected error:
    <*errors.errorString | 0xc002f9d720>: {
        s: "pod \"pod-30591be9-8100-4e08-968f-9bdb2128ca73\" is not Running: timed out waiting for the condition",
    }
    pod "pod-30591be9-8100-4e08-968f-9bdb2128ca73" is not Running: timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250
stdout/stderr from junit_22.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents 5m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\saws\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(OnRootMismatch\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\snew\spod\sfsgroup\sapplied\sto\svolume\scontents$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
Jul 12 11:51:34.906: Unexpected error:
    <*errors.errorString | 0xc003b60bc0>: {
        s: "pod \"pod-fb6181a4-7c1e-41a5-b3e2-913881e8ebf6\" is not Running: timed out waiting for the condition",
    }
    pod "pod-fb6181a4-7c1e-41a5-b3e2-913881e8ebf6" is not Running: timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250
stdout/stderr from junit_01.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory 5m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\saws\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sexisting\sdirectory$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 12 11:49:19.765: Unexpected error:
    <*errors.errorString | 0xc003950560>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-d77z\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-d77z\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-d77z" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-d77z" to be "Succeeded or Failed"
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742