Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-14 08:50
Elapsed: 1h15m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 47 lines ...
go: downloading golang.org/x/sys v0.0.0-20200803210538-64077c9b5642
go: downloading gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7
go: downloading github.com/fsnotify/fsnotify v1.4.9
go: github.com/onsi/ginkgo upgrade => v1.14.2
[dev-env] building image
removing old image ingress-controller/controller:1.0.0-dev
Error: No such image: ingress-controller/controller:1.0.0-dev
go: downloading k8s.io/apimachinery v0.19.2
go: downloading github.com/prometheus/client_golang v1.7.1
go: downloading k8s.io/kubernetes v1.19.2
go: downloading k8s.io/client-go v0.19.2
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/armon/go-proxyproto v0.0.0-20200108142055-f0b8253b1507
... skipping 206 lines ...
Removing intermediate container 5e9032af9af3
 ---> a6dc1817c1fb
Step 24/27 : USER www-data
 ---> Running in 7d2eac18e971
Removing intermediate container 7d2eac18e971
 ---> 9905b89b7de2
Step 25/27 : RUN  ln -sf /dev/stdout /var/log/nginx/access.log   && ln -sf /dev/stderr /var/log/nginx/error.log
 ---> Running in 2a81a82fab5a
Removing intermediate container 2a81a82fab5a
 ---> 84a0381f080f
Step 26/27 : ENTRYPOINT ["/usr/bin/dumb-init", "--"]
 ---> Running in 97cdd7b90ae8
Removing intermediate container 97cdd7b90ae8
... skipping 225 lines ...

• Failure in Spec Setup (BeforeEach) [317.099 seconds]
[Setting] [Security] no-auth-locations [BeforeEach] should return status code 200 when accessing '/noauth' unauthenticated 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/no_auth_locations.go:83

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-no-auth-locations-1602667633233127721-wmsh7"}
  	Test:       	[Setting] [Security] no-auth-locations should return status code 200 when accessing '/noauth' unauthenticated
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [318.188 seconds]
[Service] backend status code 503 [BeforeEach] should return 503 when all backend service endpoints are unavailable 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/servicebackend/service_backend.go:53

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-service-backend-1602667632951746120-gddxh"}
  	Test:       	[Service] backend status code 503 should return 503 when all backend service endpoints are unavailable
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [321.221 seconds]
[Setting] enable-multi-accept [BeforeEach] should be disabled when set to false 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/multi_accept.go:49

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-enable-multi-accept-1602667633191243254-bwtsz"}
  	Test:       	[Setting] enable-multi-accept should be disabled when set to false
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [322.228 seconds]
[Setting] hash size [BeforeEach] Check the variable hash size should set variables-hash-bucket-size 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/hash-size.go:80

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-hash-size-1602667632785028488-42fx2"}
  	Test:       	[Setting] hash size Check the variable hash size should set variables-hash-bucket-size
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [322.917 seconds]
[Annotations] proxy-ssl-* [BeforeEach] proxy-ssl-location-only flag should change the nginx config server part 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/proxyssl.go:150

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-proxyssl-1602667632474494751-v6jq9"}
  	Test:       	[Annotations] proxy-ssl-* proxy-ssl-location-only flag should change the nginx config server part
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [323.234 seconds]
[Default Backend] [BeforeEach] disables access logging for default backend 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/defaultbackend/default_backend.go:106

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-default-backend-1602667632790496823-wpzqk"}
  	Test:       	[Default Backend] disables access logging for default backend
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [322.919 seconds]
[Flag] disable-catch-all [BeforeEach] should delete Ingress updated to catch-all 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/disable_catch_all.go:70

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-disabled-catch-all-1602667633064303244-xhqvh"}
  	Test:       	[Flag] disable-catch-all should delete Ingress updated to catch-all
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 13 lines ...

• Failure in Spec Setup (BeforeEach) [354.417 seconds]
[Setting] [Security] no-auth-locations [BeforeEach] should return status code 200 when accessing '/noauth' unauthenticated 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/no_auth_locations.go:83

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-no-auth-locations-1602667950307075818-2pf47"}
  	Test:       	[Setting] [Security] no-auth-locations should return status code 200 when accessing '/noauth' unauthenticated
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 5 lines ...

• Failure in Spec Setup (BeforeEach) [362.892 seconds]
[Service] backend status code 503 [BeforeEach] should return 503 when all backend service endpoints are unavailable 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/servicebackend/service_backend.go:53

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-service-backend-1602667951131584518-sqsl5"}
  	Test:       	[Service] backend status code 503 should return 503 when all backend service endpoints are unavailable
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 5 lines ...

• Failure in Spec Setup (BeforeEach) [361.201 seconds]
[Setting] enable-multi-accept [BeforeEach] should be disabled when set to false 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/multi_accept.go:49

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-enable-multi-accept-1602667954439681320-dnlx2"}
  	Test:       	[Setting] enable-multi-accept should be disabled when set to false
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 5 lines ...

• Failure in Spec Setup (BeforeEach) [366.232 seconds]
[Flag] disable-catch-all [BeforeEach] should delete Ingress updated to catch-all 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/disable_catch_all.go:70

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-disabled-catch-all-1602667955998490793-tsxjm"}
  	Test:       	[Flag] disable-catch-all should delete Ingress updated to catch-all
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 5 lines ...

• Failure in Spec Setup (BeforeEach) [369.307 seconds]
[Annotations] proxy-ssl-* [BeforeEach] proxy-ssl-location-only flag should change the nginx config server part 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/proxyssl.go:150

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-proxyssl-1602667955387883973-s7nqt"}
  	Test:       	[Annotations] proxy-ssl-* proxy-ssl-location-only flag should change the nginx config server part
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 5 lines ...

• Failure in Spec Setup (BeforeEach) [371.019 seconds]
[Default Backend] [BeforeEach] disables access logging for default backend 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/defaultbackend/default_backend.go:106

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-default-backend-1602667956000463559-6gh86"}
  	Test:       	[Default Backend] disables access logging for default backend
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 7 lines ...

• Failure in Spec Setup (BeforeEach) [318.055 seconds]
[Annotations] affinity session-cookie-name [BeforeEach] should set the path to /something on the generated cookie 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/affinity.go:99

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-affinity-1602668212915140435-jfjt5"}
  	Test:       	[Annotations] affinity session-cookie-name should set the path to /something on the generated cookie
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 108 lines ...

• Failure in Spec Setup (BeforeEach) [412.498 seconds]
[Annotations] affinity session-cookie-name [BeforeEach] should set the path to /something on the generated cookie 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/affinity.go:99

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-affinity-1602668531069656277-xsl9t"}
  	Test:       	[Annotations] affinity session-cookie-name should set the path to /something on the generated cookie
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 41 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "lua_ingress")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		lua_ingress = res
		lua_ingress.set_config({
			use_forwarded_headers = false,
			use_proxy_protocol = false,
			is_ssl_passthrough_enabled = false,
... skipping 6 lines ...
			hsts_preload = false,
		})
		end
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		balancer = res
		end
		
		ok, res = pcall(require, "monitor")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		monitor = res
		end
		
		ok, res = pcall(require, "certificate")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		certificate = res
		certificate.is_ocsp_stapling_enabled = false
		end
		
		ok, res = pcall(require, "plugins")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		plugins = res
		end
		-- load all plugins that'll be used here
	plugins.init({  })
	}
... skipping 81 lines ...
		
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	
	error_log  /var/log/nginx/error.log notice;
	
	resolver 10.96.0.10 valid=30s;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
... skipping 194 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 4 lines ...
		location /healthz {
			
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			
			allow 127.0.0.1;
			
			allow ::1;
... skipping 80 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_balancer = res
		end
	}
	
	init_worker_by_lua_block {
... skipping 3 lines ...
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	
	error_log  /var/log/nginx/error.log;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		
		balancer_by_lua_block {
			tcp_udp_balancer.balance()
... skipping 73 lines ...

• Failure [650.645 seconds]
[Default Backend] custom service [It] uses custom default backend that returns 200 as status code 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/defaultbackend/custom_default_backend.go:36

  
  	Error Trace:	custom_default_backend.go:46
  	            				runner.go:113
  	            				runner.go:64
  	            				it_node.go:26
  	            				spec.go:215
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timed out waiting for the condition"}
  	Test:       	[Default Backend] custom service uses custom default backend that returns 200 as status code
  	Messages:   	updating deployment
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/defaultbackend/custom_default_backend.go:46
------------------------------
... skipping 2 lines ...


• [SLOW TEST:60.375 seconds]
Debug CLI should produce valid JSON for /dbg general 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/dbg/main.go:85
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 1h0m0s timeout","severity":"error","time":"2020-10-14T09:50:13Z"}
W1014 09:50:46.868213      29 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W1014 09:50:53.970500      29 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress


• [SLOW TEST:259.125 seconds]
[Setting] [SSL] TLS protocols, ciphers and headers) should configure TLS protocol enforcing TLS v1.0 
... skipping 20 lines ...

• Failure in Spec Setup (BeforeEach) [314.679 seconds]
[Flag] ingress-class [BeforeEach] Without a specific ingress-class should ignore Ingress with class 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/ingress_class.go:70

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-ingress-class-1602668842016027289-tmp6x"}
  	Test:       	[Flag] ingress-class Without a specific ingress-class should ignore Ingress with class
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 28 lines ...

• Failure in Spec Setup (BeforeEach) [315.451 seconds]
[Setting] hash size [BeforeEach] Check server names hash size should set server_names_hash_bucket_size 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/hash-size.go:40

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-hash-size-1602668865977221026-mlg9m"}
  	Test:       	[Setting] hash size Check server names hash size should set server_names_hash_bucket_size
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 50 lines ...

• Failure in Spec Setup (BeforeEach) [316.852 seconds]
[Flag] ingress-class [BeforeEach] With a specific ingress-class should ignore Ingress with no class 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/ingress_class.go:122

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-ingress-class-1602669071243758989-js5f5"}
  	Test:       	[Flag] ingress-class With a specific ingress-class should ignore Ingress with no class
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 64 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "lua_ingress")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		lua_ingress = res
		lua_ingress.set_config({
			use_forwarded_headers = false,
			use_proxy_protocol = false,
			is_ssl_passthrough_enabled = false,
... skipping 6 lines ...
			hsts_preload = false,
		})
		end
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		balancer = res
		end
		
		ok, res = pcall(require, "monitor")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		monitor = res
		end
		
		ok, res = pcall(require, "certificate")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		certificate = res
		certificate.is_ocsp_stapling_enabled = false
		end
		
		ok, res = pcall(require, "plugins")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		plugins = res
		end
		-- load all plugins that'll be used here
	plugins.init({  })
	}
... skipping 81 lines ...
		
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	
	error_log  /var/log/nginx/error.log notice;
	
	resolver 10.96.0.10 valid=30s;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
... skipping 194 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 4 lines ...
		location /healthz {
			
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			
			allow 127.0.0.1;
			
			allow ::1;
... skipping 169 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 99 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 147 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 76 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_balancer = res
		end
	}
	
	init_worker_by_lua_block {
... skipping 3 lines ...
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	
	error_log  /var/log/nginx/error.log;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		
		balancer_by_lua_block {
			tcp_udp_balancer.balance()
... skipping 52 lines ...
I1014 09:57:19.009147       7 controller.go:162] "Backend successfully reloaded"
I1014 09:57:19.009262       7 controller.go:173] "Initial sync, sleeping for 1 second"
I1014 09:57:19.009322       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-global-external-auth-1602668697485690873-ccb9k", Name:"nginx-ingress-controller-579c84cc94-ljf4s", UID:"087d5e9c-8e2d-413d-b6bc-111a0789c667", APIVersion:"v1", ResourceVersion:"19110", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1014 09:57:19.813354       7 main.go:184] "Received SIGTERM, shutting down"
I1014 09:57:19.813400       7 nginx.go:365] "Shutting down controller queues"
I1014 09:57:19.816921       7 nginx.go:381] "Stopping NGINX process"
W1014 09:57:20.041399       7 controller.go:191] Dynamic reconfiguration failed: Post "http://127.0.0.1:10246/configuration/backends": dial tcp 127.0.0.1:10246: connect: connection refused
E1014 09:57:20.041441       7 controller.go:195] Unexpected failure reconfiguring NGINX:
Post "http://127.0.0.1:10246/configuration/backends": dial tcp 127.0.0.1:10246: connect: connection refused
E1014 09:57:20.041470       7 queue.go:130] "requeuing" err="Post \"http://127.0.0.1:10246/configuration/backends\": dial tcp 127.0.0.1:10246: connect: connection refused" key="initial-sync"
W1014 09:57:20.531495       7 main.go:188] Error during shutdown: signal: terminated
I1014 09:57:20.531537       7 main.go:192] "Handled quit, awaiting Pod deletion"

STEP: Dumping namespace content
Oct 14 09:57:24.075: INFO: NAME                                            READY   STATUS    RESTARTS   AGE
pod/echo-7bcbb76d45-d2thc                       1/1     Running   0          11m
pod/httpbin-7df8b59b74-4c2nq                    1/1     Running   0          11m
... skipping 18 lines ...

• Failure [746.988 seconds]
[Setting] [Security] global-auth-url when global external authentication is configured [It] should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/global_external_auth.go:124

  
  	Error Trace:	reporter.go:23
  	            				reporter.go:23
  	            				chain.go:21
  	            				request.go:986
  	            				request.go:906
  	            				request.go:866
  	            				global_external_auth.go:142
... skipping 7 lines ...
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	
  	            		Error Trace:	reporter.go:23
  	            		            				chain.go:21
  	            		            				request.go:986
  	            		            				request.go:906
  	            		            				request.go:866
  	            		            				global_external_auth.go:142
  	            		            				runner.go:113
... skipping 6 lines ...
  	            		            				spec_runner.go:66
  	            		            				suite.go:79
  	            		            				ginkgo_dsl.go:229
  	            		            				ginkgo_dsl.go:210
  	            		            				e2e.go:68
  	            		            				e2e_test.go:30
  	            		Error:      	Get "http://10.96.250.246/foo": dial tcp 10.96.250.246:80: connect: connection refused
  	Test:       	[Setting] [Security] global-auth-url when global external authentication is configured should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service
  

  /home/prow/go/pkg/mod/github.com/gavv/httpexpect/v2@v2.1.0/reporter.go:23
------------------------------
[BeforeEach] [Flag] ingress-class
... skipping 4 lines ...

• Failure in Spec Setup (BeforeEach) [317.127 seconds]
[Flag] ingress-class [BeforeEach] Without a specific ingress-class should ignore Ingress with class 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/ingress_class.go:70

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-ingress-class-1602669156743641454-l5czj"}
  	Test:       	[Flag] ingress-class Without a specific ingress-class should ignore Ingress with class
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 6 lines ...

• Failure in Spec Setup (BeforeEach) [321.405 seconds]
[Default Backend] [BeforeEach] enables access logging for default backend 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/defaultbackend/default_backend.go:89

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-default-backend-1602669170142363721-vdn4f"}
  	Test:       	[Default Backend] enables access logging for default backend
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 24 lines ...

• Failure in Spec Setup (BeforeEach) [316.030 seconds]
[Setting] access-log [BeforeEach] stream-access-log-path use the specified configuration 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/settings/access_log.go:63

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-access-log-1602669278104944185-jqg2m"}
  	Test:       	[Setting] access-log stream-access-log-path use the specified configuration
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 48 lines ...

• Failure in Spec Setup (BeforeEach) [313.834 seconds]
[Annotations] cors-* [BeforeEach] should set cors max-age 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/cors.go:76

  
  	Error Trace:	framework.go:114
  	            				runner.go:113
  	            				runner.go:64
  	            				setup_nodes.go:15
  	            				spec.go:193
  	            				spec.go:138
  	            				spec_runner.go:200
  	            				spec_runner.go:170
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Expected nil, but got: &errors.errorString{s:"timeout waiting at least one ingress-nginx pod running in namespace e2e-tests-cors-1602669473977996926-57gbl"}
  	Test:       	[Annotations] cors-* should set cors max-age
  	Messages:   	updating ingress controller pod information
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/framework/framework.go:114
------------------------------
... skipping 52 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "lua_ingress")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		lua_ingress = res
		lua_ingress.set_config({
			use_forwarded_headers = false,
			use_proxy_protocol = false,
			is_ssl_passthrough_enabled = false,
... skipping 6 lines ...
			hsts_preload = false,
		})
		end
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		balancer = res
		end
		
		ok, res = pcall(require, "monitor")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		monitor = res
		end
		
		ok, res = pcall(require, "certificate")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		certificate = res
		certificate.is_ocsp_stapling_enabled = false
		end
		
		ok, res = pcall(require, "plugins")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		plugins = res
		end
		-- load all plugins that'll be used here
	plugins.init({  })
	}
... skipping 81 lines ...
		
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	
	error_log  /var/log/nginx/error.log notice;
	
	resolver 10.96.0.10 valid=30s;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
... skipping 194 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 4 lines ...
		location /healthz {
			
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			
			allow 127.0.0.1;
			
			allow ::1;
... skipping 121 lines ...
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
... skipping 76 lines ...
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_balancer = res
		end
	}
	
	init_worker_by_lua_block {
... skipping 3 lines ...
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	
	error_log  /var/log/nginx/error.log;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		
		balancer_by_lua_block {
			tcp_udp_balancer.balance()
... skipping 47 lines ...
I1014 10:02:50.101968       7 controller.go:145] "Configuration changes detected, backend reload required"
I1014 10:02:50.110093       7 leaderelection.go:253] successfully acquired lease e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/ingress-controller-leader-nginx
I1014 10:02:50.117581       7 status.go:84] "New leader elected" identity="nginx-ingress-controller-64984668bc-8w7f5"
I1014 10:02:52.630052       7 controller.go:162] "Backend successfully reloaded"
I1014 10:02:52.631745       7 controller.go:173] "Initial sync, sleeping for 1 second"
I1014 10:02:52.635835       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-dynamic-certificate-1602669704188502592-j2jlj", Name:"nginx-ingress-controller-64984668bc-8w7f5", UID:"064dcbb1-8fd5-4a23-91de-343bc7084668", APIVersion:"v1", ResourceVersion:"22429", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W1014 10:03:07.012895       7 backend_ssl.go:46] Error obtaining X.509 certificate: no object matching key "e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com" in local store
I1014 10:03:07.013350       7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"e2e-tests-dynamic-certificate-1602669704188502592-j2jlj", Name:"foo.com", UID:"e1bbbdf5-5671-4c5a-b45d-752c373412a6", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"22533", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com
W1014 10:03:07.626103       7 controller.go:1153] Error getting SSL certificate "e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com": local SSL certificate e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com was not found. Using default certificate
I1014 10:03:07.627928       7 controller.go:145] "Configuration changes detected, backend reload required"
I1014 10:03:11.893211       7 controller.go:162] "Backend successfully reloaded"
I1014 10:03:11.894664       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-dynamic-certificate-1602669704188502592-j2jlj", Name:"nginx-ingress-controller-64984668bc-8w7f5", UID:"064dcbb1-8fd5-4a23-91de-343bc7084668", APIVersion:"v1", ResourceVersion:"22429", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W1014 10:03:12.204968       7 controller.go:1153] Error getting SSL certificate "e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com": local SSL certificate e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com was not found. Using default certificate
10.244.3.2 - - [14/Oct/2020:10:03:23 +0000] "GET / HTTP/1.1" 200 620 "-" "Go-http-client/1.1" 88 0.001 [e2e-tests-dynamic-certificate-1602669704188502592-j2jlj-echo-80] [] 10.244.2.64:80 620 0.000 200 794eca20c372da71c37a2c7d5cb30e08
I1014 10:03:25.783066       7 store.go:425] "Secret was added and it is used in ingress annotations. Parsing" secret="e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com"
I1014 10:03:25.783978       7 backend_ssl.go:66] "Adding secret to local store" name="e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com"
10.244.3.2 - - [14/Oct/2020:10:03:41 +0000] "GET / HTTP/1.1" 200 620 "-" "Go-http-client/1.1" 88 0.013 [e2e-tests-dynamic-certificate-1602669704188502592-j2jlj-echo-80] [] 10.244.2.64:80 620 0.020 200 24ba02a5d004516aa1e6f6b36f7098d4
I1014 10:03:41.390217       7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"e2e-tests-dynamic-certificate-1602669704188502592-j2jlj", Name:"foo.com", UID:"e1bbbdf5-5671-4c5a-b45d-752c373412a6", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"22754", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress e2e-tests-dynamic-certificate-1602669704188502592-j2jlj/foo.com
I1014 10:03:41.403132       7 controller.go:145] "Configuration changes detected, backend reload required"
... skipping 21 lines ...

• Failure [126.798 seconds]
[Lua] dynamic certificates given an ingress with TLS correctly configured [It] picks up a non-certificate only change 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/lua/dynamic_certificates.go:218

  
  	Error Trace:	dynamic_configuration.go:241
  	            				dynamic_certificates.go:230
  	            				runner.go:113
  	            				runner.go:64
  	            				it_node.go:26
  	            				spec.go:215
  	            				spec.go:138
... skipping 2 lines ...
  	            				spec_runner.go:66
  	            				suite.go:79
  	            				ginkgo_dsl.go:229
  	            				ginkgo_dsl.go:210
  	            				e2e.go:68
  	            				e2e_test.go:30
  	Error:      	Not equal: 
  	            	expected: 404
  	            	actual  : 200
  	Test:       	[Lua] dynamic certificates given an ingress with TLS correctly configured picks up a non-certificate only change
  

  /home/prow/go/src/k8s.io/ingress-nginx/test/e2e/lua/dynamic_configuration.go:241
... skipping 3 lines ...


• [SLOW TEST:200.947 seconds]
[Annotations] mirror-* should set mirror-target to http://localhost/mirror 
/home/prow/go/src/k8s.io/ingress-nginx/test/e2e/annotations/mirror.go:36
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:250","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2020-10-14T10:05:13Z"}