I seem to be running into the following error at random when cloning:
abort: HTTP Error 502: Proxy Error
This error seems to be related to an Apache bug, found here.
The potential solution stated there was "If you’re hitting a keepalive race, you can also set smax=0 and a ttl lower than your backend keepalive timeout." I tried that by modifying the ProxyPass directive in rhodecode.conf to include smax=0 ttl=120 (documentation here: mod_proxy - Apache HTTP Server Version 2.4).
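For reference, the relevant lines in my rhodecode.conf now look roughly like this (the backend address 127.0.0.1:10002 is just a placeholder for our actual bind address):

# Reverse proxy to the RhodeCode backend, with the suggested pool settings
ProxyPass / http://127.0.0.1:10002/ smax=0 ttl=120
ProxyPassReverse / http://127.0.0.1:10002/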
This solution did not seem to work, as I have run into the issue again. I am wondering where I can find the default ttl used by RhodeCode so I can confirm the ProxyPass ttl is lower.
I have looked in the rhodecode.ini file and do not see a default set.
Any suggestions appreciated.
We are using RhodeCode Enterprise 4.12.4 Community Edition.
This could potentially be a timeout generated by our workers. Please upgrade to the later 4.24.X versions and check rhodecode.ini for the settings below.
; ###########################
; GUNICORN APPLICATION SERVER
; ###########################
; run with gunicorn --log-config rhodecode.ini --paste rhodecode.ini
; Module to use, this setting shouldn't be changed
use = egg:gunicorn#main
; Sets the number of process workers. More workers means more concurrent connections
; RhodeCode can handle at the same time. Each additional worker also increases
; memory usage, as each has its own set of caches.
; Recommended value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers, but no more
; than 8-10 except for really big deployments, e.g. 700-1000 users.
; `instance_id = *` must be set in the [app:main] section below (which is the default)
; when using more than 1 worker.
workers = 2
; Gunicorn log level
loglevel = info
; Process name visible in process list
proc_name = rhodecode
; Type of worker class, one of `sync`, `gevent`
; Recommended type is `gevent`
worker_class = gevent
; The maximum number of simultaneous clients per worker. Valid only for gevent
worker_connections = 10
; Max number of requests that a worker will handle before being gracefully restarted.
; Prevents memory leaks; jitter adds variability so that not all workers are restarted at once.
max_requests = 1000
max_requests_jitter = 30
; Amount of time a worker can spend handling a request before it
; gets killed and restarted. By default set to 21600 (6hrs)
; Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h)
timeout = 21600
; The maximum size of HTTP request line in bytes.
; 0 for unlimited
limit_request_line = 0
; Limit the number of HTTP header fields in a request.
; By default this value is 100 and can't be larger than 32768.
limit_request_fields = 32768
; Limit the allowed size of an HTTP request header field.
; Value is a positive number or 0.
; Setting it to 0 will allow unlimited header field sizes.
limit_request_field_size = 0
; Timeout for graceful worker restart.
; After receiving a restart signal, workers have this much time to finish
; serving requests. Workers still alive after the timeout (starting from the
; receipt of the restart signal) are force killed.
; Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h)
graceful_timeout = 3600
; The number of seconds to wait for requests on a Keep-Alive connection.
; Generally set in the 1-5 seconds range.
keepalive = 2
; Maximum memory each worker can use before it will receive a
; graceful restart signal. 0 = memory monitoring is disabled.
; Examples: 268435456 (256MB), 536870912 (512MB)
; 1073741824 (1GB), 2147483648 (2GB), 4294967296 (4GB)
memory_max_usage = 0
; How often in seconds to check for memory usage for each gunicorn worker
memory_usage_check_interval = 60
; Threshold at which we don't recycle a worker if garbage collection
; frees up enough resources. Before each restart we try to run GC on the worker;
; if enough memory is freed after that, the restart will not happen.
memory_usage_recovery_threshold = 0.8
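Regarding the default ttl question: the ttl is an Apache-side mod_proxy parameter rather than a RhodeCode setting; the relevant keep-alive value on the backend side is the gunicorn keepalive shown above (2 seconds by default). Assuming Apache proxies straight to the gunicorn workers, following the Apache advice of keeping ttl lower than the backend keep-alive would mean a ttl below that value. A sketch only, with the backend address as a placeholder:

# Keep the connection pool ttl below the gunicorn keepalive of 2 seconds
ProxyPass / http://127.0.0.1:10002/ smax=0 ttl=1
ProxyPassReverse / http://127.0.0.1:10002/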