Cross-Site forgery detected for nearly all POST requests

Hiya all,

we are running RhodeCode CE 4.27.1 behind an SSL-terminating Apache proxy. For authentication, we use header authentication, extracting the username from the client certificate (a bit unusual, but historically grown). We redirect any non-SSL access to the SSL version.

This basically works. However:

1.) Sometimes (not always), when trying to show e.g. the main repository page, there are requests to http://baseurl (without SSL) that are, of course, blocked by the browser

2.) When trying to do anything that involves a POST request, e.g. change any setting in the admin panel, chances are high that we receive either a

403 Forbidden
Cross-site request forgery detected, request denied…

or a plain 404.

Not always, though: sometimes it works, usually after reloading the page multiple times or trying to log out (which yields another 403) and going back. This makes it more or less unusable. It also seems this behaviour changed at some point: are there any changes between RC 4.17 and 4.27 that might be related? We upgraded recently, and I hadn't noticed the problem before, but maybe I was just lucky.

Also, the log for these failed requests shows a POST URL with plain http most of the time.

We have used the guides from here: Apache HTTP Server Configuration — RhodeCode Enterprise 4.27.1 documentation for the Apache configuration, and an older one, which I can't find anymore, for the header authentication.
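For reference, the relevant part of our vhost looks roughly like the sketch below. This is a hedged, simplified excerpt in the spirit of the RhodeCode Apache guide, not our literal config; the hostname and backend port are illustrative. The `X-Forwarded-Proto` header is what tells the backend that the original request was HTTPS, so that generated URLs use the https scheme:

```apache
<VirtualHost *:443>
    # Hostname is illustrative
    ServerName rhodecode.example.com
    SSLEngine on

    # Forward the original scheme and host to the backend, so that
    # RhodeCode generates https:// URLs instead of plain http://
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On

    # Backend address/port are illustrative
    ProxyPass        /rhodecode http://127.0.0.1:10002/rhodecode
    ProxyPassReverse /rhodecode http://127.0.0.1:10002/rhodecode
</VirtualHost>
```

If `X-Forwarded-Proto` is missing or not honored, that would explain the plain-http POST URLs showing up in the logs.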

The fact that it seems to work after enough coaxing leads me to believe that it's not a general problem with the setup, but rather something like a race condition, a caching problem, or the like. Right now, however, I'm totally stumped about where even to begin. I've tried enabling and disabling basically every setting in the RhodeCode config that could have something to do with this, but to no avail. These issues seem to be related, but maybe they're not…

We don’t really want to turn off HEADER authentication, but it would be an option if nothing else works.

I can provide logs and/or config files if necessary, just let me know which ones you think are helpful.


CSRF protection relies on session data. What session backend do you have configured inside the .ini file?

We are using the default memory backend:

## .session.type is type of storage options for the session, current allowed
## types are file, ext:memcached, ext:redis, ext:database, and memory (default).
#beaker.session.type = file
#beaker.session.data_dir = %(here)s/data/sessions

## db based session, fast, and allows easy management over logged in users
#beaker.session.type = ext:database
#beaker.session.type = ext:database
#beaker.session.table_name = db_session
#beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode
#beaker.session.sa.url = mysql://root:secret@
#beaker.session.sa.pool_recycle = 3600
#beaker.session.sa.echo = false

beaker.session.key = community-1
beaker.session.secret = 65d22a3741c24b6f802f7867649d6cf9
beaker.session.lock_dir = %(here)s/data/sessions/lock

## Secure encrypted cookie. Requires AES and AES python libraries
## you must disable beaker.session.secret to use this
#beaker.session.encrypt_key = key_for_encryption
#beaker.session.validate_key = validation_key

## sets session as invalid (also logging out the user) if it has not been
## accessed for the given amount of time in seconds
beaker.session.timeout = 2592000
beaker.session.httponly = true
## Path to use for the cookie. Set to prefix if you use prefix middleware
beaker.session.cookie_path = /rhodecode

## uncomment for https secure cookie
beaker.session.secure = false

## auto save the session, to avoid having to call .save()
beaker.session.auto = false

## default cookie expiration time in seconds, set to `true` to set expire
## at browser close
#beaker.session.cookie_expires = 3600

That said - I have rolled back all settings to the state we had before the upgrade, and so far the problem seems to be gone. We had manually tried to optimize the instance by tuning the number of workers, and this seems to have broken the setup; but perhaps it was an unrelated change someone made out of desperation. The fact that it occasionally worked still seems to point to a problem with workers/caches, IMHO; otherwise I would have expected it to never work. But maybe I'm misunderstanding something.

So for sessions you NEVER want to use the memory backend: it causes the session data to differ from request to request, unless by chance a request lands on the same worker that already has the session data in memory. We should probably be more explicit about that.
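The failure mode can be illustrated with a small Python simulation (not RhodeCode code; the `Worker` class and token handling are hypothetical stand-ins). Each gunicorn-style worker process keeps its own in-memory session store, so a CSRF token written during a GET handled by one worker is invisible to another worker handling the subsequent POST:

```python
import secrets

class Worker:
    """A stand-in for one web worker process with a 'memory' session backend."""
    def __init__(self, name):
        self.name = name
        self.sessions = {}  # per-process session store, never shared

    def render_form(self, session_id):
        # GET: generate a CSRF token and remember it in this worker's memory
        token = secrets.token_hex(16)
        self.sessions.setdefault(session_id, {})["csrf_token"] = token
        return token

    def handle_post(self, session_id, token):
        # POST: valid only if THIS worker saw the token being issued
        stored = self.sessions.get(session_id, {}).get("csrf_token")
        if stored == token:
            return "200 OK"
        return "403 Cross-site request forgery detected"

worker_a, worker_b = Worker("a"), Worker("b")
sid = "session-cookie-value"

token = worker_a.render_form(sid)        # GET happens to land on worker A
print(worker_a.handle_post(sid, token))  # POST routed to A: 200 OK
print(worker_b.handle_post(sid, token))  # POST routed to B: 403, token unknown
```

With a shared backend (file, database, Redis), both workers would read the same stored token, and the POST would succeed regardless of routing.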

Please switch to Redis or file-based sessions and that should be fine.
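Concretely, that means something like the following in the .ini, based on the commented hints already present in the posted config (the Redis URL is illustrative and assumes a Redis instance on localhost):

```ini
## file-based sessions: stored on disk, shared across all workers
beaker.session.type = file
beaker.session.data_dir = %(here)s/data/sessions

## or Redis-backed sessions (host/port/db number are illustrative):
#beaker.session.type = ext:redis
#beaker.session.url = redis://localhost:6379/1
```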

More workers == more independent memory inside each worker, which makes the problem surface more often.

Thx - I will try this. Yes, we inherited this from very early on; I mean, "memory" in itself sounds super fast if you don't have a lot of sessions, but nowadays there are better options. The point about the workers makes sense, as does why it sometimes seemed to work.