New RhodeCode Installer using Docker

Hi, community !

We’re rolling out a new installer for RhodeCode.
It’s called rcstack.

This is our new Docker stack, and it also gives us an easier path for the migration to Python 3. If you want to try it out and give us feedback, please check out the new stack here:

mkdir docker-rhodecode && cd docker-rhodecode
curl -L -s -o rcstack https://dls.rhodecode.com/get-rcstack && chmod +x rcstack
./rcstack get-started

We’re posting just minimal info for now, to get feedback on whether things are clear and straightforward.

The Docker stack will replace the currently used rccontrol installer in the near future.

RhodeCode Team

Installed on Alpine Linux, got most of the way there…

The first issue was related to the BusyBox version of the ‘cp’ command. The setup script failed running cp --backup=numbered, but installing the version of cp found in coreutils fixed that issue.

apk add --upgrade coreutils
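
For context, the failing option is GNU-specific; roughly (the file names here are only placeholders):

# BusyBox cp does not implement --backup, so this fails; the GNU cp from coreutils accepts it
cp --backup=numbered rhodecode.ini rhodecode.ini.orig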

The second issue was related to a find -printf invocation that BusyBox’s find doesn’t support. A quick addition of the appropriate package fixed that too.

apk -U add findutils
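
For reference, the unsupported feature is GNU find’s -printf action, which the findutils package provides; the path below is only an example:

# accepted by GNU find from findutils; BusyBox find rejects it with "find: unrecognized: -printf"
find /etc -maxdepth 1 -type f -printf '%f\n'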

The setup finally succeeded, and I was able to run ./rcstack init. The router and services stacks were also set up fine.

However, executing ./rcstack stack rhodecode up --detach results with this error:

 => ERROR [rhodecode_base 14/25] COPY --chown=rhodecode:rhodecode .cache/locale-archive /var/opt/   0.0s

And:

failed to solve: failed to compute cache key: failed to calculate checksum of ref moby::q01k4c85arg7b8skxl0lpj5si: "/.cache/locale-archive": not found

If I dig into the service/rhodecode/rhodecode.dockerfile and comment out the line mentioned above, the setup gets a little further, then fails again, likely because essential steps were skipped.

So, the Docker setup is not super intuitive; however, it appears that it will work once a few errors are resolved. The initial errors are related to using Alpine as a minimalist Docker platform, which may not affect many users and likely doesn’t require mitigation. The errors encountered while starting the RhodeCode container, however, seem to be happening inside the container itself.

Also, I am not certain how to proceed once the setup is complete, as I don’t see any further documentation for the Docker setup. There are a few more moving parts in the Docker setup, like Prometheus and Grafana, that are not part of the standard installation. However, getting started might make more sense after consulting the standard docs.

Then again, it could be user error. Please let me know if this is the case…

Just to verify, I installed on Debian and received the same error when starting the Rhodecode container.

Hi !

Thanks for sharing your feedback!

  • Re: cp --backup=numbered. We need to make this optional and fall back to plain cp without --backup if it’s not available.

  • Re: ERROR [rhodecode_base 14/25] COPY
    This is very odd, because it indicates the installer is trying to build a new Docker image rather than pulling one from here: Docker

Are you sure you have the correct image set?

i.e. is the version set to 4.28.0?

What’s the output of ./rcstack version-info ?

also to prevent a build you can run:

./rcstack stack rhodecode up --detach --no-build

As shown above:

  • The version appears to be 4.28.

  • ./rcstack stack rhodecode up --detach --no-build also fails.

However, the other containers are working:

Can you try again? We think we fixed the missing image for the CE edition.

Okay, now the RhodeCode container does start successfully, but it continuously restarts.

Because the RhodeCode container is constantly restarting, the admin interface is inaccessible.

The docs folder contains examples from older Docker setup procedures, so I am not certain what to look for. Any further recommendations?

Yes, please run ./rcstack stack rhodecode logs -f; you should see the error on container restart if you wait a bit.

Do you have a detailed error on that by any chance ?

You will probably see the same thing if you run on a fresh Debian install.

rc_cluster_apps-celery-1       | {"timestamp": "2023-05-21T05:43:52.882904+00:00", "levelname": "ERROR", "name": "celery.worker.consumer.consumer", "message": "consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.\nTrying again in 32.00 seconds... (16/100)\n", "req_id": "00000000-0000-0000-0000-000000000000"}
rc_cluster_apps-celery-beat-1  | {"timestamp": "2023-05-21T05:44:01.017637+00:00", "levelname": "ERROR", "name": "rhodecode.rc_ee.lib.celerylib.scheduler", "message": "Failed to fetch schedule entries", "req_id": "00000000-0000-0000-0000-000000000000", "exc_info": "Traceback (most recent call last):\n  File \"rc_ee/lib/celerylib/scheduler.py\", line 279, in rc_ee.lib.celerylib.scheduler.DbScheduler.get_all_schedules\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/query.py\", line 3244, in all\n    return list(self)\n  File \"/nix/store/dxpzvsgwhvbii962bahikcb72q0wnb0k-python2.7-rhodecode-enterprise-ce-4.28.0/lib/python2.7/site-packages/rhodecode/lib/caching_query.py\", line 93, in __iter__\n    return super_.__iter__()\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/query.py\", line 3403, in __iter__\n    return self._execute_and_instances(context)\n  File \"/nix/store/dxpzvsgwhvbii962bahikcb72q0wnb0k-python2.7-rhodecode-enterprise-ce-4.28.0/lib/python2.7/site-packages/rhodecode/lib/caching_query.py\", line 118, in _execute_and_instances\n    return super_._execute_and_instances(context)\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/query.py\", line 3425, in _execute_and_instances\n    querycontext, self._connection_from_session, close_with_result=True\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/query.py\", line 3440, in _get_bind_args\n    mapper=self._bind_mapper(), clause=querycontext.statement, **kw\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/query.py\", line 3418, in _connection_from_session\n    conn = self.session.connection(**kw)\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/session.py\", line 1128, in connection\n    bind = self.get_bind(mapper, clause=clause, **kw)\n  File \"/nix/store/l02aqnlgizsvr8f1gcyz9wi9x3czz0vb-python2.7-sqlalchemy-1.3.15/lib/python2.7/site-packages/sqlalchemy/orm/session.py\", line 1551, in get_bind\n    % (\", \".join(context))\nUnboundExecutionError: Could not locate a bind configured on mapper mapped class ScheduleEntry->schedule_entries, SQL expression or this Session"}

Regarding Alpine Linux, the specific printf-related error is:

Does this look right?                                     
                                                          
RhodeCode Edition  : rhodecode-ce                                                                                    
License Token      : xxx-xxx-xxx-xxx                                                                             
Hostname           : hostname
Use SSL            : n                                    
Email              : email@hostname                                                                         
Admin user         : admin                                                                                           
Admin password     : secret0                                                                                     
                                                                                                                     
ENTER to continue, 'n' to try again, Ctrl+C to exit: 
re-using existing config at: .rcstack.ini
bootstrap_config: init runtime env config at: /home/rhodecode/.custom/.runtime.env
'/home/rhodecode/templates/ini/0_edge/rhodecode.ini' -> '/home/rhodecode/config/_shared/rhodecode.ini'
'/home/rhodecode/templates/ini/0_edge/vcsserver.ini' -> '/home/rhodecode/config/_shared/vcsserver.ini'
* bootstrap: 'bootstrap_v1_overrides' stage not found; running now...
find: unrecognized: -printf
BusyBox v1.36.0 (2023-05-15 03:12:37 UTC) multi-call binary.

Is it possible to set up a log directory on the host to catch these kinds of errors?

The rhodecode/docker-compose-apps.src.yaml sets a bash history file, but could something similar be done for error logs?
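
As a stopgap, I assume the output of the logs command could simply be teed into a file on the host, something like this (the directory name is arbitrary):

mkdir -p ~/rcstack-logs
./rcstack stack rhodecode logs -f 2>&1 | tee ~/rcstack-logs/rhodecode-stack.log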

a new stack was pushed today:

  • it no longer relies on find being present
  • cp will fall back to a plain copy without --backup=numbered if it is not available on the system (sketched below)
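
Roughly, the cp fallback works like this (a simplified sketch, not the literal installer code; SRC and DST are placeholders):

# try a GNU-style numbered backup first; fall back to a plain copy if cp doesn't support --backup
cp --backup=numbered "$SRC" "$DST" 2>/dev/null || cp "$SRC" "$DST"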

Thank you for contributing your feedback

It would be hard since it’s docker internals.

But the error you posted is for celery/beat. Can you check the logs of RhodeCode?

./rcstack stack rhodecode logs rhodecode -f

FYI:

We did a Debian 10 install on a new machine using the rhodecode-ce edition.

This worked out of the box without errors.

Which version did you use?

Okay, here is an update. Installed on a fresh Alpine Linux machine today…

  1. find is no longer required.
  2. --backup=numbered is no longer used.
  3. Docker cluster installation completes as expected.
  4. RhodeCode is still not working (see attached).

After install, attempting to connect to the running instance:

Verifying all services are running:

Today’s update was performed on a fresh installation of Alpine Linux in order to test the find and backup changes, and the CE edition was selected.

I believe the Docker setup should work the same regardless of the underlying host, correct? Assuming so, you should be seeing what I am seeing.

Any further suggestions? I looked at the docs under rhodecode/docs, and other than adjusting the ini files, I do not see anything obvious, but perhaps I am missing something…