6 files changed: +82 −37

@@ -8,11 +8,6 @@ Description
 
 TODO:
 
-Strategies
-----------
-
-TODO:
-
 Interaction schema
 ------------------
 
@@ -79,3 +74,13 @@ Basic configuration
 .. autopydantic_model:: syncmaster.settings.auth.keycloak.KeycloakProviderSettings
 .. autopydantic_model:: syncmaster.settings.auth.jwt.JWTSettings
 
+
+Local installation and testing
+------------------------------
+
+You can test Keycloak auth locally with docker compose:
+
+
+.. code-block:: console
+
+    $ docker compose -f docker-compose.test.yml up keycloak -d
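Once the compose-based Keycloak is up, the backend talks to it via its OpenID Connect discovery document. The snippet below is a minimal sketch of how that discovery URL is formed; the host, port, and realm name are assumptions for illustration, not values taken from this change:

```python
# Sketch: build the OIDC discovery URL for a local Keycloak instance.
# Host, port, and realm name are assumptions for illustration only.
# Note: Keycloak versions before 17 served this under an extra /auth prefix.
from urllib.parse import urljoin

def discovery_url(base: str, realm: str) -> str:
    # Keycloak exposes OIDC metadata at /realms/<realm>/.well-known/openid-configuration
    return urljoin(base, f"/realms/{realm}/.well-known/openid-configuration")

url = discovery_url("http://localhost:8080", "syncmaster")
print(url)
```

Fetching that URL in a browser is a quick way to confirm the container is up and the realm exists.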
@@ -79,15 +79,11 @@ Available *extras* are:
 Run database
 ~~~~~~~~~~~~
 
-Start Postgres instance somewhere, and set up environment variables:
+Start a Postgres instance somewhere, and set up the environment variable:
 
 .. code-block:: bash
 
-    POSTGRES_HOST=localhost
-    POSTGRES_PORT=5432
-    POSTGRES_DB=postgres
-    POSTGRES_USER=user
-    POSTGRES_PASSWORD=password
+    SYNCMASTER__DATABASE__URL=postgresql+asyncpg://syncmaster:changeme@db:5432/syncmaster
 
 You can use virtually any database supported by `SQLAlchemy <https://docs.sqlalchemy.org/en/20/core/engines.html#database-urls>`_,
 but the only one we really tested is Postgres.
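To sanity-check such a connection URL before starting the backend, its parts can be pulled apart with the standard library. This is a minimal sketch for illustration only; SyncMaster itself hands the URL to SQLAlchemy:

```python
# Sketch: validate the parts of a SQLAlchemy-style database URL.
# Uses only the standard library; illustrative, not SyncMaster code.
from urllib.parse import urlsplit

url = "postgresql+asyncpg://syncmaster:changeme@db:5432/syncmaster"
parts = urlsplit(url)

assert parts.scheme == "postgresql+asyncpg"    # dialect+driver
assert parts.username == "syncmaster"
assert parts.password == "changeme"
assert parts.hostname == "db"
assert parts.port == 5432
assert parts.path.lstrip("/") == "syncmaster"  # database name
print("URL looks well-formed")
```

A malformed port or a missing database name shows up immediately here, instead of as a connection error at backend startup.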
@@ -111,33 +107,11 @@ options and commands are just the same.
 Run RabbitMQ
 ~~~~~~~~~~~~
 
-Start RabbitMQ instance somewhere, and set up environment variables:
+Start a RabbitMQ instance somewhere, and set up the environment variable:
 
 .. code-block:: bash
 
-    RABBITMQ_HOST=somehost
-    RABBITMQ_PORT=5672
-    RABBITMQ_USER=user
-    RABBITMQ_PASSWORD=password
-
-Run worker
-~~~~~~~~~~
-
-.. note::
-
-    Before starting the worker you need to create a queue.
-    The queue is created by sending a post request to ``/queues`` endpoint (See Swagger doc for details).
-
-to start the worker you need to run the command
-
-.. code-block:: console
-
-    $ celery -A syncmaster.worker.config.celery worker --loglevel=info --max-tasks-per-child=1 -Q queue_name
-
-.. note::
-
-    The specified celery options are given as an example, you can specify other options you need.
-
+    SYNCMASTER__BROKER__URL=amqp://guest:guest@rabbitmq:5672/
 
 Run backend
 ~~~~~~~~~~~
@@ -121,7 +121,7 @@ Example:
     "started_at": "2024-01-19T16:30:07+03:00",
     "ended_at": null,
     "status": "STARTED",
-    "log_url": "https://kinaba.url/...",
+    "log_url": "https://kibana.url/...",
     "transfer_dump": {
         "transfer object JSON"
     },
@@ -17,15 +17,17 @@
     backend/install
     backend/architecture
     backend/auth/index
-    backend/openapi
     backend/configuration/index
+    backend/openapi
 
 
 .. toctree::
     :maxdepth: 2
     :caption: Worker
     :hidden:
 
+    worker/start_worker
+    worker/monitoring
     worker/configuration/index
 
 
@@ -0,0 +1,13 @@
+Monitoring the Celery Worker
+============================
+
+Each run in the system is linked to a log URL where the Celery worker logs are available. This log URL might point to an Elastic instance or another logging tool such as Grafana. The log URL is generated from a template configured in the backend.
+
+The configuration parameter is:
+
+.. code-block:: bash
+
+    SYNCMASTER__SERVER__LOG_URL_TEMPLATE=https://grafana.example.com?correlation_id={{ correlation_id }}&run_id={{ run.id }}
+
+You can search for each run either by its correlation id (the ``CORRELATION_CELERY_HEADER_ID`` HTTP header) or by the ``Run.Id``.
+
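The template above uses Jinja-style ``{{ ... }}`` placeholders. As a hedged illustration of how such a template expands into a concrete log URL, here is a minimal stdlib sketch; the ``render_log_url`` helper is illustrative, not SyncMaster's actual rendering code:

```python
# Sketch: expand Jinja-style {{ ... }} placeholders in a log URL template.
# Illustrative only; SyncMaster renders the template on the backend side.
import re

def render_log_url(template: str, values: dict) -> str:
    # Replace each {{ key }} with the matching value from `values`.
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(values[m.group(1)]),
        template,
    )

template = "https://grafana.example.com?correlation_id={{ correlation_id }}&run_id={{ run.id }}"
url = render_log_url(template, {"correlation_id": "abc-123", "run.id": 42})
print(url)  # https://grafana.example.com?correlation_id=abc-123&run_id=42
```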
@@ -0,0 +1,51 @@
+Starting the Celery Worker
+==========================
+
+.. note::
+
+    Before starting the worker you need to create a queue.
+    A queue is created by sending a POST request to the ``/queues`` endpoint (see the Swagger doc for details).
+
+
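As a hedged illustration of that POST request, the snippet below prepares it with the standard library. The backend base URL and the payload fields (``name``, ``group_id``) are assumptions for illustration — consult the Swagger doc for the real request schema:

```python
# Sketch: prepare a queue-creation request for the backend API.
# Base URL and payload fields are hypothetical; see the Swagger doc.
import json
from urllib.request import Request

payload = {"name": "my_queue", "group_id": 1}  # hypothetical fields
req = Request(
    "http://localhost:8000/queues",            # hypothetical backend URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; here we only inspect it.
print(req.get_method(), req.full_url)
```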
+With docker
+-----------
+
+Installation process
+~~~~~~~~~~~~~~~~~~~~
+
+Docker will download the SyncMaster worker and broker images, and run them.
+Options can be set via the ``.env`` file or the ``environment`` section in ``docker-compose.yml``.
+
+.. dropdown:: ``docker-compose.yml``
+
+    .. literalinclude:: ../../docker-compose.yml
+
+.. dropdown:: ``.env.docker``
+
+    .. literalinclude:: ../../.env.docker
+
+To start the worker container, run:
+
+.. code-block:: bash
+
+    docker compose up worker -d --wait --wait-timeout 200
+
+
+
+Without docker
+--------------
+
+To start the worker, run:
+
+.. code-block:: bash
+
+    python -m celery -A syncmaster.worker.celery worker
+
+You can specify options like concurrency and queues by adding additional flags:
+
+.. code-block:: bash
+
+    celery -A syncmaster.worker.celery worker --concurrency=4 --max-tasks-per-child=1 --loglevel=info
+
+
+Refer to the `Celery <https://docs.celeryq.dev/en/stable/>`_ documentation for more advanced start options.