
Commit 015c706

Merge branch 'main' into history-buffer-service
2 parents 66589a6 + e766396

File tree: 1 file changed (+0, -51 lines)

CLAUDE.md (0 additions, 51 deletions)
@@ -12,25 +12,6 @@ ESSlivedata is a live data reduction visualization framework for the European Sp

**IMPORTANT**: In the devcontainer, this project uses micromamba with Python 3.11 in the base environment. The environment is automatically activated - you do not need to activate it manually.

-For manual setup outside the devcontainer (if needed):
-
-```sh
-# Create virtual environment with Python 3.11
-python3.11 -m venv venv
-
-# Activate the virtual environment
-source venv/bin/activate
-
-# Install all development dependencies
-pip install -r requirements/dev.txt
-
-# Install package in editable mode
-pip install -e .
-
-# Setup pre-commit hooks (automatically runs on git commit)
-pre-commit install
-```
-
**Note**: In the devcontainer, all Python commands (`python`, `pytest`, `tox`, etc.) automatically use the micromamba base environment. Pre-commit hooks will run automatically on `git commit` if properly installed.

### Running Tests
@@ -72,9 +53,6 @@ python -m pylint src/ess/livedata
python -m pylint --disable=C0114,C0115,C0116 src/ess/livedata
# Run pylint on specific file
python -m pylint src/ess/livedata/core/message.py
-
-# Type checking with mypy (minimize errors, but not strictly enforced)
-tox -e mypy
```

**Note**: The project primarily relies on `ruff` for linting.
@@ -138,8 +116,6 @@ python -m ess.livedata.services.timeseries --instrument dummy --dev
python -m ess.livedata.dashboard.reduction --instrument dummy
```

-Note: Use `--sink png` argument with processing services to save outputs as PNG files instead of publishing to Kafka for testing.
-
## Architecture Overview

### Core Architecture Pattern
@@ -185,7 +161,6 @@ Kafka Topics → MessageSource → Processor → Preprocessor → JobManager →
**Configuration** (`src/ess/livedata/config/`):
- `config_loader.py`: Loads YAML/Jinja2 configurations per instrument
- `instruments/`: Instrument-specific configurations (DREAM, Bifrost, LOKI, etc.)
-- `workflows.py`: Workflow definitions using sciline workflows

**Handlers** (`src/ess/livedata/handlers/`):
- `detector_data_handler.py`: Preprocessor factory for detector events
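
For reference, the removed `workflows.py` bullet points to sciline-based workflow definitions. The sketch below shows generic sciline usage only; the domain types, providers, and file name are invented for illustration and are not taken from ESSlivedata's actual `workflows.py`.

```python
# Generic sciline usage (illustration only, not ESSlivedata code): providers are
# plain functions whose type annotations define the workflow graph.
from typing import NewType

import sciline

Filename = NewType('Filename', str)
RawCounts = NewType('RawCounts', list)
NormalizedCounts = NewType('NormalizedCounts', list)


def load(filename: Filename) -> RawCounts:
    # Stand-in loader; a real provider would read detector data from the file.
    return RawCounts([1.0, 2.0, 3.0])


def normalize(raw: RawCounts) -> NormalizedCounts:
    total = sum(raw)
    return NormalizedCounts([x / total for x in raw])


pipeline = sciline.Pipeline([load, normalize], params={Filename: 'run_001.h5'})
result = pipeline.compute(NormalizedCounts)
```

Providers declare their inputs and outputs through type annotations, and the pipeline assembles the task graph from those types.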
@@ -200,7 +175,6 @@ Kafka Topics → MessageSource → Processor → Preprocessor → JobManager →
- `ConfigService`: Central configuration management with Pydantic models
- `DataService`: Manages data streams and notifies subscribers
- `WorkflowController`: Orchestrates workflow configuration and execution
-- `ConfigBackedParam`: Translation layer between Param widgets and Pydantic models

### Dashboard Architecture

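The `DataService` bullet above describes a component that manages data streams and notifies subscribers. The following is a purely hypothetical publish/subscribe sketch of that pattern; the class and method names are invented and do not reflect the real `DataService` API.

```python
# Illustrative publish/subscribe pattern only; not the real DataService interface.
from collections import defaultdict
from typing import Any, Callable


class StreamBroker:
    """Hypothetical stand-in that maps stream names to subscriber callbacks."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, stream: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[stream].append(callback)

    def publish(self, stream: str, data: Any) -> None:
        # Notify every subscriber registered for this stream.
        for callback in self._subscribers[stream]:
            callback(data)


broker = StreamBroker()
broker.subscribe('detector_counts', lambda data: print('update:', data))
broker.publish('detector_counts', {'counts': 42})
```
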
@@ -233,18 +207,6 @@ For in-depth understanding of ESSlivedata's architecture, see the following desi
## Service Factory Pattern

New services are created using `DataServiceBuilder`:
-
-```python
-builder = DataServiceBuilder(
-    instrument='dummy',
-    name='my_service',
-    preprocessor_factory=MyPreprocessorFactory(),
-    adapter=MyMessageAdapter()  # optional
-)
-service = builder.build_from_config(topics=[...])
-service.start()
-```
-
All services use `OrchestratingProcessor` for job-based processing. See [src/ess/livedata/service_factory.py](src/ess/livedata/service_factory.py) for details.

## Configuration System
@@ -297,19 +259,6 @@ def simple_method(self) -> int:

## Important Patterns

-### Adding a New Service
-
-1. Create preprocessor factory extending `JobBasedPreprocessorFactoryBase[Tin, Tout]`
-2. Implement `make_preprocessor()` to create accumulators for different stream types
-3. Register workflows with the instrument configuration
-4. Use `DataServiceBuilder` to construct service
-5. Add service module in `services/`
-6. Add instrument configuration in `config/defaults/`
-
-### Adding Dashboard Widgets
-
-1. Workflow and plotter configuration uses widgets generated from Pydantic model for validation on the frontend and serialization for Kafka communication
-
### Message Processing

- Messages have `timestamp`, `stream`, and `value` fields
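
The removed "Adding a New Service" steps describe a factory-based pattern. The sketch below illustrates steps 1, 2, and 4 under stated assumptions: the base class, accumulator, and method signatures are local stand-ins, since the real `ess.livedata` interfaces are not shown in this diff and may differ.

```python
# Self-contained sketch of the removed steps; every class here is a stand-in,
# not the real ess.livedata API.
from typing import Generic, TypeVar

Tin = TypeVar('Tin')
Tout = TypeVar('Tout')


class JobBasedPreprocessorFactoryBase(Generic[Tin, Tout]):
    """Local stand-in for the real base class (step 1 extends it)."""

    def make_preprocessor(self, stream_type: str):
        raise NotImplementedError


class CountsAccumulator:
    """Hypothetical accumulator that a preprocessor factory might return (step 2)."""

    def __init__(self) -> None:
        self.total = 0

    def add(self, value: int) -> None:
        self.total += value


class MyPreprocessorFactory(JobBasedPreprocessorFactoryBase[int, int]):
    def make_preprocessor(self, stream_type: str) -> CountsAccumulator:
        # A real factory would choose an accumulator based on the stream type.
        return CountsAccumulator()


factory = MyPreprocessorFactory()
accumulator = factory.make_preprocessor('detector_events')
accumulator.add(5)
```

In step 4 the factory instance would then be passed to `DataServiceBuilder` as `preprocessor_factory=...`, as in the removed example under "Service Factory Pattern" above.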
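
The removed "Adding Dashboard Widgets" item says widget forms are generated from Pydantic models, which also provide validation and serialization for Kafka communication. A generic Pydantic sketch of that idea, with invented field names and constraints:

```python
# Generic Pydantic model illustrating validation plus JSON round-tripping; the
# field names and bounds are invented, not ESSlivedata's actual widget models.
from pydantic import BaseModel, Field


class PlotterParams(BaseModel):
    num_bins: int = Field(default=100, ge=1, le=10_000)
    log_scale: bool = False
    title: str = 'Counts'


params = PlotterParams(num_bins=256)
payload = params.model_dump_json()                     # e.g. the body of a Kafka message
restored = PlotterParams.model_validate_json(payload)  # validated when read back
```
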
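Finally, the "Message Processing" context line lists the three message fields. A minimal dataclass sketch of that shape follows; the real class in `src/ess/livedata/core/message.py` may differ in field types and additional attributes.

```python
# Minimal sketch of the described message shape; field types are assumptions.
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar('T')


@dataclass(frozen=True)
class Message(Generic[T]):
    timestamp: int   # event or publication time; the unit is an assumption
    stream: str      # identifies the originating stream
    value: T         # payload, e.g. detector counts or reduced data


msg = Message(timestamp=0, stream='detector_data', value={'counts': [1, 2, 3]})
```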
