Commit 9a5028c

Fix mermaid error

1 parent 9bc2dde

2 files changed: +13 additions, -11 deletions

docs/03_training_loop___trainer___.md (2 additions & 3 deletions)

````diff
@@ -114,7 +114,7 @@ sequenceDiagram
 Trainer->>Trainer: Calculate loss(y_pred, y)
 Trainer->>+OptimizerScheduler: Backpropagate loss & Update weights (optimizer.step())
 OptimizerScheduler-->>-Trainer: Weights updated
-Trainer->>-DataLoaderTrain: Repeat for all batches in dl_train
+Trainer->>DataLoaderTrain: Repeat for all batches in dl_train
 Trainer-->>-Trainer: Return average train_loss for epoch
 
 Trainer->>+Trainer: val_epoch(dl_val) # Ask Trainer to validate
@@ -123,7 +123,7 @@ sequenceDiagram
 Trainer->>+Model: Forward pass: model(x) (No gradient tracking)
 Model-->>-Trainer: Return prediction y_pred
 Trainer->>Trainer: Calculate loss(y_pred, y)
-Trainer->>-DataLoaderVal: Repeat for all batches in dl_val
+Trainer->>DataLoaderVal: Repeat for all batches in dl_val
 Trainer-->>-Trainer: Return average val_loss for epoch
 
 Trainer->>+LoggerEarlyStop: Log metrics (train_loss, val_loss, lr)
@@ -141,7 +141,6 @@ sequenceDiagram
 alt Training Finished Normally or Early Stopped
 Trainer-->>-RunFunc: Return final_val_loss
 end
-end
 ```
````
This diagram shows the cycle: for each epoch, the `Trainer` calls `train_epoch` (which iterates through training batches, performs forward/backward passes, and updates weights) and `val_epoch` (which iterates through validation batches and calculates loss without updating weights). After each epoch, it logs metrics, checks for early stopping, and adjusts the learning rate.
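
A minimal sketch of that cycle may help, assuming a PyTorch-style setup; the `criterion`, `logger`, `scheduler`, and `early_stopper` objects here are hypothetical stand-ins for the project's actual components, so this is an illustration of the flow rather than the real `Trainer`:

```python
import torch

def train(model, dl_train, dl_val, criterion, optimizer, scheduler,
          logger, early_stopper, epochs):
    """Epoch cycle from the diagram above (sketch only, not the project's Trainer)."""
    val_loss = float("inf")
    for epoch in range(epochs):
        # train_epoch: forward pass, loss, backpropagation, weight update per batch
        model.train()
        train_losses = []
        for x, y in dl_train:
            y_pred = model(x)                # forward pass: model(x)
            loss = criterion(y_pred, y)      # calculate loss(y_pred, y)
            optimizer.zero_grad()
            loss.backward()                  # backpropagate loss
            optimizer.step()                 # update weights
            train_losses.append(loss.item())
        train_loss = sum(train_losses) / len(train_losses)

        # val_epoch: forward passes only, no gradient tracking, no weight updates
        model.eval()
        with torch.no_grad():
            val_losses = [criterion(model(x), y).item() for x, y in dl_val]
        val_loss = sum(val_losses) / len(val_losses)

        # log metrics, check early stopping, adjust the learning rate
        logger.log(epoch=epoch, train_loss=train_loss,
                   val_loss=val_loss, lr=scheduler.get_last_lr()[0])
        if early_stopper.should_stop(val_loss):
            break
        scheduler.step()
    return val_loss
```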

docs/06_pruning_strategy___pflpruner___.md (11 additions & 8 deletions)

````diff
@@ -103,30 +103,33 @@ sequenceDiagram
 loop For each Epoch (within trainer.train)
 Trainer->>Trainer: Run train_epoch()
 Trainer->>Trainer: Run val_epoch()
-Trainer->>+PFLPruner: pruner.report(trial_id=N, epoch=E, value=val_loss)
+Trainer->>+PFLPruner: pruner.report(trial_id=N, epoch=E, value=val_loss) # PFLPruner activation starts
 PFLPruner->>+TrialState: Update loss history for Trial N
 TrialState-->>-PFLPruner: Done
 PFLPruner->>PFLPruner: Calculate Predicted Final Loss (PFL)
 PFLPruner->>PFLPruner: Compare PFL with Top-K finished trials
-PFLPruner->>+PFLPruner: pruner.should_prune() ?
+PFLPruner->>PFLPruner: Check if should_prune() is True?
 alt Pruning conditions met
+# Returns True in response to the report call and deactivates PFLPruner
 PFLPruner-->>-Trainer: Return True
 Trainer->>Trainer: Raise optuna.TrialPruned exception
-Trainer-->>-ObjectiveFunc: Exception caught
-ObjectiveFunc-->>-OptunaStudy: Report Trial N as Pruned
+Trainer-->>-ObjectiveFunc: Exception caught # Trainer deactivated
+ObjectiveFunc-->>-OptunaStudy: Report Trial N as Pruned # ObjectiveFunc deactivated
+# The loop may stop here (explicit notation such as break does not exist in standard Mermaid)
 else Pruning conditions NOT met
+# Returns False in response to the report call and deactivates PFLPruner
 PFLPruner-->>-Trainer: Return False
 Trainer->>Trainer: Continue to next epoch...
 end
 end
-alt Trial Finishes Normally (or Early Stopping)
+alt Trial Finishes Normally (or Early Stopping outside pruning)
+# Trainer deactivated after the loop ends (normal completion or Early Stopping)
 Trainer-->>-ObjectiveFunc: Return final_val_loss
 ObjectiveFunc->>+PFLPruner: pruner.complete_trial(trial_id=N)
 PFLPruner->>PFLPruner: Update Top-K completed trials if necessary
-PFLPruner-->>-ObjectiveFunc: Done
-ObjectiveFunc-->>-OptunaStudy: Report Trial N result (final_val_loss)
+PFLPruner-->>-ObjectiveFunc: Done # PFLPruner deactivated
+ObjectiveFunc-->>-OptunaStudy: Report Trial N result (final_val_loss) # ObjectiveFunc deactivated
 end
-end
 ```
````
1. **Setup:** When `main.py` sets up the Optuna study ([Chapter 5]), it also creates the `PFLPruner` instance based on the `optimize_config.yaml`. This pruner instance is passed down through the `objective` function to the `util.run` function, and finally to the `Trainer` when it's initialized for a specific trial.
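
A short sketch of this handoff, assuming hypothetical signatures for `pruner.report` and `pruner.complete_trial`, and with `run` and `sample_hyperparameters` as placeholders for the project's actual functions (only the call names and arguments come from the diagram; the real code may differ):

```python
import optuna

def check_pruning(pruner, trial_id: int, epoch: int, val_loss: float) -> None:
    """Trainer-side check, run once per epoch (hypothetical helper)."""
    # report() records val_loss in the trial's loss history, computes the
    # Predicted Final Loss (PFL), compares it with the Top-K finished
    # trials, and returns True when the trial should be pruned.
    if pruner is not None and pruner.report(trial_id=trial_id, epoch=epoch,
                                            value=val_loss):
        raise optuna.TrialPruned  # caught and recorded by the Optuna study

def objective(trial: optuna.Trial, pruner) -> float:
    """Objective-side wiring (sketch; helper names are placeholders)."""
    params = sample_hyperparameters(trial)
    # run() builds the Trainer for this trial and passes the pruner down;
    # a TrialPruned raised inside propagates out and marks the trial pruned.
    final_val_loss = run(params, trial_id=trial.number, pruner=pruner)
    # Trial finished normally (or via early stopping): let the pruner
    # update its Top-K list of completed trials before returning the result.
    pruner.complete_trial(trial_id=trial.number)
    return final_val_loss
```

With this shape, something like `study.optimize(lambda t: objective(t, pruner), n_trials=...)` would drive the loop shown in the diagram.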
