
Conversation

ParamThakkar123 (Contributor) commented on Jun 18, 2025

Description

Added EXP3 Scoring function in continuation with pr #2358

Motivation and Context

Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax close #15213 if this solves the issue #15213

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

Vincent Moens and others added 29 commits August 4, 2024 17:09
[ghstack-poisoned]
pytorch-bot (bot) commented on Jun 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3013

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Cancelled Job, 1 Unrelated Failure

As of commit 7344bcb with merge base 9d9f6cb:

NEW FAILURE - The following job has failed:

CANCELLED JOB - The following job was cancelled. Please retry:

BROKEN TRUNK - The following job failed but was already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jun 18, 2025
ParamThakkar123 (Contributor, Author) commented:

@vmoens I implemented the EXP3 algorithm, continuing from #2358. Can you please review this?

vmoens (Collaborator) left a comment

LG
Just needs tests, docstrings, and an entry in the docs (see the docs/ directory, where you'll need to manually add the classes where they fit; I can help if it's unclear).

ParamThakkar123 (Contributor, Author) commented:

@vmoens I branched out from your PR branch. I will be adding more docstrings, tests, and the other two methods that are yet to be implemented.

ParamThakkar123 requested a review from vmoens on June 20, 2025 at 06:56
vmoens changed the title from "Added EXP3 Scoring function in continuation with pr #2358" to "[Feature] Added EXP3 Scoring function in continuation with pr #2358" on Jun 20, 2025
ParamThakkar123 (Contributor, Author) commented:

Yes @vmoens, all the tests I have added cover all three scoring functions.

ParamThakkar123 requested a review from vmoens on June 20, 2025 at 13:44
vmoens (Collaborator) commented on Jan 14, 2026

Can you look at #6b57d53?

Here's a detailed breakdown of what I changed:

1. torchrl/modules/mcts/scores.py - Core Implementation Fixes

1.1 Added warnings import and removed unused nn import

# Before:
import torch
from tensordict import NestedKey, TensorDictBase
from tensordict.nn import TensorDictModuleBase
from torch import nn  # UNUSED

# After:
import warnings  # ADDED
import torch
from tensordict import NestedKey, TensorDictBase
from tensordict.nn import TensorDictModuleBase
# Removed: from torch import nn

1.2 Added type annotation and docstring to MCTSScore base class

# Before:
class MCTSScore(TensorDictModuleBase):
    @abstractmethod
    def forward(self, node):
        pass

# After:
class MCTSScore(TensorDictModuleBase):
    """Abstract base class for MCTS score computation modules."""

    @abstractmethod
    def forward(self, node: TensorDictBase) -> TensorDictBase:
        pass

1.3 Fixed PUCTScore.forward to handle batched inputs

The original code didn't broadcast n_total properly when it had fewer dimensions than visits.

# Before:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    win_count = node.get(self.win_count_key)
    visits = node.get(self.visits_key)
    n_total = node.get(self.total_visits_key)
    prior_prob = node.get(self.prior_prob_key)
    node.set(
        self.score_key,
        (win_count / visits) + self.c * prior_prob * n_total.sqrt() / (1 + visits),
    )
    return node

# After:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    win_count = node.get(self.win_count_key)
    visits = node.get(self.visits_key)
    n_total = node.get(self.total_visits_key)
    prior_prob = node.get(self.prior_prob_key)
    # Handle broadcasting for batched inputs
    if n_total.ndim > 0 and n_total.ndim < visits.ndim:
        n_total = n_total.unsqueeze(-1)
    node.set(
        self.score_key,
        (win_count / visits) + self.c * prior_prob * n_total.sqrt() / (1 + visits),
    )
    return node
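
For context, a minimal plain-torch sketch of the shape mismatch the added check guards against (tensor names and sizes are illustrative, not the PR's fixtures):

import torch

visits = torch.ones(4, 3)          # batch of 4 nodes, 3 actions each
n_total = torch.full((4,), 10.0)   # one total-visit count per node

# (4,) does not broadcast against (4, 3): the trailing dims 4 vs 3 mismatch,
# so n_total.sqrt() / (1 + visits) would raise a shape error.
if n_total.ndim > 0 and n_total.ndim < visits.ndim:
    n_total = n_total.unsqueeze(-1)  # (4, 1) broadcasts cleanly against (4, 3)

score_term = n_total.sqrt() / (1 + visits)  # shape (4, 3)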

1.4 Fixed UCBScore.forward to handle batched inputs (same issue)

# Before:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    win_count = node.get(self.win_count_key)
    visits = node.get(self.visits_key)
    n_total = node.get(self.total_visits_key)
    node.set(
        self.score_key,
        (win_count / visits) + self.c * n_total.sqrt() / (1 + visits),
    )
    return node

# After:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    win_count = node.get(self.win_count_key)
    visits = node.get(self.visits_key)
    n_total = node.get(self.total_visits_key)
    # Handle broadcasting for batched inputs
    if n_total.ndim > 0 and n_total.ndim < visits.ndim:
        n_total = n_total.unsqueeze(-1)
    node.set(
        self.score_key,
        (win_count / visits) + self.c * n_total.sqrt() / (1 + visits),
    )
    return node

1.5 Fixed EXP3Score.forward to handle batched num_actions tensors

The original code only accepted scalar tensors, but TensorDict requires tensors to match batch dimensions.

# Before:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    num_actions = node.get(self.num_actions_key)

    if self.weights_key not in node.keys(include_nested=True):
        batch_size = node.batch_size
        if isinstance(num_actions, torch.Tensor) and num_actions.numel() == 1:
            k = int(num_actions.item())
        elif isinstance(num_actions, int):
            k = num_actions
        else:
            raise ValueError(
                f"'{self.num_actions_key}' ('num_actions') must be an integer or a scalar tensor."
            )
        # ... rest of code with duplicate validation

# After:
def forward(self, node: TensorDictBase) -> TensorDictBase:
    num_actions = node.get(self.num_actions_key)

    # Extract scalar value from num_actions (handles batched tensors too)
    if isinstance(num_actions, torch.Tensor):
        # For batched tensors, take the first element (all should be same)
        k = int(num_actions.flatten()[0].item())
    elif isinstance(num_actions, int):
        k = num_actions
    else:
        raise ValueError(
            f"'{self.num_actions_key}' ('num_actions') must be an integer or a tensor."
        )

    if self.weights_key not in node.keys(include_nested=True):
        batch_size = node.batch_size
        weights_shape = (*batch_size, k)
        weights = torch.ones(weights_shape, device=node.device)
        node.set(self.weights_key, weights)
    else:
        weights = node.get(self.weights_key)

    k_from_weights = weights.shape[-1]
    if k_from_weights != k:
        raise ValueError(
            f"Shape of weights {weights.shape} implies {k_from_weights} actions, "
            f"but num_actions is {k}."
        )
    # ... rest of code
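
The elided "rest of code" presumably turns the weights into per-action probabilities/scores; for reference, the standard EXP3 mixing rule (uniform exploration blended with the normalized weights) looks like the sketch below, which is illustrative and not necessarily the exact elided code:

import torch

def exp3_probabilities(weights: torch.Tensor, gamma: float) -> torch.Tensor:
    # Standard EXP3 rule: p_i = (1 - gamma) * w_i / sum_j(w_j) + gamma / K
    k = weights.shape[-1]
    normalized = weights / weights.sum(dim=-1, keepdim=True)
    return (1 - gamma) * normalized + gamma / k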

1.6 Fixed error message format inconsistency

# Before (missing space after period):
f"Shape of weights {weights.shape} implies {k} actions."
f"but num_actions is {num_actions.item()}"

# After:
f"Shape of weights {weights.shape} implies {k} actions, "
f"but num_actions is {num_actions.item()}"

1.7 Fixed EXP3Score.update_weights - Changed exceptions to warnings

The tests expected UserWarning but the code raised ValueError. Also fixed dead code after raise.

# Before:
if not (0 <= reward <= 1):
    raise ValueError(
        f"Reward {reward} is outside the expected [0, 1] range for EXP3."
    )
# ...
if torch.any(prob_i <= 0):
    raise ValueError(
        f"Probability p_i(t) for action {action_idx} is {prob_i}, which is <= 0."
        " This might lead to issues in weight update."
    )
    prob_i = torch.clamp(prob_i, min=1e-9)  # DEAD CODE - never executed!

# After:
if not (0 <= reward <= 1):
    warnings.warn(
        f"Reward {reward} is outside the expected [0,1] range for EXP3.",
        UserWarning,
    )
# ...
if torch.any(prob_i <= 0):
    prob_i_val = prob_i.item() if prob_i.numel() == 1 else prob_i
    warnings.warn(
        f"Probability p_i(t) for action {action_idx} is {prob_i_val}. "
        "Weight will not be updated for zero probability actions.",
        UserWarning,
    )
    # Don't update weights for zero probability - just return
    return
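
The zero-probability guard matters because the standard EXP3 update divides the observed reward by the probability of the played action before exponentiating; a sketch of that update (standard EXP3 with illustrative argument names, not a copy of the PR's code):

import torch

def exp3_update(weights: torch.Tensor, action_idx: int, reward: float, gamma: float) -> torch.Tensor:
    k = weights.shape[-1]
    probs = (1 - gamma) * weights / weights.sum(dim=-1, keepdim=True) + gamma / k
    # Importance-weighted reward estimate; this blows up if probs[..., action_idx] is 0.
    x_hat = reward / probs[..., action_idx]
    new_weights = weights.clone()
    new_weights[..., action_idx] = weights[..., action_idx] * torch.exp(gamma * x_hat / k)
    return new_weights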

1.8 Fixed UCB1TunedScore.forward - Result not saved

# Before (result discarded!):
v_i_v = empirical_variance_v + bias_correction_v
v_i_v.clamp(min=0)  # Does nothing!

# After:
v_i_v = empirical_variance_v + bias_correction_v
v_i_v = v_i_v.clamp(min=0)  # Now saves the result
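
This is the usual out-of-place vs. in-place distinction in PyTorch: Tensor.clamp returns a new tensor, while Tensor.clamp_ modifies the tensor in place, so either reassigning the result or using the underscore variant works:

import torch

v = torch.tensor([-0.3, 0.7])
v.clamp(min=0)       # returns tensor([0.0, 0.7]) but leaves v unchanged
v = v.clamp(min=0)   # out-of-place: keep the returned tensor
v.clamp_(min=0)      # in-place alternative: mutates v directly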

1.9 Removed PUCT_VARIANT placeholder and added docstring to MCTSScores enum

# Before:
class MCTSScores(Enum):
    PUCT = functools.partial(PUCTScore, c=5)
    UCB = functools.partial(UCBScore, c=math.sqrt(2))
    UCB1_TUNED = functools.partial(UCB1TunedScore, exploration_constant=2.0)
    EXP3 = functools.partial(EXP3Score, gamma=0.1)
    PUCT_VARIANT = "PUCT-Variant"  # Just a string placeholder!

# After:
class MCTSScores(Enum):
    """Enum providing factory functions for common MCTS score configurations."""

    PUCT = functools.partial(PUCTScore, c=5)
    UCB = functools.partial(UCBScore, c=math.sqrt(2))
    UCB1_TUNED = functools.partial(UCB1TunedScore, exploration_constant=2.0)
    EXP3 = functools.partial(EXP3Score, gamma=0.1)
    # Removed PUCT_VARIANT
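
Since each enum member wraps a functools.partial, a configured score module can be built straight from the enum; a minimal sketch (the .value() call pattern is an assumption about how the enum is meant to be consumed, not something stated in the diff):

from torchrl.modules.mcts.scores import MCTSScores

# MCTSScores.EXP3.value is functools.partial(EXP3Score, gamma=0.1),
# so calling it returns an EXP3Score module with that default gamma.
exp3 = MCTSScores.EXP3.value()
ucb = MCTSScores.UCB.value()   # UCBScore with c = sqrt(2)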

2. torchrl/modules/__init__.py - Added module exports

# Added import:
from .mcts import (  # usort:skip
    EXP3Score,
    MCTSScore,
    MCTSScores,
    PUCTScore,
    UCB1TunedScore,
    UCBScore,
)

# Added to __all__:
"EXP3Score",
"MCTSScore",
"MCTSScores",
"PUCTScore",
"UCB1TunedScore",
"UCBScore",

3. test/test_mcts.py - Test fixes

3.1 Fixed create_node helper for batched num_actions

# Before:
if batch_size:
    data = {
        custom_keys["num_actions_key"]: torch.tensor(
            [num_actions] * batch_size, device=device
        )
    }

# After:
if batch_size:
    # num_actions needs batch dimension to match TensorDict batch_size
    data = {
        custom_keys["num_actions_key"]: torch.full(
            (batch_size,), num_actions, device=device, dtype=torch.long
        )
    }
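
The underlying constraint is that every entry of a TensorDict with batch_size=[B] must have B as its leading dimension; a minimal sketch of what the helper builds (key name and sizes are illustrative):

import torch
from tensordict import TensorDict

batch_size, num_actions = 4, 3
node = TensorDict(
    {
        # Leading dim must equal batch_size; torch.full also pins dtype and device.
        "num_actions": torch.full((batch_size,), num_actions, dtype=torch.long),
    },
    batch_size=[batch_size],
)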

3.2 Added import for UCB1TunedScore

# Before:
from torchrl.modules.mcts.scores import EXP3Score, PUCTScore, UCBScore

# After:
from torchrl.modules.mcts.scores import EXP3Score, PUCTScore, UCB1TunedScore, UCBScore

3.3 Added create_ucb1_tuned_node helper function and TestUCB1TunedScore test class

Added comprehensive tests for UCB1TunedScore, which previously had no tests; an illustrative sketch of the shared test pattern follows the list below.

Test methods added:
  • test_initialization - Tests different exploration constants
  • test_forward_basic - Basic score computation
  • test_forward_unvisited_actions - Unvisited actions get large scores
  • test_forward_batch - Batched inputs work correctly
  • test_forward_variance_clamping - Variance is clamped properly
  • test_custom_keys - Custom key names work
  • test_exploration_vs_exploitation - Balance between exploration/exploitation
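
For illustration only, here is a sketch of the batched-forward pattern these tests share, written against EXP3Score because its keys appear in the snippets above; the default key names ("num_actions", "weights"), the gamma-only constructor, and the assumption that forward succeeds with only num_actions provided are inferred, not confirmed by the diff:

import torch
from tensordict import TensorDict
from torchrl.modules.mcts.scores import EXP3Score


def test_exp3_forward_initializes_weights_batched():
    batch_size, k = 4, 3
    module = EXP3Score(gamma=0.1)  # constructor signature assumed from MCTSScores.EXP3
    node = TensorDict(
        {"num_actions": torch.full((batch_size,), k, dtype=torch.long)},
        batch_size=[batch_size],
    )
    node = module(node)
    # Per the forward snippet above, missing weights are created as ones of shape (*batch, k).
    assert node["weights"].shape == (batch_size, k)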

Summary Table

File                 | Issue                                   | Fix
scores.py            | Unused nn import                        | Removed
scores.py            | Missing warnings import                 | Added
scores.py            | MCTSScore missing type hints            | Added type annotation and docstring
scores.py            | PUCTScore.forward batch broadcasting    | Added n_total.unsqueeze(-1) when needed
scores.py            | UCBScore.forward batch broadcasting     | Added n_total.unsqueeze(-1) when needed
scores.py            | EXP3Score.forward batched num_actions   | Handle batched tensors with flatten()[0]
scores.py            | Error message format                    | Fixed comma/space consistency
scores.py            | update_weights exceptions vs warnings   | Changed to warnings.warn and fixed dead code
scores.py            | UCB1TunedScore clamp result not saved   | Changed to v_i_v = v_i_v.clamp(min=0)
scores.py            | PUCT_VARIANT placeholder                | Removed
modules/__init__.py  | Missing module exports                  | Added mcts imports and __all__ entries
test_mcts.py         | create_node batch handling              | Use torch.full() for proper batch dims
test_mcts.py         | Missing UCB1TunedScore tests            | Added full test class

vmoens added the enhancement label on Jan 19, 2026
vmoens merged commit 7b8be97 into pytorch:main on Jan 25, 2026
107 of 110 checks passed