
Commit bfa02eb

[Deprecations] clear usage of deprecated package, function, and field (#1860)
SUMMARY:
Clear out some deprecation warnings in the logs:

- [x] Replace `pynvml` with `nvidia-ml-py` (which has the same API; a minimal drop-in check is sketched below):

  ```
  ../../../../../opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/torch/cuda/__init__.py:63
    /opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
      import pynvml  # type: ignore[import]
  ```

- [x] Replace `safe_permute(x, y, dim=d)` with `x.index_select(dim=d, index=y)`:

  ```
  tests/llmcompressor/observers/test_min_max.py::test_g_idx
    /opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/llmcompressor/observers/base.py:172: DeprecationWarning: safe_permute is deprecated and will be removed in a future release. Please use Tensor.index_select instead.
      observed = safe_permute(observed, perm, dim=1)
  ```

- [x] Only display the warning for `MagnitudePruningModifier.leave_enabled` if the field has been set to something other than its default value:

  ```
  /opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/llmcompressor/modifiers/pruning/magnitude/base.py:43: DeprecationWarning: MagnitudePruningModifier.leave_enable has been deprecated
    warnings.warn(
  ```

TEST PLAN:
No net new source code; tests pass.

Signed-off-by: Brian Dellabetta <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
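Because `nvidia-ml-py` ships the same `pynvml` module name, consumers such as `torch.cuda` need no code changes after the dependency swap. A minimal drop-in check, assuming an NVIDIA driver and at least one visible GPU (illustrative sketch, not part of this commit):

```python
import pynvml  # module is provided by the nvidia-ml-py distribution

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first visible GPU
name = pynvml.nvmlDeviceGetName(handle)         # device name
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # total/used/free bytes
print(name, f"{mem.used / 2**20:.0f}/{mem.total / 2**20:.0f} MiB used")
pynvml.nvmlShutdown()
```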

File tree

3 files changed: +12 −7 lines


setup.py

Lines changed: 5 additions & 1 deletion
@@ -133,7 +133,11 @@ def localversion_func(version: ScmVersion) -> str:
             if BUILD_TYPE == "release"
             else "accelerate>=1.6.0"
         ),
-        ("pynvml>=11.5.3,<=13.0.1" if BUILD_TYPE == "release" else "pynvml>=11.5.3"),
+        (
+            "nvidia-ml-py>=12.560.30,<=13.580.82"
+            if BUILD_TYPE == "release"
+            else "nvidia-ml-py>=12.560.30"
+        ),
         ("pillow>=10.4.0,<=11.3.0" if BUILD_TYPE == "release" else "pillow>=10.4.0"),
         (
             "compressed-tensors==0.11.0"

src/llmcompressor/modifiers/pruning/magnitude/base.py

Lines changed: 6 additions & 4 deletions
@@ -40,10 +40,12 @@ class MagnitudePruningModifier(Modifier, LayerParamMasking):

     @field_validator("leave_enabled")
     def validate_leave_enabled(value: bool) -> bool:
-        warnings.warn(
-            "MagnitudePruningModifier.leave_enable has been deprecated",
-            DeprecationWarning,
-        )
+        if value:
+            warnings.warn(
+                "MagnitudePruningModifier.leave_enabled has been deprecated "
+                "and will be set to False.",
+                DeprecationWarning,
+            )
         return False

     def on_initialize(self, state: State, **kwargs) -> bool:
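The gating pattern above (warn only when the caller explicitly opts in to the deprecated field) can be reproduced with a small standalone pydantic model. This is a hedged sketch; `ToyPruningConfig` and its field are invented for illustration and are not repository code:

```python
import warnings

from pydantic import BaseModel, field_validator


class ToyPruningConfig(BaseModel):
    """Hypothetical config mimicking the deprecation pattern in the diff above."""

    leave_enabled: bool = False  # deprecated field, defaults to False

    @field_validator("leave_enabled")
    @classmethod
    def validate_leave_enabled(cls, value: bool) -> bool:
        # Warn only if the user explicitly set the deprecated option to True
        if value:
            warnings.warn(
                "leave_enabled has been deprecated and will be set to False.",
                DeprecationWarning,
            )
        return False  # force the deprecated behavior off either way


ToyPruningConfig()                    # default value: no warning emitted
ToyPruningConfig(leave_enabled=True)  # emits DeprecationWarning
```

Since pydantic does not run validators on unset defaults, the warning now fires only when the deprecated option is explicitly enabled.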

src/llmcompressor/observers/base.py

Lines changed: 1 addition & 2 deletions
@@ -10,7 +10,6 @@
 )
 from compressed_tensors.quantization.utils import is_fp4
 from compressed_tensors.registry.registry import RegistryMixin
-from compressed_tensors.utils import safe_permute
 from loguru import logger
 from torch import FloatTensor, IntTensor, Tensor

@@ -169,7 +168,7 @@ def get_qparams(
         group_sizes = group_sizes[torch.argsort(group_indices)]

         perm = torch.argsort(g_idx)
-        observed = safe_permute(observed, perm, dim=1)
+        observed = observed.index_select(dim=1, index=perm)

         # TODO: experiment with vectorizing for loop for performance
         end = 0
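For illustration (not repository code), the replaced call and its `index_select` equivalent produce the same column reordering that `get_qparams` uses to make grouped columns contiguous; the toy `observed` tensor and `g_idx` mapping below are invented:

```python
import torch

# Toy "observed" activations: 2 rows, 6 columns
observed = torch.arange(12, dtype=torch.float32).reshape(2, 6)

# Hypothetical g_idx mapping each column to a quantization group, out of order
g_idx = torch.tensor([1, 0, 1, 0, 2, 2])

# argsort yields a permutation that puts columns of the same group next to each other
perm = torch.argsort(g_idx)

# Equivalent to the deprecated safe_permute(observed, perm, dim=1)
reordered = observed.index_select(dim=1, index=perm)

# Plain advanced indexing along dim=1 gives the same result
assert torch.equal(reordered, observed[:, perm])
print(g_idx[perm])  # tensor([0, 0, 1, 1, 2, 2]) -> groups are now contiguous
```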
