[ENH] Proposal: Unified ROCKET Interface with 'device' parameter #3174

@Adityakushwaha2006

Description

@Adityakushwaha2006

Describe the feature or idea you want to propose

While I was deep-diving into the codebase to fix the ROCKET GPU consistency issue (which I am working on in parallel), I spent a lot of time tracing the execution flow between Rocket and ROCKETGPU.

One thing that really stood out to me was the friction in switching between hardware. Right now, if a user wants to move from CPU to GPU, they have to change their imports, rename the class, and rewrite their script. This feels redundant, and it exposes the internal architecture to the user rather than giving them a clean API.

Describe your proposed solution

The Proposal

I’ve been researching this extensively, and I propose we unify these two into a single interface. The user should just be able to express their intent.

What it would look like:

```python
# Default (legacy behavior)
Rocket(n_kernels=10000, device="cpu")

# Explicit GPU usage
Rocket(n_kernels=10000, device="gpu")

# Automatic selection (best available hardware)
Rocket(n_kernels=10000, device="auto")
```

Describe alternatives you've considered, if relevant

No response

Additional context

The Logic & "Nitty-Gritties"
I didn't want to suggest this until I was sure it wouldn't break anything, so I’ve researched the potential edge cases. Here is the implementation plan I’ve drafted to handle the complexities:

The "Switchboard" Architecture:

I plan to rename the existing Rocket to _RocketCPU (internal) so the original Numba logic remains strictly untouched.

The new Rocket class will act as a facade, dispatching the work to either the CPU or GPU backend based on the device parameter.

Handling Dependencies (The tricky part):

Lazy Loading: I’ve ensured that tensorflow is only imported if the user explicitly asks for device='gpu'. CPU users won't carry that overhead.

Pickling: Since GPU objects (TensorFlow) can't be pickled, I’ve designed the wrapper to raise clear, helpful errors for the GPU variant while keeping the CPU variant fully pickle-safe.

Parameter Mismatches: I noticed CPU and GPU params differ (e.g., n_jobs vs batch_size). I’ve mapped these out so the wrapper warns the user if they pass an incompatible parameter, rather than silently ignoring it.

Current Status
I have already prototyped this locally and verified that it maintains 100% backward compatibility with existing imports. I'll open a PR as soon as I can confirm that it works consistently and won't require further rework.

Metadata

Labels

enhancement: New feature, improvement request or other non-bug code enhancement
