Conversation

Contributor

Copilot AI commented Dec 4, 2025

Addresses feedback to avoid extending the std namespace and to make the template dependencies explicit for the AMD platform handle cache.

Changes:

  • Moved the std::hash<cudaIpcMemHandle_t> specialization to a CudaIpcMemHandleHash functor in an anonymous namespace
  • Moved the global operator== to a CudaIpcMemHandleEqual functor in the same anonymous namespace
  • Added noexcept to the equality operator (memcmp is non-throwing)
  • Updated the std::unordered_map declaration to pass the custom hash and equality functors explicitly as template parameters (a fuller sketch follows the After snippet below)

Before:

namespace std {
template <>
struct hash<cudaIpcMemHandle_t> { /* ... */ };
}

inline bool operator==(const cudaIpcMemHandle_t& lhs, const cudaIpcMemHandle_t& rhs) { /* ... */ }

std::unordered_map<cudaIpcMemHandle_t, std::weak_ptr<void>> peerMemoryHandleMap;

After:

namespace {
struct CudaIpcMemHandleHash { /* ... */ };
struct CudaIpcMemHandleEqual { 
  bool operator()(/* ... */) const noexcept { /* ... */ }
};

std::unordered_map<cudaIpcMemHandle_t, std::weak_ptr<void>, 
                   CudaIpcMemHandleHash, CudaIpcMemHandleEqual> peerMemoryHandleMap;
}
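
For reference, a minimal self-contained sketch of what the After shape could look like once the elided bodies are filled in. The locally defined handle struct, its 64-byte reserved field, and the FNV-1a byte hash are assumptions for illustration only; the real cudaIpcMemHandle_t comes from the CUDA/HIP runtime headers, and the actual change may hash the bytes differently.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <memory>
#include <unordered_map>

// Hypothetical stand-in for the runtime type; the real definition comes from
// cuda_runtime_api.h (or the HIP equivalent on AMD), where the handle is an
// opaque fixed-size POD blob.
struct cudaIpcMemHandle_t {
  char reserved[64];
};

namespace {

// Byte-wise FNV-1a hash over the opaque handle; any stable byte-wise hash
// would work, this one is just an illustration.
struct CudaIpcMemHandleHash {
  std::size_t operator()(const cudaIpcMemHandle_t& handle) const noexcept {
    std::uint64_t h = 1469598103934665603ull;  // FNV offset basis
    const auto* bytes = reinterpret_cast<const unsigned char*>(&handle);
    for (std::size_t i = 0; i < sizeof(handle); ++i) {
      h = (h ^ bytes[i]) * 1099511628211ull;   // FNV prime
    }
    return static_cast<std::size_t>(h);
  }
};

// Byte-wise equality; std::memcmp never throws, hence noexcept.
struct CudaIpcMemHandleEqual {
  bool operator()(const cudaIpcMemHandle_t& lhs,
                  const cudaIpcMemHandle_t& rhs) const noexcept {
    return std::memcmp(&lhs, &rhs, sizeof(cudaIpcMemHandle_t)) == 0;
  }
};

// Cache keyed by IPC handle, with the hash and equality functors passed
// explicitly as template parameters instead of specializing std::hash.
std::unordered_map<cudaIpcMemHandle_t, std::weak_ptr<void>,
                   CudaIpcMemHandleHash, CudaIpcMemHandleEqual>
    peerMemoryHandleMap;

}  // namespace

Keeping the functors out of namespace std also sidesteps the question of whether specializing std::hash for a type owned by the vendor runtime is legitimate, and the map declaration now states its hashing and equality dependencies at the point of use.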


@Binyang2014 marked this pull request as ready for review December 4, 2025 19:41
Copilot AI changed the title from "[WIP] Address feedback on handle cache implementation" to "Move cudaIpcMemHandle_t hash and equality to custom namespace" Dec 4, 2025
@Binyang2014 merged commit bf513b4 into binyli/handle_cache Dec 4, 2025
4 of 5 checks passed
@Binyang2014 deleted the copilot/sub-pr-698-again branch December 4, 2025 19:41
Copilot AI requested a review from Binyang2014 December 4, 2025 19:42