Conversation
AkimioJR
commented
Feb 6, 2026
- Fix creation/cleanup issues in the Go binding under multi-threading
- Use defer to clean up resources
Pull request overview
This PR optimizes the Go bindings for the CoralReefPlayer library by addressing multi-threading issues and improving resource cleanup. The changes focus on adding thread-safety through mutex synchronization and using Go's defer pattern for proper C memory cleanup.
Changes:
- Added sync.RWMutex to protect the shared callbacks map in multi-threaded scenarios
- Replaced manual C.free calls with defer statements for automatic resource cleanup
- Added mutex locking in goCallback, Play, and Destroy methods to prevent race conditions
binding/go/gcrp/gcrp.go
Outdated
```go
	defer mutex.Unlock()

	player.callbackId = lastId
	callbacks[lastId] = callback
	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(lastId)))
	lastId += 1
```
Deadlock risk: If C.crp_play or C.crp_destroy trigger synchronous callbacks before returning (which is possible in C libraries), and those callbacks try to acquire the mutex.RLock() in goCallback while the mutex is already held by Lock() here, this will deadlock. Consider using separate mutexes for callback registration vs. callback execution, or ensure the C library doesn't make synchronous callbacks.
Suggested change:
```diff
-	defer mutex.Unlock()
-	player.callbackId = lastId
-	callbacks[lastId] = callback
-	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(lastId)))
-	lastId += 1
+	id := lastId
+	player.callbackId = id
+	callbacks[id] = callback
+	lastId++
+	mutex.Unlock()
+	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(id)))
```
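The pattern in this suggestion, release the lock before calling into C so that a synchronous callback can acquire it, can be sketched in pure Go. Here `cPlay` is a hypothetical stand-in for `C.crp_play` that fires its callback synchronously, and `mu`, `callbacks`, and `lastID` mirror the binding's globals (a sketch under those assumptions, not the actual gcrp code):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu        sync.RWMutex
	callbacks = map[uint64]func(){}
	lastID    uint64
)

// cPlay stands in for C.crp_play: some C libraries invoke the
// callback synchronously, before the call returns.
func cPlay(id uint64) {
	mu.RLock()
	cb, ok := callbacks[id]
	mu.RUnlock()
	if ok {
		cb()
	}
}

func play(cb func()) {
	mu.Lock()
	id := lastID
	callbacks[id] = cb
	lastID++
	mu.Unlock() // release before calling into C, not via defer
	cPlay(id)   // a synchronous callback can now take RLock freely
}

func main() {
	play(func() { fmt.Println("callback fired") })
}
```

With `defer mutex.Unlock()` instead, the lock would still be held when `cPlay` runs, and the `RLock` inside it would deadlock against the writer.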
binding/go/gcrp/gcrp.go
Outdated
```go
	mutex.RLock()
	defer mutex.RUnlock()
```
The RLock is acquired after extracting the id from userData. If the callback is deleted between line 100 and line 101, the later callbacks[id] lookup returns nil and invoking it will panic. Consider moving the mutex.RLock() to before line 100, or add an existence check when accessing the map.
binding/go/gcrp/gcrp.go
Outdated
```go
	mutex.Lock()
	defer mutex.Unlock()
```
The mutex lock is acquired after C.crp_play is called, but C.crp_play may trigger callbacks immediately (including goCallback which reads from the callbacks map with RLock). This creates a race condition where goCallback might try to access callbacks[lastId] before it's been added to the map on line 192. Move the mutex.Lock() to before the coption struct initialization (before line 171) to protect the entire critical section.
binding/go/gcrp/gcrp.go
Outdated
```go
	player.callbackId = lastId
	callbacks[lastId] = callback
	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(lastId)))
	lastId += 1
```
Integer overflow risk: lastId is incremented indefinitely without bounds checking. After approximately 2^31 iterations (on 32-bit systems) or 2^63 iterations (on 64-bit systems), this will overflow. Consider using a safer ID generation strategy, such as checking for wrap-around or reusing IDs from deleted callbacks.
Suggested change:
```diff
-	player.callbackId = lastId
-	callbacks[lastId] = callback
-	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(lastId)))
-	lastId += 1
+	// Allocate a new callback ID, handling wrap-around and reusing IDs from
+	// callbacks that have been removed from the map.
+	startID := lastId
+	id := lastId
+	for {
+		if _, exists := callbacks[id]; !exists {
+			break
+		}
+		next := id + 1
+		// Detect wrap-around: if next <= id, we've overflowed and should wrap to 0.
+		if next <= id {
+			next = 0
+		}
+		id = next
+		// If we've looped all the way around, no IDs are available.
+		if id == startID {
+			panic("gcrp: no available callback IDs")
+		}
+	}
+	lastId = id
+	player.callbackId = id
+	callbacks[id] = callback
+	C.crp_play(player.handle, curl, &coption, C.crp_callback(C.goCallback), unsafe.Pointer(uintptr(id)))
```
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 3 comments.
binding/go/gcrp/gcrp.go
Outdated
```go
func (player Player) Destroy() {
	C.crp_destroy(player.handle)
	player.handle = nil

	mutex.Lock()
	delete(callbacks, player.callbackId)
	mutex.Unlock()
}
```
Play/Destroy mutate player (callbackId, handle), but they use a value receiver. The updated callbackId/handle = nil won’t persist on the caller, so Destroy can delete the wrong callback entry (or leak entries) when there are multiple Play calls/players. Use pointer receivers for methods that mutate Player state (at least Play and Destroy).
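The value-receiver pitfall can be shown with a minimal standalone example (a sketch with a stripped-down `Player`, not the actual gcrp type):

```go
package main

import "fmt"

type Player struct {
	callbackId int
}

// Value receiver: the method mutates a copy, so the caller's
// Player keeps its old callbackId.
func (p Player) playByValue(id int) { p.callbackId = id }

// Pointer receiver: the method mutates the caller's Player.
func (p *Player) playByPointer(id int) { p.callbackId = id }

func main() {
	p := Player{}
	p.playByValue(7)
	fmt.Println(p.callbackId) // 0: the update was lost
	p.playByPointer(7)
	fmt.Println(p.callbackId) // 7
}
```

With a value receiver, a later `Destroy` would read the zero-valued `callbackId` and delete the wrong (or no) map entry, exactly the leak described above.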
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
binding/go/gcrp/gcrp.go
Outdated
```go
	mutex.Lock()
	defer mutex.Unlock()
```
Actually I have a question: why store callback events in a global map instead of passing the callback interface through userData? Is it because the C layer operates on user_data?
One more point: although having Create return a *Player is more intuitive for Go, the internal C.crp_handle is itself already a pointer; would dereferencing twice cause a performance problem?
An interface{} can't be converted to, or stored in, a 32-bit integer. Especially in scripting languages like JS or Python, the callback may be a complex object, so the only option is to store it in a map and pass the index as userData. Of course, if there's a more elegant approach, PRs are welcome; after all, without the map there would be no need for locking.
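The size constraint mentioned here is easy to verify: a Go interface value is two machine words (a type pointer plus a data pointer), so it cannot be squeezed into a single pointer-sized user_data slot. A quick check (standalone, nothing here is from the gcrp code):

```go
package main

import (
	"fmt"
	"unsafe"
)

// ratio returns how many pointer-sized words an interface value occupies.
func ratio() uintptr {
	var cb interface{} = func() {}
	return unsafe.Sizeof(cb) / unsafe.Sizeof(uintptr(0))
}

func main() {
	// An interface value is two words (type pointer + data pointer),
	// so it is twice the size of the single pointer C hands back.
	fmt.Println(ratio()) // 2
}
```

Hence the registry pattern: the interface stays in a Go-side map, and only a small integer index crosses the C boundary.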
Actually I just didn't know that structs are passed by value; I thought they were passed by reference (