Conversation
Summary of Changes

Hello @koosco, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This PR introduces a new feature that tracks and displays view counts when pose details are viewed. It uses Redis caching to enforce a per-user daily unique-view limit and handles concurrency issues that can arise when updating view counts. It also includes the database schema changes required for this feature. Highlights
Activity
Code Format Check ✅
Code Review
This pull request introduces a new feature to track pose view counts, using Redis caching to ensure that each user's view is counted only once per day. However, the Redis-based viewer tracking has a significant logic flaw and a potential resource-exhaustion issue: the current implementation uses a single Redis Set per pose and resets its TTL every time a new viewer is added, which can prevent users from ever incrementing the count again for popular poses and leads to unbounded memory growth in Redis. A more granular key structure is recommended to address these issues. The PR also includes the necessary database schema changes, updates to the GetPoseUseCase to integrate the view count logic, and corresponding test cases; the implementation correctly handles concurrency for view count updates and provides a fail-open mechanism for Redis failures.
```kotlin
val key = PoseViewRedisCacheKey.viewKey(poseId)
return try {
    val added = redisTemplate.opsForSet().add(key, userId) ?: 0
    if (added > 0) {
        redisTemplate.expire(key, VIEW_TTL)
        true
    } else {
        false
    }
} catch (e: Exception) {
    log.warn("[PoseViewCache] Failed to check viewer for pose: $poseId, user: $userId", e)
    true // fail-open: treat as a new view on Redis failure
}
```
The current implementation of addViewer uses a single Redis Set per pose (pose:view:{poseId}) to track unique viewers. The TTL of this set is reset to 24 hours every time a new viewer is added. This introduces two significant issues:
- Logic Flaw: The requirement is to allow a user to increment the view count once per day. However, because the TTL is reset for the entire set whenever any new user views the pose, an existing user's ID will remain in the set as long as the pose remains popular. This prevents the user from ever incrementing the view count again during that period, even if more than 24 hours have passed since their last view.
- Resource Exhaustion (DoS): For popular poses, the set of user IDs can grow indefinitely as long as the TTL keeps being reset, leading to significant memory consumption in Redis and potential Denial of Service.
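The TTL-reset behavior can be demonstrated with a toy in-memory model (a sketch, not the PR's code; class and parameter names are illustrative). A single per-pose set whose TTL is refreshed by every new viewer never expires while the pose stays popular, so an early viewer is never counted again:

```kotlin
// Toy model of the flaw: one shared set per pose, TTL refreshed on every new viewer.
class SharedSetModel(private val ttlSeconds: Long) {
    private val viewers = mutableSetOf<Long>()
    private var expiresAt: Long = 0

    fun addViewer(userId: Long, now: Long): Boolean {
        if (now >= expiresAt) viewers.clear()   // the set expires as a whole
        val added = viewers.add(userId)
        if (added) expiresAt = now + ttlSeconds // TTL reset by ANY new viewer
        return added
    }
}

fun main() {
    val day = 86_400L
    val model = SharedSetModel(day)
    model.addViewer(userId = 1L, now = 0L)      // user 1 viewed at t = 0
    // A steady trickle of new viewers (one every half day) keeps refreshing the TTL...
    for (t in 1..10) model.addViewer(userId = 100 + t.toLong(), now = t * day / 2)
    // ...so five days later, user 1 still cannot be counted again:
    println(model.addViewer(userId = 1L, now = 5 * day)) // false
}
```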
Remediation: Change the Redis key structure to be per-user and per-pose, for example pose:view:{poseId}:{userId}. Set a fixed 24-hour TTL on this key when it is first created, and do not reset it.
```kotlin
val key = "${PoseViewRedisCacheKey.viewKey(poseId)}:$userId"
return try {
    redisTemplate.opsForValue().setIfAbsent(key, true, VIEW_TTL) ?: false
} catch (e: Exception) {
    log.warn("[PoseViewCache] Failed to check viewer for pose: $poseId, user: $userId", e)
    true // fail-open: treat as a new view on Redis failure
}
```

```kotlin
    ?: throw BusinessException(ResultCode.NOT_FOUND)

if (poseViewCache.addViewer(command.poseId, command.userId)) {
    transactionRunner.run { poseRepository.incrementViewCount(command.poseId) }
```
The transactionRunner.run block is used here to increment the view count. While this ensures the database update is transactional, consider whether incrementViewCount needs to be part of a larger business transaction or can remain an independent operation. Because addViewer is called outside this transaction, a view can be recorded in Redis but never reach the database if the subsequent transaction fails, and the fail-open path can likewise increment the database count without a durable Redis record. The current fail-open handling for Redis suggests a preference for eventual consistency in view counts, so this may be acceptable. If strict consistency is required, the addViewer call should be moved inside the transaction or paired with a compensating action.
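One way to narrow the Redis-recorded-but-DB-failed gap is a compensating action: if the increment throws after the Redis marker was set, remove the marker so the view can be retried. The sketch below is hypothetical; the `PoseViewCache` interface, `removeViewer`, and `recordView` are assumed names, not part of the PR.

```kotlin
// Hypothetical compensating-action sketch; interface and names are assumptions.
interface PoseViewCache {
    fun addViewer(poseId: Long, userId: Long): Boolean
    fun removeViewer(poseId: Long, userId: Long)
}

fun recordView(cache: PoseViewCache, poseId: Long, userId: Long, increment: () -> Unit) {
    if (cache.addViewer(poseId, userId)) {
        try {
            // e.g. transactionRunner.run { poseRepository.incrementViewCount(poseId) }
            increment()
        } catch (e: Exception) {
            // Compensate: undo the Redis marker so this user's view can be retried
            cache.removeViewer(poseId, userId)
            throw e
        }
    }
}
```

This still isn't atomic (the compensation itself can fail if Redis goes down mid-flight), but it shrinks the inconsistency window without pulling the Redis call into the database transaction.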
```kotlin
log.warn("[PoseViewCache] Failed to check viewer for pose: $poseId, user: $userId", e)
true // fail-open: treat as a new view on Redis failure
```
The fail-open strategy (returning true when Redis throws) means that if Redis is down or an exception occurs, the view count will still be incremented in the database. This prioritizes counting views over preventing duplicates during Redis outages. That is a valid design choice, but be aware of the trade-off: during a Redis failure, a single user may increment the view count multiple times within 24 hours. Confirm that this behavior is acceptable for the business requirements.
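If the team wants to revisit this trade-off later, the policy can be made explicit rather than hard-coded. The following is a minimal sketch, assuming a `failOpen` flag sourced from configuration; the wrapper function and names are illustrative, not part of the PR.

```kotlin
// Hypothetical sketch: an explicit, configurable Redis failure policy.
// `failOpen` is an assumed configuration property, not part of the PR.
fun shouldCountView(markViewed: () -> Boolean, failOpen: Boolean): Boolean =
    try {
        markViewed() // true only if this user has not viewed in the TTL window
    } catch (e: Exception) {
        // fail-open counts views during outages (risking duplicates);
        // fail-closed drops them (risking undercounting)
        failOpen
    }
```

Either setting is defensible for a view counter; the point is that the choice becomes visible and testable instead of living in a catch block.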
Test ✅
src/main/kotlin/com/neki/pose/infra/cache/redis/RedisPoseViewCacheAdapter.kt (outdated; resolved)