
Cache abstraction layer with Redis support #19

Open
gigablah wants to merge 2 commits into mogilefs:master from gigablah:cache

Conversation


@gigablah commented Jul 1, 2012

This changes MogileFS::Config->memcache_client to Mgd::get_cache. Cache options are now configured in mogilefsd.conf (cache_type, cache_servers and cache_ttl). The functionality of the previous cache implementation (using server_settings) is preserved. Adapters are supplied for Memcache and Redis.
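Going by the option names in the description, the new mogilefsd.conf settings might look something like this (the values shown are illustrative, not taken from the patch):

```
# mogilefsd.conf (illustrative values)
cache_type    = redis
cache_servers = 127.0.0.1:6379
cache_ttl     = 3600
```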

@gigablah (Author) commented Jul 1, 2012

And yes, I did read memcache-support.txt :p
This would be convenient for those who point, say, nginx directly at the tracker (using nginx-mogilefs-module), bypassing the application entirely.

For Redis, device IDs are stored using sets (SADD, SMEMBERS). Socket addresses are supported (e.g. /tmp/redis.sock).
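As a sketch of that scheme (one Redis set of device IDs per file ID, written with SADD and read with SMEMBERS): the key format and helper names here are hypothetical, and a tiny in-memory stub stands in for a real Redis client so the example runs standalone. The actual patch is Perl; this is Python purely for illustration.

```python
# Minimal in-memory stand-in for the two Redis commands the patch
# uses (SADD / SMEMBERS), so the sketch runs without a server.
class FakeRedis:
    def __init__(self):
        self.sets = {}

    def sadd(self, key, *members):
        self.sets.setdefault(key, set()).update(members)

    def smembers(self, key):
        return self.sets.get(key, set())


# Hypothetical key scheme: one set of device IDs per file ID.
def cache_devids(r, fid, devids):
    r.sadd("mogfs:devids:%d" % fid, *devids)


def get_cached_devids(r, fid):
    return {int(d) for d in r.smembers("mogfs:devids:%d" % fid)}


r = FakeRedis()
cache_devids(r, 42, [3, 7, 12])
print(sorted(get_cached_devids(r, 42)))  # [3, 7, 12]
```

Using a set rather than a serialized list means membership updates are atomic on the Redis side and duplicate device IDs are handled for free.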

I pretty much followed the structure of Mgd::get_store.

@dormando (Member) commented:

Thanks a lot for doing this. I'm still trying to get enough round-tuits to properly review, but you've fixed a few good things so far. Not sure if it'll make 2.65, but if not, 2.66 for sure!

Review comment on the diff:

It seems like it'd be more helpful to either default to something here or print a warning that cache_type must be specified with cache_servers.
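The check being suggested could look something like this (a hypothetical sketch in Python for illustration; the actual patch is Perl, and the default value is an assumption, not something decided in this thread):

```python
import sys

# Assumed fallback, per the reviewer's "default to something" option.
DEFAULT_CACHE_TYPE = "memcache"


def resolve_cache_type(config):
    """Warn and fall back to a default when cache_servers is
    configured but cache_type is not."""
    if config.get("cache_servers") and not config.get("cache_type"):
        print("warning: cache_servers set without cache_type; "
              "defaulting to %s" % DEFAULT_CACHE_TYPE, file=sys.stderr)
        return DEFAULT_CACHE_TYPE
    return config.get("cache_type")


print(resolve_cache_type({"cache_servers": "127.0.0.1:6379"}))  # memcache
```

Either branch of the suggestion (silent default vs. loud warning) keeps a half-configured cache from being silently ignored.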

@dormando (Member) commented:

Hi again! I'm a terrible person: Are you around to help deal with a few more comments on the patch series so we can toss it in?

Thanks!

@gigablah (Author) commented:

Sorry about that, I'll try to get to it in these couple of days.

@dormando (Member) commented:

awesome, thanks! Though all apologies are mine!

@dormando (Member) commented:

Ping again! If it's fixed up within the next week, it could go out in the next cut.

