
Why use redfly

redfly.ai edited this page Jun 21, 2025 · 5 revisions

Here are some of the advantages of using a caching system like ours, in our customers' own words:

  • redfly is used to share a cache when there are so many active users that multiple web servers come into play

  • Without a distributed cache, when a deployment is done, each server would hammer the database

  • redfly comes to mind for a CTO because it is what gives you scalability. Sometimes you have to prove it - just saying you are scalable is not adequate.

  • A transparent cache would save the money spent on code maintenance. The extra time gained would be used to build the core product instead of inventing the cache layer.

  • You can write a lot of indexes, queues and views. But for larger loads, a caching strategy is necessary to scale. Everyone uses Redis for mitigating database performance issues.

  • This is a serious problem in the transactional space.

  • This would be an amazing use case for an IoT app.

  • We use MongoDB, Postgres and MySQL in almost every project. That's where we need something like this.

  • This can help me by making database storage potentially cheaper, because I could have one fixed-cost server with no variable disk I/O costs. This is cheaper than databases with read replicas or bigger/more nodes.

  • redfly solves the problem of reactive UI apps very well.

  • redfly provides the ability to stay < 200 ms when loading a page.

  • I would spend the time saved on tailoring the cache strategy rather than on manually implementing the caching while developing the page itself.

  • redfly solves the problem of repeated access to an endpoint by not hitting the database for it every time.

  • If Power BI can be accelerated with caching, we won't have to worry about day-to-day operations. We can spend more time on Business Intelligence & Analytics. Everyone is asking us for AI. That's hard to do when we currently spend 2-3 hours every day on operations.

  • I would like to tune things based on the type of load. Right now, we throw everything into one basket & try to separate a few things like write-once/read-many, transactions, and not much else.

  • Not having to worry about caching query plans or data.

  • In terms of database scaling, the fewer transactions I need, the less I need Postgres. The more concurrency I have, the more I need to start offloading it somewhere else, for example for atomic counters. I don't want to have a bunch of different database threads all trying to update a counter in Postgres - this works better with Redis.

  • If you can slice and dice your database in such a way that you can split a monolith into multiple microservices sharded by domains, that makes it a lot easier to manage, because you don't have a single ball of FUD that anybody needs to maintain. That could help a lot of other companies as well, especially companies in older sectors.
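The point above about repeated access to an endpoint is usually implemented with the cache-aside pattern. As a minimal sketch (not redfly's actual implementation), the example below uses a plain dict in place of a shared cache server, and `fetch_user_from_db` is a hypothetical stand-in for a slow database query:

```python
cache = {}   # stands in for a shared cache such as Redis
db_hits = 0  # counts how often the database is actually queried

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for a real (slow) database query.
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    # Cache-aside: serve repeated requests from the cache,
    # and hit the database only on a miss.
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    user = fetch_user_from_db(user_id)
    cache[key] = user
    return user

# Three requests for the same user cost a single database query.
for _ in range(3):
    get_user(42)
print(db_hits)  # -> 1
```

With a shared (rather than per-process) cache, the same property holds across multiple web servers: only the first request after a deployment reaches the database.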
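The atomic-counter remark above is worth unpacking: many concurrent writers updating one counter row in Postgres serialize on that row, whereas Redis applies each `INCR` atomically on the server. The sketch below only illustrates the contention pattern, using Python threads with a `threading.Lock` as a stand-in for an atomic `INCR`; it is not a Redis client example:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    # Each thread plays the role of one concurrent writer.
    global counter
    for _ in range(times):
        with lock:  # analogous to a single atomic INCR on the server
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 8000, no lost updates
```

In Postgres each of these increments would be a transaction contending on the same row; offloading the counter to a cache keeps that contention out of the database entirely.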
