Redis hits back at Dragonfly: Redis’ architecture is still best in class after 13 years
Redis co-founder and CTO Yiftach Shoolman, Redis Labs chief architect Yossi Gottlieb, and Redis Labs performance engineer Filipe Oliveira recently co-published an article titled "13 Years Later – Does Redis Need a New Architecture?", which shares some perspectives and thoughts on Redis architecture to support the claim that "Redis architecture is still the best architecture for in-memory real-time data storage (caches, databases, and everything in between)".
The article points out that because Redis is a foundational technology, it is inevitable that people will occasionally propose alternative architectures.
Examples include KeyDB from a few years ago, and most recently Dragonfly, which claims to be the fastest Redis-compatible in-memory data store.
“We believe these projects bring a lot of interesting technologies and ideas to discuss and debate. At Redis, we like this challenge because it requires us to reiterate the architectural principles that Redis was originally designed for (a tribute to Salvatore Sanfilippo aka antirez).”
The blog mainly shares some perspectives on speed and architectural differences, and closes with details of benchmarks and performance comparisons against the Dragonfly project. The details are as follows:
First of all, they argue that the earlier Dragonfly benchmark, which compared a standalone single-process Redis instance (which can only utilize a single core) against a multi-threaded Dragonfly instance (which can use all available cores on the VM/server), is inaccurate and does not represent how Redis runs in the real world.
So they ran what they consider a fairer comparison: a set of performance tests pitting a 40-shard Redis 7.0 Cluster (which can utilize most of the instance's cores) against Dragonfly, on an AWS c6gn.16xlarge, the largest instance type the Dragonfly team used in its benchmarks.
The results of this experiment show that Redis achieved 18%–40% higher throughput than Dragonfly, even though Redis utilized only 40 of the 64 vCores.
“We believe that many of the architectural decisions made by the creators of these multithreaded projects were influenced by pain points they experienced in their previous work.
We agree that running a single Redis process on machines with sometimes dozens of cores and hundreds of gigabytes of memory cannot take advantage of the obviously available resources.
But that’s not what Redis was designed for; it’s just how many Redis vendors choose to run their services.”
Redis scales horizontally by running multiple processes (using Redis Cluster), even in the context of a single cloud instance.
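This shared-nothing scaling works because Redis Cluster deterministically maps every key to one of 16384 hash slots, which are divided among the independent processes. A minimal sketch of that mapping (CRC16-XMODEM mod 16384, including the hash-tag rule that lets related keys land on the same slot):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots; a non-empty {...} hash tag,
    if present, is the only part that gets hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land on the same slot (and thus the same shard):
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))
```

Because each slot belongs to exactly one process, no locks are shared between shards; the trade-off is that multi-key operations must stay within one slot, which is what the hash-tag rule exists for.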
According to the presentation, the company took the concept a step further and built Redis Enterprise, which provides a management layer that allows users to run Redis at scale, with high availability, instant failover, data persistence, and backup enabled by default.
So Yiftach decided to share some of the principles they use behind the scenes, to help the public understand what is officially considered good engineering practice for running Redis in production. These include:
- Run multiple Redis instances per VM
- Limit each Redis process to a reasonable size: do not allow a single Redis process to exceed 25 GB (50 GB when running Redis on Flash)
- Flexibility to run an in-memory data store with horizontal scaling is important
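The 25 GB cap implies a simple sizing rule: the shard count follows from the dataset size. A hypothetical helper illustrating that arithmetic (the function name and defaults are assumptions for illustration, not part of any Redis tooling):

```python
import math

def min_shards(dataset_gb: float, max_gb_per_shard: float = 25.0) -> int:
    """Minimum number of Redis processes needed so that no single
    process holds more than max_gb_per_shard of data.
    25 GB is the cap quoted above; 50 GB would apply for Redis on Flash."""
    return max(1, math.ceil(dataset_gb / max_gb_per_shard))

# A 100 GB dataset needs at least 4 shards under the 25 GB cap,
# but only 2 under the 50 GB Redis-on-Flash cap:
print(min_shards(100), min_shards(100, max_gb_per_shard=50.0))
```

Keeping shards small also keeps failover and persistence operations (fork-based snapshots, replica resync) fast, which is part of the rationale behind the cap.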
In general, they say they appreciate fresh, interesting ideas and technologies from the community, noting that some of them may even make their way into Redis in the future (such as io_uring, which is already being researched, more modern dictionaries, more strategic use of threads, etc.). But for the foreseeable future, the basic principles of Redis' shared-nothing, multi-process architecture will not be abandoned, because they believe this design provides the best performance, scalability, and resilience, while also supporting the various deployment architectures required for an in-memory, real-time data platform.
More details, including the full Redis 7.0 vs. Dragonfly benchmark, can be found on the official blog.
It is worth mentioning that regarding the claim pointed out by Redis that the Dragonfly benchmark “cannot represent how Redis operates in the real world”, some netizens on Reddit retorted:
It definitely represents how the average user runs Redis in the real world. Running a cluster on a single machine just to use more than one core is extra complexity; people only do it if they have no choice. If a competitor can "just work" no matter how many cores it has, it would be nice to have an easier setup.
Others described the article as a:
Lovely and very polite “actually, no” response from the Redis team.