Scale Your Caches with Your Traffic

A Lesson from History

Have you ever felt like you just needed another "You" to get everything done in time? That's how we at Carnot felt when our customer base increased at a rate way beyond our expectations. It was like taking on a sea swimming challenge just after finishing your first lessons with floaters.

For the backend team, it meant having to support 20 times more traffic without impacting customer experience in any way. It was a challenge, and the fact that our entire backend team consisted of only three people did not help. But you know what they say: "No pressure, no diamonds!"

We learned a lot of empowering lessons from those demanding times. One such lesson was Auto Scaling, which is arguably the most important cloud management tool.

The Why

In the early days, when we knew little about handling traffic peaks, we had a monolithic architecture. A socket server would allow our IoT devices to connect and send data, which was then queued in a Redis cache until it was processed by our main server.

Figure: Earlier days, monolithic structure
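
As a rough illustration of that pattern, here is a minimal sketch of the queuing flow using the redis-py client. The key name, payloads, and processing step are hypothetical stand-ins, not taken from our actual codebase:

```python
import redis

r = redis.Redis()  # connection details omitted

QUEUE_KEY = "iot:ingest"  # hypothetical key name

# Socket server side: push every incoming device payload onto a
# Redis list, which acts as a FIFO queue.
def enqueue(payload: bytes) -> None:
    r.lpush(QUEUE_KEY, payload)

# Main server side: block until a payload arrives, then process it.
def consume_forever() -> None:
    while True:
        _key, payload = r.brpop(QUEUE_KEY)
        handle(payload)

def handle(payload: bytes) -> None:
    # Stand-in for the real processing done by the main server.
    print("processing", payload)
```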

As our clients increased, the entire cloud pipeline was bottlenecked by its slowest component. The Redis cache, which was used only for queuing, soon built up a large queue and ran out of memory.

At that time, not upgrading the cache to a higher memory specification would have meant losing valuable customer data. Hence, we selected a Redis plan large enough to retain queued data even during peak traffic hours.

With this, the system was stable. The only problem was that we were paying for far more than we were utilizing. On deeper analysis, we found that the so-called "peak traffic" occurred only during certain hours of the day. 75% of the time, the extra memory we had opted for sat unused, yet we were still paying for it.

The What


Frankly, "auto-scaling" was a buzzword for us at that time. We had read a lot about it and understood that it helps you automatically detect your traffic and scale your cloud components. We knew that we absolutely needed this. But we could not find an easy plug-and-play solution for our scenario: Redis scaling on the Heroku platform.

So, we decided to do what we do best at Carnot: start from the basics and build our own in-house system.

What we needed was a mechanism to:

  • Monitor the key metrics for the Redis health check
  • Detect incoming traffic (or load) on the servers
  • Create a cost-function that picks the cheapest possible plan for the current traffic (sketched below)
  • Change the plan as determined by the cost-function
Figure: Beginnings of Redis Auto Scaling @ Carnot
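
To give a feel for the cost-function, here is a minimal sketch. The plan table is only an illustrative approximation of Heroku Redis tiers (plan names, limits, and prices are assumptions, not an authoritative price list), and the 25% headroom factor is likewise a made-up knob:

```python
# (plan name, memory limit in MB, max connected clients, USD per month)
# Illustrative values only; check the current Heroku Redis plan list.
PLANS = [
    ("heroku-redis:mini",        25,    20,    3),
    ("heroku-redis:premium-0",   50,    40,   15),
    ("heroku-redis:premium-2",  256,   200,   60),
    ("heroku-redis:premium-3",  500,   400,  120),
]

HEADROOM = 1.25  # keep ~25% slack so a spike cannot hit the ceiling

def cheapest_plan(used_mb: float, clients: int) -> str:
    """Return the cheapest plan that fits current traffic with headroom."""
    for name, mem_limit, client_limit, _price in PLANS:  # sorted by price
        if used_mb * HEADROOM <= mem_limit and clients * HEADROOM <= client_limit:
            return name
    # Traffic exceeds every tier: fall back to the largest plan.
    return PLANS[-1][0]
```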

Well, that's what we built for one Redis, then another, and another, until we had created a standalone plug-and-play solution of our own for any Redis on Heroku.

The How

We have recently made this entire system open source. If you are facing a similar issue, feel free to set it up in your account and share your experience. Any issues/suggestions to improve the system are most welcome.

Broadly, this is how it works: we selected the two most important metrics for monitoring Redis health, memory consumed and clients connected. A scheduled cron running at a pre-defined frequency collects and stores these metrics for all enrolled Redis caches. The traffic detector indicates how close we are to exhausting the limits of our Redis. The cost function then takes in the key metrics along with the traffic indication and returns the cheapest Redis plan that can support the traffic. Finally, we use the Heroku Platform APIs to change the Redis plan dynamically whenever required.
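
Putting those pieces together, a condensed sketch of one monitoring-and-scaling cycle might look like the following. This is not the open-sourced implementation itself: the environment variable and app/add-on names are placeholders, and `cheapest_plan` is the function from the earlier sketch. The Heroku Platform API does expose add-on plan changes via a PATCH on the add-on resource, which is what `change_plan` uses:

```python
import os

import redis
import requests

HEROKU_API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
    "Content-Type": "application/json",
}

def collect_metrics(redis_url: str) -> tuple[float, int]:
    """Read the two health metrics straight from Redis INFO."""
    info = redis.Redis.from_url(redis_url).info()
    used_mb = info["used_memory"] / (1024 * 1024)
    return used_mb, info["connected_clients"]

def change_plan(app: str, addon: str, plan: str) -> None:
    """Switch the add-on to a new plan via the Heroku Platform API."""
    resp = requests.patch(
        f"{HEROKU_API}/apps/{app}/addons/{addon}",
        headers=HEADERS,
        json={"plan": plan},
    )
    resp.raise_for_status()

def run_once(app: str, addon: str, redis_url: str) -> None:
    """One cron tick: measure, pick the cheapest fitting plan, switch."""
    used_mb, clients = collect_metrics(redis_url)
    change_plan(app, addon, cheapest_plan(used_mb, clients))
```

In practice you would also store the metric history and guard against flapping, for example by only downscaling after the traffic detector has seen sustained low load.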

Life After


After sugar, spice & everything nice and an accidental chemical X, we are now capable of supporting over a million IoT nodes. We have definitely come a long way from those early times and learned a lot along the way.

As we grew, most of our systems moved away from Heroku (to AWS) in order to save costs. But for any start-ups or developers out there who are dealing with around a thousand client nodes and use Heroku as their platform of choice (for its obvious ease of set-up), we hope this blog is helpful.

We are trying to fix some broken benches in the Indian agriculture ecosystem through technology, to improve farmers' income. If you share the same passion, join us in the pursuit, or simply drop us a line at report@carnot.co.in.

Translated from: https://medium.com/tech-carnot/scale-your-caches-with-your-traffic-798c709ca853
