What’s new with Amazon ElastiCache?

I'm excited to talk about ElastiCache Serverless. If you haven't already heard, Peter announced this in his keynote on Monday night. Today we're going to quickly talk through what ElastiCache Serverless is, and then we'll do a quick demo to show you how it all works.

So, who here is a Redis user? Alright, about half of you. And anyone using Memcached? Alright, one. OK, cool. So, this is Yon, he's a senior engineering manager on the team, and my name is Abhay. I'm a product manager on ElastiCache Serverless.

We are excited to introduce ElastiCache Serverless. With ElastiCache Serverless, we are introducing a dramatic simplification of how you run a cache: you can create a cache with just a name, it's ready in about a minute, and we'll do that demo in a second.

We're getting rid of capacity management completely, so you don't have to worry about how much capacity your cache needs to have at any given time. You get the same microsecond performance and a 99.99% availability SLA. You also get pay-per-use pricing with ElastiCache Serverless.

So let's jump right in. It's super easy to create a cache with ElastiCache Serverless. You simply give us a name and we create a cache in your default VPC. You can also specify a different VPC, and you can specify, for example, which Availability Zones you want to access your cache from.

So if your application lives in, let's say, us-east-1a and us-east-1b, you can tell us, "I want my cache accessible from us-east-1a and us-east-1b as well," and you avoid cross-AZ hops.

So there are some configuration options, but the idea is that we take on most of the configuration with sensible defaults. Encryption at rest is always on, encryption in transit is always on, you can have automatic backups, and so on. But the only thing you need to create a cache is a name, and you have a cache ready to go.
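For example, here's roughly what that single-step creation looks like from code. This is a minimal sketch using boto3's CreateServerlessCache API; the cache name and subnet IDs are placeholders, and SubnetIds is the hook for the AZ placement described above (verify parameter names against the current SDK docs).

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Create a serverless cache with just a name and an engine. Encryption at
# rest and in transit are on by default; there is no capacity to size.
response = elasticache.create_serverless_cache(
    ServerlessCacheName="my-cache",  # placeholder name
    Engine="redis",
    # Optional: subnets in us-east-1a / us-east-1b so the cache is
    # reachable from those AZs without cross-AZ hops (hypothetical IDs).
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)

print(response["ServerlessCache"]["Status"])  # e.g. "creating"
```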

There's no capacity planning with ElastiCache Serverless. You simply create the cache and start executing Redis commands on it. You can start writing data and scale up your throughput, and as you do, ElastiCache automatically scales. Behind the scenes it monitors your memory, CPU, and network utilization, and it can both scale up in place and scale out.

If you haven't already done so, watch Peter's keynote, and you can also look at our breakout session, DEV342, where we go into a lot more detail about how scaling works. But in summary: as microbursts appear in your traffic patterns, the cache can scale up in place, and at the same time the service looks ahead at what your traffic might look like over the next few minutes and initiates scale-out operations, so you have just the right capacity at the right time.
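Scaling itself is fully managed, so there's nothing to call; if you want to watch it happen, you can chart the cache's CloudWatch metrics while you drive load. A minimal sketch, assuming serverless caches publish an ElastiCacheProcessingUnits metric in the AWS/ElastiCache namespace under a CacheClusterId dimension (the metric and dimension names are assumptions; check the current metrics documentation):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Pull the last hour of ECPU consumption in 5-minute buckets to see
# throughput (and hence scaling) over time.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="ElastiCacheProcessingUnits",  # assumed metric name
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-cache"}],  # assumed
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```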

We offer pay-per-use pricing along two dimensions. The first is data stored: you pay for the data stored in your cache at any given time, billed in GB-hours. The second is the requests you execute against the cache, billed in a new unit called ElastiCache Processing Units, or ECPUs.

If you're interested in what an ECPU is, we can go into the details later, but you should think of it as the compute a request consumes. As you might know if you're a Redis user, the amount of CPU time a command consumes differs depending on the command and the data structures it touches, so the requests you execute consume ECPUs and you're billed for those.
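To make the two billing dimensions concrete, here's a back-of-the-envelope estimate. The per-GB-hour and per-ECPU rates below are hypothetical placeholders, not actual ElastiCache Serverless prices, and the "one ECPU per simple request" assumption only holds for small requests; substitute the published rates for your region.

```python
# Hypothetical rates -- substitute the real prices for your region.
PRICE_PER_GB_HOUR = 0.125        # $/GB-hour of data stored (assumed)
PRICE_PER_MILLION_ECPU = 0.0034  # $/million ECPUs (assumed)

def monthly_cost(avg_gb_stored: float, ecpus_per_sec: float,
                 hours: float = 730.0) -> float:
    """Rough monthly cost from average data stored and ECPU consumption."""
    storage = avg_gb_stored * hours * PRICE_PER_GB_HOUR
    requests = ecpus_per_sec * 3600 * hours / 1e6 * PRICE_PER_MILLION_ECPU
    return storage + requests

# Example: 5 GB stored, 10,000 simple GET/SET requests per second,
# assuming each consumes roughly 1 ECPU.
print(f"~${monthly_cost(5, 10_000):,.2f} per month")
```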

Just a few quick other features before we jump into the demo. You get a four-nines availability SLA on all caches, and you get automatic, transparent software updates, so your cache is always on the latest minor and patch versions. You don't have to worry about upgrading your cache software at all; we do that behind the scenes for you.

Of course, if there's a major version that could introduce backwards incompatibility, we'll send you a notification and you can upgrade when you want to.

The other thing is the single endpoint experience. With ElastiCache Serverless, your client, whether it's a Redis client or a Memcached client, talks to a single endpoint. It looks like it's talking to a single cache node, rather than having to discover and talk to the entire cluster topology behind the scenes.

And the reason we do this is that the single endpoint handles cluster topology changes behind the scenes. If something changes, say a scale operation happens, or cache nodes go in and out of commission because EC2 nodes have failed or been replaced, all of that is hidden from the client completely. Your client establishes connections with our single endpoint, and behind the scenes the endpoint handles the connections to the Redis or Memcached cache nodes. When a cache node fails, the endpoint re-establishes its connection to the backend, but your connection from the client is not dropped, so you see dramatically fewer connection drops.
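In client code, the single endpoint means you use a plain standalone client rather than a cluster-aware one; there's no topology discovery and no MOVED redirects to handle. A minimal sketch with the redis-py library, using a placeholder endpoint:

```python
import redis

# One endpoint, standard client -- no redis.RedisCluster needed even though
# the cache is sharded behind the proxy. TLS is required because encryption
# in transit is always on.
r = redis.Redis(
    host="my-cache-xxxxxx.serverless.use1.cache.amazonaws.com",  # placeholder
    port=6379,
    ssl=True,
)

r.set("greeting", "hello")
print(r.get("greeting"))  # b'hello'
```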

There are some more details here on performance, but let's go right into the demo. Give me a second.

Alright, here's the ElastiCache console. I'm going to show you how easy it is to create a cache. Let's create a new Redis cache. As you can see, there's a new serverless option here: all you have to do is give us a cache name, hit Create, and you get a new cache. It takes a few seconds, usually under a minute.

While that happens, I'm going to log in to my EC2 instance. Oh, there it is, I'm already logged in. Great. So I'll copy the endpoint from the console and paste it in. As I said, encryption in transit is always on, so I'm using the TLS flag here, along with the port.

There it is. Now I'm connected to my Redis cache and I can start executing Redis commands, and it's just as easy as that. I don't need any other configuration; this Redis cache is ready for use. In this case my client is a t2.small, so obviously I can't push it very hard, but if you had a real application running, you could get going right away and the cache would automatically scale and adjust capacity.

So I'm going to hand it over to Yon, who's going to show us a few more details on how the scaling works. Alright, give me a moment while I connect my device.

OK. While Abhay was speaking, I already started my demo to save time. What the demo is doing is running a Redis benchmark from three different clients, pushing data workloads to our ElastiCache Serverless cache. So what's happening behind the scenes? In a moment I'll dive into three metrics.

We built a proxy to provide the single endpoint experience, so it encapsulates all the changes and the whole scaling process behind the scenes, and the scaling has started to progress. First, the system detects that it needs to scale; after that, we provision from the EC2 pool, and then we start rebalancing data across the shards.
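The demo drives load with the stock redis-benchmark tool; if you'd rather generate comparable load programmatically, here's a crude stand-in sketched with redis-py (placeholder endpoint, pipelined SETs, rough ops/sec reporting), not the exact workload running in the demo:

```python
import time
import redis

r = redis.Redis(
    host="my-cache-xxxxxx.serverless.use1.cache.amazonaws.com",  # placeholder
    port=6379,
    ssl=True,
)

deadline = time.time() + 60          # generate load for one minute
ops, window_start = 0, time.time()
while time.time() < deadline:
    pipe = r.pipeline(transaction=False)
    for i in range(1000):
        pipe.set(f"key:{i}", "x" * 100)  # 100-byte values
    pipe.execute()
    ops += 1000
    if time.time() - window_start >= 1.0:
        print(f"{ops} ops/s")
        ops, window_start = 0, time.time()
```

Run a copy of this from each of several clients to approximate the multi-client setup in the demo.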

So what we can see here, let me refresh: we jumped to almost 2,000K requests per second of throughput. We also see sub-millisecond latency for GETs, and for SETs we're at about 1.3 to 1.5 milliseconds overall. And we have no throttled commands, because we didn't push beyond double the current capacity; otherwise we might have seen throttled commands. But overall, you can see we're able to scale our capacity with no capacity planning: everything happens behind the scenes, automatically and seamlessly, and we still get sub-millisecond latency.

So that's the demo. Thank you. As you can see, as you increase your throughput on the left, the latency remains the same: the cache automatically adjusts its capacity, and you have the right resources on your cache without having to manage capacity at all.

I'm going to switch back here. OK. So, ElastiCache Serverless is available today, and I encourage you all to go to the ElastiCache console and try it out. As we just showed, you give us a name, create a cache, plug it into your Redis client, and it's ready to go.

At this point, I'm happy to take any questions.
