How to scale your Node.js server using clustering

by Michele Riva

Scalability is a hot topic in tech, and every programming language or framework provides its own way of handling high loads of traffic.

Today, we’re going to see an easy and straightforward example of Node.js clustering. This is a programming technique which will help you parallelize your code and speed up performance.

“A single instance of Node.js runs in a single thread. To take advantage of multi-core systems, the user will sometimes want to launch a cluster of Node.js processes to handle the load.”

- Node.js Documentation

We’re gonna create a simple web server using Koa, which is really similar to Express in terms of use.

The complete example is available in this GitHub repository.

What we’re gonna build

We’ll build a simple web server which will act as follows:

  1. Our server will receive a POST request; we’ll pretend the user is sending us a picture.
  2. We’ll copy an image from the filesystem into a temporary directory.
  3. We’ll flip it vertically using Jimp, an image processing library for Node.js.
  4. We’ll save it to the file system.
  5. We’ll delete it and send a response to the user.

Of course, this is not a real-world application, but it’s pretty close to one. We just want to measure the benefits of using clustering.

Setting up the project

I’m gonna use yarn to install my dependencies and initialize my project:

Since Node.js is single threaded, if our web server crashes, it will remain down until some other process restarts it. So we’re gonna install forever, a simple daemon which will restart our web server if it ever crashes.

We’ll also install Jimp, Koa and Koa Router.

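The exact commands aren’t reproduced in this copy; assuming yarn and the packages named above, the setup would be roughly:

```shell
# Initialize the project and add the dependencies mentioned above.
yarn init -y
yarn add koa koa-router jimp

# forever works well as a global CLI, so it can supervise any script.
yarn global add forever
```
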
Getting started with Koa

This is the folder structure we need to create:

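The tree itself isn’t shown in this copy; based on the files described below, it is presumably something like this (nesting module under src is an assumption):

```
.
└── src
    ├── cluster.js
    ├── standard.js
    └── module
        ├── job.js
        └── log.js
```
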
We’ll have a src folder which contains two JavaScript files: cluster.js and standard.js.

The first one will be the file where we’ll experiment with the cluster module. The second is a simple Koa server which will work without any clustering.

In the module directory, we’re gonna create two files: job.js and log.js.

module目录中,我们将创建两个文件: job.jslog.js

job.js will perform the image manipulation work. log.js will log every event that occurs during that process.

The Log module

The log module will be a simple function which takes an argument and writes it to stdout (similar to console.log).

It will also prepend the current timestamp at the beginning of the log line. This will allow us to check when a process started and to measure its performance.

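A minimal sketch of module/log.js along those lines (the ISO timestamp format is an assumption; the original may format it differently):

```javascript
// module/log.js — prepend a timestamp so we can tell when each step happened.
const log = (message) => {
  console.log(`[${new Date().toISOString()}] ${message}`);
};

module.exports = log;
```
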
The Job module

I’ll be honest, this is not a beautiful and super-optimized script. It’s just an easy job which will allow us to stress our machine.

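Here is a rough reconstruction of module/job.js following the steps listed earlier. Jimp’s promise-based API (Jimp.read, image.flip, image.writeAsync) is assumed, and the file paths are made up for illustration:

```javascript
// module/job.js — copy an image to a temp dir, flip it vertically with Jimp,
// save it, then delete it. Paths here are illustrative.
const fs = require('fs');
const path = require('path');
const Jimp = require('jimp');
const log = require('./log');

const job = async () => {
  const source = path.resolve(__dirname, '..', 'img', 'image.jpg');
  const tmp = path.resolve(__dirname, '..', 'tmp', `${Date.now()}-image.jpg`);

  log('starting job');
  fs.copyFileSync(source, tmp);          // 1. copy into a temporary directory

  const image = await Jimp.read(tmp);    // 2. load it with Jimp
  image.flip(false, true);               // 3. flip it vertically
  await image.writeAsync(tmp);           // 4. save it back to the filesystem

  fs.unlinkSync(tmp);                    // 5. delete it
  log('job completed');
};

module.exports = job;
```
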
The Koa Webserver

We’re gonna create a very simple webserver. It will respond on two routes with two different HTTP methods.

We’ll be able to perform a GET request on http://localhost:3000/. Koa will respond with a simple text which will show us the current PID (process id).

The second route will only accept POST requests on the /flip path, and will perform the job that we just created.

We’ll also create a simple middleware which will set an X-Response-Time header. This will allow us to measure the performance.

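The server code isn’t included in this copy; a sketch of src/standard.js, assuming the koa and koa-router packages installed earlier, could look like this:

```javascript
// src/standard.js — a plain Koa server, no clustering.
const Koa = require('koa');
const Router = require('koa-router');
const job = require('./module/job');

const app = new Koa();
const router = new Router();

// Middleware: measure how long the downstream handlers took.
app.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  ctx.set('X-Response-Time', `${Date.now() - start}ms`);
});

// GET / shows which process answered.
router.get('/', (ctx) => {
  ctx.body = `Hello from process ${process.pid}`;
});

// POST /flip runs the image-manipulation job.
router.post('/flip', async (ctx) => {
  await job();
  ctx.body = 'Image flipped!';
});

app.use(router.routes());
app.listen(3000);
```
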
Great! We can now start our server by typing node ./src/standard.js and test our routes.

The problem

Let’s use my machine as a server:

  • MacBook Pro 15-inch, 2016
  • 2.7GHz Intel Core i7
  • 16GB RAM

If I make a POST request, the script above will send me a response in ~3800 milliseconds. Not so bad, given that the image I am currently working on is about 6.7MB.

I can try making more requests, but the response time won’t decrease too much. This is because the requests will be performed sequentially.

So, what would happen if I tried to make 10, 100, 1000 concurrent requests?

I made a simple Elixir script which performs multiple concurrent HTTP requests:

I chose Elixir because it’s really easy to create parallel processes, but you can use whatever you prefer!

Testing ten concurrent requests — without clustering

As you can see, we spawn 10 concurrent processes from our iex (an Elixir REPL).

The Node.js server will immediately copy our image and start to flip it. The first response will be logged after 16 seconds, and the last one after 40 seconds.

Such a dramatic performance decrease! With just 10 concurrent requests, we decreased the webserver performance by 950%!

Introducing clustering

Remember what I mentioned at the beginning of the article?

To take advantage of multi-core systems, the user will sometimes want to launch a cluster of Node.js processes to handle the load.

Depending on which server we’re gonna run our Koa application, we could have a different number of cores.

Every core will be responsible for handling the load individually. Basically, each HTTP request will be satisfied by a single core.

So for example — my machine, which has eight cores, will handle eight concurrent requests.

We can now count how many CPUs we have thanks to the os module:

The cpus() method will return an array of objects that describe our CPUs. We can bind its length to a constant which will be called numWorkers, ’cause that’s the number of workers that we’re gonna use.

We’re now ready to require the cluster module.

We now need a way of splitting our main process into N distinct processes. We’ll call our main process the master and the other processes the workers.

The Node.js cluster module exposes a boolean property called isMaster. It tells us whether the current process is being run as a worker or as the master:

Great. The golden rule here is that we don’t want to serve our Koa application under the master process.

We want to create a Koa application for each worker, so when a request comes in, the first free worker will take care of it.

The cluster.fork() method will fit our purpose:

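A minimal sketch of the forking step, keeping the worker handles in a constant:

```javascript
const cluster = require('cluster');
const os = require('os');

const numWorkers = os.cpus().length;

if (cluster.isMaster) {
  // Spawn one worker per CPU core. Each cluster.fork() call re-runs this
  // file as a child process; we keep the returned worker handles.
  const workers = [...Array(numWorkers)].map(() => cluster.fork());
}
```
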
Ok, at first that may be a little tricky.

As you can see in the script above, if our script has been executed by the master process, we’re gonna declare a constant called workers. This will create a worker for each core of our CPU, and will store all the information about them.

If you feel unsure about the adopted syntax, using […Array(x)].map() is just the same as:

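For illustration with plain numbers (rather than the actual fork calls), the two forms produce the same array:

```javascript
const n = 4;

// Spread-based, immutable style:
const spread = [...Array(n)].map((_, i) => i * 2);

// Equivalent imperative loop:
const loop = [];
for (let i = 0; i < n; i++) {
  loop.push(i * 2);
}

console.log(spread); // [ 0, 2, 4, 6 ]
```
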
I just prefer to use immutable values while developing a high-concurrency app.

Adding Koa

As we said before, we don’t want to serve our Koa application under the master process.

Let’s copy our Koa app structure into the else statement, so we will be sure that it will be served by a worker:

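A sketch of the resulting src/cluster.js (same caveat as before: the koa and koa-router APIs are assumed):

```javascript
// src/cluster.js — the master forks one worker per core; each worker serves Koa.
const cluster = require('cluster');
const os = require('os');
const Koa = require('koa');
const Router = require('koa-router');
const job = require('./module/job');
const log = require('./module/log');

const numWorkers = os.cpus().length;

if (cluster.isMaster) {
  // Keep references to every spawned worker.
  const workers = [...Array(numWorkers)].map(() => cluster.fork());

  cluster.on('online', (worker) => {
    log(`Worker ${worker.process.pid} is online`);
  });

  cluster.on('exit', (worker, code, signal) => {
    log(`Worker ${worker.process.pid} died (${signal || code}); spawning a new one`);
    cluster.fork();
  });
} else {
  const app = new Koa();
  const router = new Router();

  router.get('/', (ctx) => {
    ctx.body = `Hello from worker ${process.pid}`;
  });

  router.post('/flip', async (ctx) => {
    await job();
    ctx.body = 'Image flipped!';
  });

  app.use(router.routes());
  app.listen(3000); // all workers share port 3000
}
```
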
As you can see, we also added a couple of event listeners in the isMaster statement:

The first one will tell us that a new worker has been spawned. The second one will create a new worker when one other worker crashes.

That way, the master process will only be responsible for creating new workers and orchestrating them. Every worker will serve an instance of Koa which will be accessible on the :3000 port.

Testing ten concurrent requests — with clustering

As you can see, we got our first response after about 10 seconds, and the last one after about 14 seconds. It’s an amazing improvement over the previous 40 second response time!

We made ten concurrent requests, and the Koa server took eight of them immediately. When the first worker sent its response to the client, it took one of the remaining requests and processed it!

Conclusion

Node.js has an amazing capacity for handling high loads, but it wouldn’t be wise to hold a request open until the server finishes an expensive job.

In fact, Node.js webservers can handle thousands of concurrent requests only if you immediately send a response to the client.

A best practice would be to add a pub/sub messaging interface using Redis or any other amazing tool. When a client sends a request, the server starts realtime communication with other services, which take charge of the expensive jobs.

Load balancers would also help a lot in splitting up high traffic loads.

Once again, technology is giving us endless possibilities, and we’re sure to find the right solution to scale our application to infinity and beyond!

Translated from: https://www.freecodecamp.org/news/how-to-scale-your-node-js-server-using-clustering-c8d43c656e8f/
