The Ultimate Node.js Production Checklist

Are you doing this Node thing right on production? Let's see some common mistakes people make running Node on production (coming straight from my own projects - like codedamn) and how they can be mitigated.

You can use this as your checklist when you're deploying Node apps to production. Since this is an article about production-ready practices, a lot of them won't apply when you're developing apps on your local system.

Run Node in cluster mode / separate Node processes

Remember that Node is single threaded. It can delegate a lot of things (like HTTP requests and filesystem read/writes) to the OS which handles it in a multithreaded environment. But still, the code YOU write, the application logic, always runs in a single thread.

By running in a single thread, your Node process is always limited to only a single core on your machine. So if you have a server with multiple cores, you're wasting computation power running Node just once on your server.

What does "running Node just once" mean? You see, operating systems have a scheduler built into them which is responsible for how the execution of processes is distributed across the CPUs of the machine. When you run only 2 processes on a 2-core machine, the OS determines it is best to run both of the processes on separate cores to squeeze out maximum performance.

A similar thing needs to be done with Node. You have two options at this point:

  1. Run Node in cluster mode - Cluster mode is an architecture which comes baked into Node itself. In simple words, Node forks more processes of its own and distributes load through a single master process.

  2. Run Node processes independently - This option is slightly different from the above in the sense that you now do not have a master process controlling the child Node processes. This means that when you spawn different Node processes, they'll run completely independent of each other. No shared memory, no IPC, no communication, nada.

According to a Stack Overflow answer, the latter (point 2) performs far better than the former (point 1) but is a little trickier to set up.

Why? Because in a Node app, not only is there application logic, but almost always when you're setting up servers in Node code you need to bind ports. And a single application codebase cannot bind the same port twice on the same OS.

This problem is, however, easily fixable. Environment variables, Docker containers, NGiNX frontend proxy, and so on are some of the solutions for this.

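For the NGiNX route, one common shape is an upstream block that load-balances across independent Node processes, each bound to its own port (the ports and names below are placeholder values):

```nginx
# Independent Node processes, each started with a different PORT env var.
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_backend;   # NGiNX round-robins across the processes
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
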
Rate Limiting your endpoints

Let's face it. Not everybody in the world has best intentions for your architecture. Sure, attacks like DDoS are simply very complicated to mitigate, and even giants like GitHub go down when something like that happens.

But the least you can do is prevent a script-kiddie from taking down your server just because you have an expensive API endpoint exposed from your server without any rate-limiting in place.

If you use Express with Node, there are 2 beautiful packages which work seamlessly together to rate limit traffic on Layer 7:

  1. Express Rate Limit - https://www.npmjs.com/package/express-rate-limit

  2. Express Slow Down - https://www.npmjs.com/package/express-slow-down

Express Slow Down actually adds an incremental delay to your requests instead of dropping them. This way legitimate users, if they DDoS you by accident (say, by furiously clicking buttons here and there), are simply slowed down rather than rate limited.

On the other hand, if there's a script-kiddie running scripts to take down the server, Express rate limiter monitors and rate limits that particular user, depending on the user IP, user account, or anything else you want.

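Under the hood, both packages count requests per key inside a time window. As a self-contained illustration of the idea (not the packages' actual internals), a fixed-window limiter keyed by IP might look like:

```javascript
// Minimal fixed-window rate limiter: allow at most `max` requests
// per `windowMs` milliseconds per key (e.g. a client IP).
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a fresh window: reset the counter.
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}
```

The real packages add persistence stores, headers, and configurable responses on top of this counting logic.
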
Rate limiting could (should!) be applied on Layer 4 as well, by IP address (Layer 4 means acting on traffic before inspecting its contents, i.e. before HTTP). If you want, you can set up an NGiNX rule which blocks abusive traffic and rejects a flood of requests coming from a single IP, thus saving your server processes from being overwhelmed.

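One way to reject a per-IP flood in NGiNX before it ever reaches Node is the limit_req module (note this works at the HTTP level; the zone name, rates, and upstream port below are placeholder values):

```nginx
# Allow each client IP roughly 10 requests/second, with a small burst allowance.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        limit_req zone=perip burst=20 nodelay;   # excess requests are rejected
        proxy_pass http://127.0.0.1:3000;
    }
}
```
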
Use a frontend server for SSL termination

Node provides out of the box support for SSL handshakes with the browser using the https server module combined with the required SSL certs.

But let's be honest here, your application should not be concerned with SSL in the first place anyway. This is not something the application logic should do. Your Node code should only be responsible for what happens with the request, not the pre-processing and post-processing of data coming in and out of your server.

SSL termination refers to converting traffic from HTTPS to HTTP. And there are much better tools available than Node for that. I recommend NGiNX or HAProxy for it. Both have free versions available which get the job done and offload SSL termination from Node.

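A minimal NGiNX termination block, assuming certificates already exist on disk (all paths, the domain name, and the Node port are placeholders), might look like:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:3000;        # plain HTTP to the Node process
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https; # so the app knows it was HTTPS
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
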
Use a frontend server for static file serving

Again, instead of using built in methods like express.static to serve static files, use frontend reverse proxy servers like NGiNX to serve static files from disk.

First of all, NGiNX can do that faster than Node (because it is built from scratch down to do only that). But it also offloads file serving from a single-threaded Node process which could use its clock cycles on something better.

Not only this – frontend proxy servers like NGiNX can also help you deliver content faster using GZIP compression. You can also set expiry headers, cache data, and much more, which is not something we should expect Node to do (however, Node can still do it).

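Putting those pieces together, a sketch of such a setup (paths and the upstream port are placeholders) could be:

```nginx
server {
    listen 80;

    # Serve /static/* straight from disk, bypassing Node entirely.
    location /static/ {
        root /var/www/myapp;               # files live in /var/www/myapp/static/
        gzip on;                           # compress text assets on the fly
        expires 7d;                        # set cache expiry headers
        add_header Cache-Control "public";
    }

    # Everything else goes to the Node process.
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```
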
Configure error handling

Proper error handling can save you hours of debugging and trying to reproduce difficult bugs. On the server, it is especially easy to set up an architecture for error handling because you're the one running it. I recommend tools like Sentry with Node, which records, reports, and emails you whenever the server crashes due to an error in the source code.

Once that is in place, it's time to restart the server when it crashes, so the whole site doesn't just stay down for hours until you manually bring it up again.

For this, you can use a process manager like PM2. Or even better, use a dockerized container environment with a policy like restart: always and proper memory and disk limits set up.

A Docker setup ensures that even if your container gets OOM-killed (runs out of memory), the process spins up again (which might not happen in a PM2 environment, as the OS might kill PM2 itself if there's a memory leak somewhere in a running process).

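A hedged sketch of such a setup as a Compose file (the image name and limits are placeholders; `mem_limit` is the v2-style syntax, Swarm deployments use `deploy.resources` instead):

```yaml
services:
  app:
    image: my-node-app:latest   # placeholder image name
    restart: always             # respawn the container after a crash or OOM kill
    mem_limit: 512m             # hard memory cap; exceeding it triggers an OOM kill
    ports:
      - "3000:3000"
```
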
Configure logs properly

All the answers lie in logs. Server hacks, server crashes, suspicious user behavior, etc. For that, you have to make sure that:

  1. Each and every request attempt is logged with the IP address/method of request/path accessed, basically as much information as you can log (except for private information like passwords and credit card information, of course)

  2. This can be achieved through the morgan package

  3. Set up file-stream logs in production instead of console output. This is faster, easier to inspect, and allows you to export logs to online log-viewing services.

  4. Not all log messages have equal weight. Some logs are just there for debugging, while the presence of others might indicate a pants-on-fire situation (like a server hack or unauthorized access). Use winston for logging at different levels.

  5. Set up log rotation so that your log files don't grow to gigabytes after a month or so.

  6. GZIP your log files after rotation. Text is cheap, highly compressible, and easy to store. You should never face problems with text logs as long as they are compressed and you're running a server with decent disk space (25GB+).

Conclusion

It is easy to adopt a few practices in production which could save you tears and hours of debugging later on. Make sure you follow these best practices, and let me know what you think by saying hi on my Twitter handle.

If you liked this article, let's meet on social media. Here's my Instagram and Twitter. I'm super active, and would love to have a chat! Let's connect.

Peace!

Mehul

Translated from: https://www.freecodecamp.org/news/node-js-production-checklist/
