Slow performance of Node.js running on AWS

2023-09-11 08:50:35 Author: 青春有张相似的脸

I am running a very simple RESTful API on AWS using Node.js. The API takes a request in the form of '/rest/users/jdoe' and returns the following (it's all done in memory, no database involved):

{
    username: 'jdoe',
    firstName: 'John',
    lastName: 'Doe'
}

The performance of this API on Node.js + AWS is horrible compared to the local network - only 9 requests/sec vs. 2,214 requests/sec on a local network. AWS is running a m1.medium instance whereas the local Node server is a desktop machine with an Intel i7-950 processor. Trying to figure out why such a huge difference in performance.

Benchmarks using Apache Bench are as follows:

Local network

10,000 requests with concurrency of 100/group

> ab -n 10000 -c 100 http://192.168.1.100:8080/rest/users/jdoe

Document Path:          /rest/users/jdoe
Document Length:        70 bytes

Concurrency Level:      100
Time taken for tests:   4.516 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2350000 bytes
HTML transferred:       700000 bytes
Requests per second:    2214.22 [#/sec] (mean)
Time per request:       45.163 [ms] (mean)
Time per request:       0.452 [ms] (mean, across all concurrent requests)
Transfer rate:          508.15 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.4      0       2
Processing:    28   45   7.2     44      74
Waiting:       22   43   7.5     42      74
Total:         28   45   7.2     44      74

Percentage of the requests served within a certain time (ms)
  50%     44
  66%     46
  75%     49
  80%     51
  90%     54
  95%     59
  98%     65
  99%     67
 100%     74 (longest request)

AWS

1,000 requests with concurrency of 100/group (10,000 requests would have taken too long)

C:\apps\apache-2.2.21\bin>ab -n 1000 -c 100 http://54.200.x.xxx:8080/rest/users/jdoe
Document Path:          /rest/users/jdoe
Document Length:        70 bytes

Concurrency Level:      100
Time taken for tests:   105.693 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      235000 bytes
HTML transferred:       70000 bytes
Requests per second:    9.46 [#/sec] (mean)
Time per request:       10569.305 [ms] (mean)
Time per request:       105.693 [ms] (mean, across all concurrent requests)
Transfer rate:          2.17 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       98  105   3.8    106     122
Processing:   103 9934 1844.8  10434   10633
Waiting:      103 5252 3026.5   5253   10606
Total:        204 10040 1844.9  10540   10736

Percentage of the requests served within a certain time (ms)
  50%  10540
  66%  10564
  75%  10588
  80%  10596
  90%  10659
  95%  10691
  98%  10710
  99%  10726
 100%  10736 (longest request)

Questions:

1. Connect time on AWS is 105 ms (mean) compared to 0 ms on the local network. I presume this is because it takes much more time to open a socket to AWS than to a server on the local network. Is there anything that can be done here for better performance under load, assuming requests come in from multiple machines around the world?
2. More serious is the server processing time - 45 ms for the local server compared to 9.9 seconds for AWS! I can't figure out what's going on here. The server is only pushing 9.46 requests/sec. That's peanuts!
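One way to attack the connect-time part of the first question is to reuse connections instead of opening a new socket per request. ApacheBench supports this with its standard `-k` (HTTP keep-alive) flag; the command below is a sketch against the same masked host used in the benchmarks above. Note that keep-alive only amortizes the ~105 ms connection setup; it does not by itself explain the 9.9-second processing times.

```shell
# Re-run the AWS benchmark with HTTP keep-alive enabled (-k) so each of the
# 100 concurrent workers pays the connection setup cost once, rather than
# on every one of its requests. Host is the masked address from above.
ab -k -n 1000 -c 100 http://54.200.x.xxx:8080/rest/users/jdoe
```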

Any insight into these issues much appreciated. I am nervous about putting a serious application on Node+AWS if it can't perform super fast on such a simple application.

For reference here's my server code:

var express = require('express');

var app = express();

app.get('/rest/users/:id', function(req, res) {
    var user = {
        username: req.params.id,
        firstName: 'John',
        lastName: 'Doe'
    };
    res.json(user);
});

app.listen(8080);
console.log('Listening on port 8080');

EDIT

A single request sent in isolation (-n 1 -c 1):

Requests per second:    4.67 [#/sec] (mean)
Time per request:       214.013 [ms] (mean)
Time per request:       214.013 [ms] (mean, across all concurrent requests)
Transfer rate:          1.07 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      104  104   0.0    104     104
Processing:   110  110   0.0    110     110
Waiting:      110  110   0.0    110     110
Total:        214  214   0.0    214     214

10 requests all sent concurrently (-n 10 -c 10):

Requests per second:    8.81 [#/sec] (mean)
Time per request:       1135.066 [ms] (mean)
Time per request:       113.507 [ms] (mean, across all concurrent requests)
Transfer rate:          2.02 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       98  103   3.4    102     110
Processing:   102  477 296.0    520     928
Waiting:      102  477 295.9    520     928
Total:        205  580 295.6    621    1033

Results using wrk

As suggested by Andrey Sidorov. The results are MUCH better - 2821 requests per second:

Running 30s test @ http://54.200.x.xxx:8080/rest/users/jdoe
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   137.04ms   48.12ms   2.66s    98.89%
    Req/Sec   238.11     27.97   303.00     88.91%
  84659 requests in 30.01s, 19.38MB read
  Socket errors: connect 0, read 0, write 0, timeout 53
Requests/sec:   2821.41
Transfer/sec:    661.27KB

So it certainly looks like the culprit is ApacheBench! Unbelievable!

Accepted answer

It's probably an ab issue (see also this question). There is nothing wrong with your server code. I suggest trying a benchmark with the wrk load-testing tool. Your example on my t1.micro:

wrk git:master ❯ ./wrk -t12 -c400 -d30s http://some-amazon-hostname.com/rest/users/10
Running 30s test @ http://some-amazon-hostname.com/rest/users/10
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   333.42ms  322.01ms   3.20s    91.33%
    Req/Sec   135.02     59.20   283.00     65.32%
  48965 requests in 30.00s, 11.95MB read
Requests/sec:   1631.98
Transfer/sec:    407.99KB