API Gateway Kong Study Notes (10): Deploying Kong in Production and Performance Testing Methods

Tags: kong 

Overview

In production it is better to deploy Kong's data plane on its own machine. That also makes it easier to run load tests and to analyze performance with flame graphs. Here the Kong control plane and data plane are separated: the data plane gets a dedicated machine and is only responsible for forwarding requests.

Below is a roughly usable performance testing method. The scenario is narrow: it only compares accessing the Pod directly with accessing it through Kong. It does not cover different types of applications, a comparison between Kong and Nginx-Ingress, or the effect of enabling different plugins.

Related notes

2019-05-06 16:28:56: Kong 1.1.x brought a major change, the db-less mode, which makes it possible to run without a database; see note 26. If you are just starting to learn Kong, begin directly with 1.x: 0.x is no longer maintained, and 0.15 is the last 0.x release.

The first 19 notes were written when I first started working with Kong, using version 0.14.1; I only half understood Kong back then, so those notes are fairly messy. From note 20 onward the notes come from a second round with Kong, using version 1.0.3, and they are somewhat better organized.

Changes to pay attention to when moving from 0.x to 1.x:

  1. Plugins are all written against the PDK;
  2. Features that were discouraged in 0.x have been removed;
  3. Everything goes through kong.db; the formerly separate DAO has been removed entirely, which makes the code cleaner and clearer.

Correction

The load testing tool used in the method below is ab. ab does not support HTTP/1.1 and Kong does not support HTTP/1.0, so ab sends HTTP/1.0 requests to Kong; even with keep-alive enabled, Kong still closes the connection.

Plain nginx does not behave this way, so with keep-alive enabled, comparing the ab results of nginx and Kong is meaningless: in the Kong test, keep-alive never takes effect.

Why Kong closes HTTP/1.0 connections that requested keep-alive is still an open question; it needs to be investigated whether a configuration setting is responsible.
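One way to observe this from the requester is to send two HTTP/1.0 requests with keep-alive in a single curl invocation and check whether the connection is reused. This is only a sketch; the address and Host header follow the setup below, and the exact verbose output depends on the curl version:

$ curl -v --http1.0 -H "Host: echo.com" -H "Connection: keep-alive" \
      -o /dev/null -o /dev/null \
      http://192.168.33.12:8000/ http://192.168.33.12:8000/
# In the -v output, "Re-using existing connection" means keep-alive worked;
# a second "Connected to ..." line means the first connection was closed.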

Until that is resolved, siege, which supports HTTP/1.1, can be used for load testing instead.

Testing with siege

The tests were run between two virtual machines on the same physical host, so the absolute numbers may not be accurate; what matters is the method.

The iperf result between the two VMs:

$ iperf -c 192.168.33.12
------------------------------------------------------------
Client connecting to 192.168.33.12, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.33.11 port 42278 connected with 192.168.33.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.29 GBytes  2.82 Gbits/sec

The main nginx configuration:

worker_processes 2;
worker_rlimit_nofile 10000;

events {
    worker_connections 10000;
}

upstream echo_upstream {
    server 172.16.128.20:8080;
    keepalive 10000;
}

server {
    listen       7000 ;
    listen       [::]:7000 ;
    server_name  echo.com;                         # domain configured in the local hosts file
    keepalive_requests  100000000;

    location / {
        proxy_pass  http://echo_upstream;
    }
}
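Note that nginx only keeps upstream connections alive if the proxied location also sets proxy_http_version 1.1 and clears the Connection header; the keepalive directive in the upstream block alone is not enough. A minimal sketch of the adjusted location block (not the configuration the numbers above were measured with):

    location / {
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # do not forward "Connection: close" upstream
        proxy_pass  http://echo_upstream;
    }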

The main Kong (nginx) configuration:

worker_processes 2;
daemon off;

pid pids/nginx.pid;
error_log logs/error.log info;

worker_rlimit_nofile 10000;

events {
    worker_connections 10000;
}

http {
    keepalive_requests 100000000;
    include 'nginx-kong.conf';
}
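This looks like a customized copy of the nginx configuration that Kong renders into its prefix directory (it includes nginx-kong.conf, which Kong regenerates from kong.conf at startup). One way to apply such a file is to keep it as a custom template and pass it to kong start; a sketch only, with placeholder paths:

$ kong start -c /etc/kong/kong.conf --nginx-conf /etc/kong/custom_nginx.template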

Load test against nginx:

$ siege -c 100 -b -t 1M -H "host: echo.com"  192.168.33.12:7000
** SIEGE 4.0.2
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege...
Transactions:		      289018 hits
Availability:		      100.00 %
Elapsed time:		       59.49 secs
Data transferred:	      133.40 MB
Response time:		        0.02 secs
Transaction rate:	     4858.26 trans/sec
Throughput:		        2.24 MB/sec
Concurrency:		       99.77
Successful transactions:      289018
Failed transactions:	           0
Longest transaction:	        0.18
Shortest transaction:	        0.00

Load test against Kong:

$ siege -c 100 -b -t 1M -H "host: echo.com"  192.168.33.12:8000
** SIEGE 4.0.2
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege...
Transactions:		      259773 hits
Availability:		      100.00 %
Elapsed time:		       59.15 secs
Data transferred:	      155.08 MB
Response time:		        0.02 secs
Transaction rate:	     4391.77 trans/sec
Throughput:		        2.62 MB/sec
Concurrency:		       99.72
Successful transactions:      259773
Failed transactions:	           0
Longest transaction:	        1.16
Shortest transaction:	        0.00

In this environment of local virtual machines, at 100 concurrent connections the transaction rate through Kong is about 9%~10% lower than through nginx ((4858.26 - 4391.77) / 4858.26 ≈ 9.6%), and that is with no plugins enabled.

Test results

Note: the results here and in the rest of this document were measured with ab. The method itself is meaningful, but the numbers are not, for the reason explained in the Correction section above. The same procedure can be followed with the load testing tool swapped for siege.

The results:

Scenario                                   Requests/sec    KBytes/sec
----------------------------------------------------------------------
Bandwidth, requester -> Kong               N/A             74600.00
Bandwidth, Kong -> Pod                     N/A             43400.00
No concurrency, requester -> Pod           2760.69         1515.01
No concurrency, via Kong to Pod            722.00          539.39
100 concurrent, requester -> Pod           7192.25         3948.62
100 concurrent, via Kong to Pod            5812.13         4344.57

Accessing the Pod directly from the requester and accessing it directly from the machine Kong runs on give essentially the same results, so it can be concluded that there is no extra interference on the path from Kong to the Pod.

Accessing the application through Kong from the Kong machine itself handles 985.80 requests per second on average with a single connection (versus 722.00 when the requester goes through Kong), and 5339.20 requests per second at 100 concurrent connections (versus 5812.13 from the requester through Kong).
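These on-host numbers can be reproduced by running ab on the Kong machine itself against the local proxy port, for example (a sketch, assuming the proxy also listens on 127.0.0.1:8000 there):

[root@kong ~]# ab -k -n 100000 -c 1 -H "Host: echo.com" http://127.0.0.1:8000/
[root@kong ~]# ab -k -n 100000 -c 100 -H "Host: echo.com" http://127.0.0.1:8000/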

Test environment

Requester: 8C16G, sends requests to Kong with ab and other load testing tools

Kong: 4C8G, runs the Kong data plane (version 0.14.1) and answers the requests

Target application: mirrorgooglecontainers/echoserver:1.8

17 plugins are installed:

[root@request]# kubectl get kp --all-namespaces
NAMESPACE   NAME                             AGE
kong-test   correlationid-test               21d
kong-test   file-log-test                    21d
kong-test   hebin-test-key-auth-plu          23d
kong-test   http-log-test                    2d
kong-test   http-repeat-test                 1d
kong-test   my-prometheus                    21d
kong-test   ratelimiting-plu-test            22d
kong-test   request-size-limiting-plu-test   22d
kong-test   request-terminate-plu-test       22d
kong-test   request-tran-test                21d
kong-test   response-terminate-plu-test      22d
kong-test   response-tran-test               21d
kong-test   set-path                         22d
kong-test   svc-virt-plu-test                21d
kong-test   syslog-log-test                  21d
kong-test   udp-log-test                     21d

Bandwidth testing method

The bandwidth from the requester to Kong and the bandwidth from Kong to the containers in Kubernetes are hard ceilings that cannot be exceeded. If the transfer rate during a load test approaches either of these two figures, the test has reached its limit.

Bandwidth measurements depend heavily on packet size: with very small packets the measured bandwidth drops noticeably, so the bandwidth test should use a packet size close to what the application traffic actually produces.

The responses returned through Kong are not the same size as those from direct access, because Kong adds some headers to the response. As long as the bandwidth consumed during the load test is far below the bandwidth measured here, this small difference in packet size can be ignored.
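To pick an iperf payload size close to the real traffic, the size of a single response can be measured with curl, once directly against the Pod and once through Kong. A sketch; the -w variables print the header size and the downloaded body size in bytes:

$ curl -s -o /dev/null -w "%{size_header} %{size_download}\n" -H "Host: echo.com" http://192.168.7.2:8080/
$ curl -s -o /dev/null -w "%{size_header} %{size_download}\n" -H "Host: echo.com" http://192.168.33.12:8000/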

Bandwidth from the requester to Kong

Measure the bandwidth from the requester to Kong with iperf (see: Using iperf, netperf, and other network performance testing tools).

Start iperf on the Kong machine:

$ iperf -p 5001 -s

Run the test from the requester. With a TCP payload size of 434 bytes, the bandwidth between the requester and Kong is 597 Mbits/sec:

[root@request]# iperf -p 5001 -c 192.168.33.12 -t 120 -l 434
------------------------------------------------------------
Client connecting to 192.168.33.12, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.199.154 port 40322 connected with 192.168.33.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.0 sec  8.35 GBytes   597 Mbits/sec

Bandwidth between Kong and a Pod in the Kubernetes cluster

Create an iperf-server Pod in the Kubernetes cluster:

kubectl create -f https://raw.githubusercontent.com/introclass/kubernetes-yamls/master/all-in-one/iperf-server-all-in-one.yaml

The Pod's IP address is 192.168.7.8:

$ kubectl -n demo-iperf get pod -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE
iperf-server-6bd95d8bc-76v82   2/2       Running   0          2m        192.168.7.8   10.10.192.35

The Service's ClusterIP is 10.254.136.179:

$ kubectl -n demo-iperf get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
iperf-server   ClusterIP   10.254.136.179   <none>        5001/TCP,22/TCP   2m

With a payload size of 434 bytes, the bandwidth from Kong to the Pod is 347 Mbits/sec:

[root@kong ~]# iperf -p 5001 -c  192.168.7.8 -t 120 -l 434
------------------------------------------------------------
Client connecting to 192.168.7.8, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.38.0 port 35790 connected with 192.168.7.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.0 sec  4.85 GBytes   347 Mbits/sec

While at it, also measure the bandwidth when going through the ClusterIP:

[root@kong ~]# iperf -p 5001 -c 10.254.136.179  -t 120 -l 434
------------------------------------------------------------
Client connecting to 10.254.136.179, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.33.12 port 59508 connected with 10.254.136.179 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.0 sec  4.64 GBytes   332 Mbits/sec

Accessing the application through Kong vs. accessing it directly

Create the application in Kubernetes:

$ kubectl create -f https://raw.githubusercontent.com/introclass/kubernetes-yamls/master/all-in-one/echo-all-in-one.yaml

The bound domain is echo.com, and the Pod's IP is 192.168.7.2:

[root@request tmp]# kubectl -n demo-echo get ingress -o wide
NAME           HOSTS      ADDRESS         PORTS     AGE
ingress-echo   echo.com   10.10.173.203   80        1h

[root@request tmp]# kubectl -n demo-echo get pod -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE
echo-7f4c564c84-7pds2   2/2       Running   0          1h        192.168.7.2   10.10.192.35

Without concurrency

Compare the results without concurrency, with requests issued from the requester.

Direct access to the Pod from the requester

Accessing the Pod directly from the requester with no concurrency handles 2760.69 requests per second on average, using 1515.01 KBytes/s of bandwidth, far from saturation.

[root@request]# ab -k -n 100000 -c 1 -H "Host: echo.com" http://192.168.7.2:8080/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.7.2 (be patient)
...
Finished 100000 requests


Server Software:        echoserver
Server Hostname:        192.168.7.2
Server Port:            8080

Document Path:          /
Document Length:        415 bytes

Concurrency Level:      1
Time taken for tests:   36.223 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    99000
Total transferred:      56195000 bytes
HTML transferred:       41500000 bytes
Requests per second:    2760.69 [#/sec] (mean)
Time per request:       0.362 [ms] (mean)
Time per request:       0.362 [ms] (mean, across all concurrent requests)
Transfer rate:          1515.01 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       8
Waiting:        0    0   0.1      0       8
Total:          0    0   0.2      0       8

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      1
  99%      1
 100%      8 (longest request)

Access to the Pod through Kong from the requester

Accessing the Pod through Kong from the requester with no concurrency handles 722.00 requests per second, using 539.39 KBytes/s of bandwidth, far from saturation:

[root@request]# ab -k -n 100000 -c 1 -H "Host: echo.com" http://192.168.33.12:8000/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.33.12 (be patient)
...
Finished 100000 requests


Server Software:        echoserver
Server Hostname:        192.168.33.12
Server Port:            8000

Document Path:          /
Document Length:        558 bytes

Concurrency Level:      1
Time taken for tests:   138.504 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      76500004 bytes
HTML transferred:       55800000 bytes
Requests per second:    722.00 [#/sec] (mean)
Time per request:       1.385 [ms] (mean)
Time per request:       1.385 [ms] (mean, across all concurrent requests)
Transfer rate:          539.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0      18
Processing:     1    1   0.4      1      18
Waiting:        0    1   0.4      1      17
Total:          1    1   0.5      1      19

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      2
  95%      2
  98%      2
  99%      3
 100%     19 (longest request)

100 concurrent connections, 100,000 requests in total

Direct access to the Pod from the requester

On average 7192.25 requests per second, using 3948.62 KBytes/s of bandwidth, far from saturation.

[root@request]#  ab -k -n 100000 -c 100 -H "Host: echo.com" http://192.168.7.2:8080/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.7.2 (be patient)
...
Finished 100000 requests


Server Software:        echoserver
Server Hostname:        192.168.7.2
Server Port:            8080

Document Path:          /
Document Length:        415 bytes

Concurrency Level:      100
Time taken for tests:   13.898 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    99049
Total transferred:      56195245 bytes
HTML transferred:       41500000 bytes
Requests per second:    7195.25 [#/sec] (mean)
Time per request:       13.898 [ms] (mean)
Time per request:       0.139 [ms] (mean, across all concurrent requests)
Transfer rate:          3948.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       5
Processing:     4   14  11.2     13     584
Waiting:        1   14  11.2     13     584
Total:          4   14  11.3     13     588

Percentage of the requests served within a certain time (ms)
  50%     13
  66%     14
  75%     14
  80%     14
  90%     15
  95%     16
  98%     17
  99%     24
 100%    588 (longest request)

Access to the Pod through Kong from the requester

On average 5812.13 requests per second, using 4344.57 KBytes/s of bandwidth, far from saturation.

[root@request]#  ab -k -n 100000 -c 100 -H "Host: echo.com" http://192.168.33.12:8000/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.33.12 (be patient)
...
Finished 100000 requests


Server Software:        echoserver
Server Hostname:        192.168.33.12
Server Port:            8000

Document Path:          /
Document Length:        558 bytes

Concurrency Level:      100
Time taken for tests:   17.205 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      76544051 bytes
HTML transferred:       55800000 bytes
Requests per second:    5812.13 [#/sec] (mean)
Time per request:       17.205 [ms] (mean)
Time per request:       0.172 [ms] (mean, across all concurrent requests)
Transfer rate:          4344.57 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   2.0      2      11
Processing:     1   15   8.0     14     148
Waiting:        1   14   8.0     13     148
Total:          1   17   7.9     16     151

Percentage of the requests served within a certain time (ms)
  50%     16
  66%     18
  75%     19
  80%     19
  90%     21
  95%     24
  98%     30
  99%     46
 100%    151 (longest request)

Comparison with nginx

Deploy nginx on the same machine and configure it to proxy to the Pod:

[root@localhost ~]# cat /etc/nginx/conf.d/echo.com.conf
server {
    listen       7000 ;
    listen       [::]:7000 ;
    server_name  echo.com;                         # domain configured in the local hosts file

    location / {
      proxy_pass http://172.16.128.11:8080;
    }
}

Compare the ab load test results of nginx and Kong:

# load test nginx
$ ab -k -n 100000 -c 100 -H "Host: echo.com" http://192.168.33.12:7000/

# load test kong
$ ab -k -n 100000 -c 100 -H "Host: echo.com" http://192.168.33.12:8000/
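Given the correction at the beginning, a more meaningful version of this comparison uses siege with the same concurrency; a sketch reusing the flags from the earlier siege runs:

# load test nginx
$ siege -c 100 -b -t 1M -H "host: echo.com"  192.168.33.12:7000

# load test kong
$ siege -c 100 -b -t 1M -H "host: echo.com"  192.168.33.12:8000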

References

  1. How to load test a web application server and evaluate its efficiency?
  2. Using iperf, netperf, and other network performance testing tools
  3. Web Development Platform OpenResty (3): Performance analysis with flame graphs

kong

  1. API Gateway Kong Study Notes (26): Kong 1.1 introduces db-less mode, deployment without a database
  2. API Gateway Kong Study Notes (25): Revisiting the Kong Ingress Controller
  3. API Gateway Kong Study Notes (24): Enabling Kong plugins in Kubernetes
  4. API Gateway Kong Study Notes (23): How Kong 1.0.3's plugin mechanism is implemented
  5. API Gateway Kong Study Notes (22): A quick walkthrough of the Kong 1.0.3 source code
  6. API Gateway Kong Study Notes (21): Setting up a Kong development environment (IntelliJ IDEA)
  7. API Gateway Kong Study Notes (20): Installing Kong 1.0.3 and connecting it to Kubernetes
  8. API Gateway Kong Study Notes (19): Kong performance testing (compared with Nginx)
  9. API Gateway Kong Study Notes (18): Detailed description of the Kong Ingress Controller CRDs
  10. API Gateway Kong Study Notes (17): Using the Kong Ingress Controller
  11. API Gateway Kong Study Notes (16): How Kong forwards requests
  12. API Gateway Kong Study Notes (15): Details of the KongIngress definition
  13. API Gateway Kong Study Notes (14): Overview and usage of Kong's Admin API
  14. API Gateway Kong Study Notes (13): Analysis of how records are inserted into the database
  15. API Gateway Kong Study Notes (12): Analysis of the schema in a plugin directory
  16. API Gateway Kong Study Notes (11): Writing your own plugin
  17. API Gateway Kong Study Notes (10): Deploying Kong in Production and Performance Testing Methods
  18. API Gateway Kong Study Notes (9): Kong's support for WebSocket
  19. API Gateway Kong Study Notes (8): How the Kong Ingress Controller is implemented
  20. API Gateway Kong Study Notes (7): Invocation and implementation of data-plane plugins
  21. API Gateway Kong Study Notes (6): Events, initialization, and plugin loading in the Kong data plane
  22. API Gateway Kong Study Notes (5): Feature overview and plugin usage: security plugins
  23. API Gateway Kong Study Notes (4): Feature overview and plugin usage: authentication plugins
  24. API Gateway Kong Study Notes (3): Feature overview and plugin usage: basic usage
  25. API Gateway Kong Study Notes (2): How to integrate Kong with Kubernetes
  26. API Gateway Kong Study Notes (1): Getting started with Nginx, OpenResty, and Kong: basic concepts, installation, and deployment
  27. API Gateway Kong Study Notes (0): Problems encountered in use and their solutions
