From: zimeiw <zimeiw@163.com>
To: dev@dpdk.org
Subject: [dpdk-dev] tcp/ip stack based on dpdk is ready
Date: Wed, 2 Mar 2016 11:54:23 +0800 (CST)
Message-ID: <22e2c9a4.6533.15335765ead.Coremail.zimeiw@163.com>




hi,


The TCP/IP stack is developed based on DPDK.
   1. TCP/IP stack and APP deployment.
         |-------|       |-------|       |-------|
         |  APP  |       |  APP  |       |  APP  |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |-------|       |-------|       |-------|
             |               |               |
--------------------------------------------------
netdpsock    |               |               |          
             fd              fd              fd
             |               |               |
--------------------------------------------------
netdp        |               |               |
         |-------|       |-------|       |-------|
         | TCP   |       |  TCP  |       | TCP   |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |---------------------------------------|       
         |               IP/ARP/ICMP             |
         |---------------------------------------|       
         |       |       |       |       |       |
         |LCORE0 |       |LCORE1 |       |LCORE2 |
         |-------|       |-------|       |-------|
             |               |               |
             ---------------RSS---------------
                             | 
         |---------------------------------------| 
         |                  NIC                  | 
         |---------------------------------------| 
The NIC distributes packets to different lcores based on RSS, so packets of the same TCP flow are handled on the same lcore.
Each lcore has its own TCP stack, so no data is shared between lcores and no locks are needed.
IP/ARP/ICMP are shared between lcores.
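
As a rough illustration, the NIC-side RSS setup can be done with the plain DPDK ethdev API as below; port id, queue counts and error handling are simplified, and netdp's real initialization (in opendp_main.c) may differ.

#include <string.h>
#include <rte_ethdev.h>

/* One RX queue and one TX queue per lcore; RSS hashes each packet's
 * flow so that all packets of a TCP flow land in the same queue. */
static int
setup_port_rss(uint8_t port_id, uint16_t nb_lcores)
{
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_RSS;        /* enable RSS on RX */
        conf.rx_adv_conf.rss_conf.rss_key = NULL;   /* use the default key */
        conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_TCP;

        ret = rte_eth_dev_configure(port_id, nb_lcores, nb_lcores, &conf);
        /* rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() per queue and
         * rte_eth_dev_start() would follow; omitted here. */
        return ret;
}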
When an APP process runs as a TCP server, it listens on only one lcore and accepts TCP connections from that lcore, so the number of APP processes shall be no less than the number of lcores. The APP processes are deployed on the lcores automatically and evenly.
When an APP process runs as a TCP client, it can communicate with every lcore; the TCP connection can be placed on a specified lcore automatically.
APP processes can bind the same port if reuseport is enabled, and then accept TCP connections in round-robin order, as in the sketch below.
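
A minimal sketch of that reuseport binding with the plain BSD calls, assuming netdp's socket layer honors SO_REUSEPORT the same way:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Each APP process creates its own listening socket on the same port;
 * incoming connections are then spread over the processes round robin. */
static int
listen_reuseport(uint16_t port)
{
        int one = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        if (fd < 0)
                return -1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 128) < 0)
                return -1;
        return fd;      /* accept() as usual */
}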
If the NIC doesn't support multiple queues or RSS, opendp_main.c shall be enhanced: reserve one lcore to receive and send packets on the NIC, and distribute them to the lcores of the netdp TCP stack by software RSS, roughly as sketched below.
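
That fallback could look roughly like this: the reserved dispatch lcore hashes the TCP 4-tuple with rte_softrss() from rte_thash.h and pushes the mbuf onto a per-lcore ring. The ring array and key below are made-up names for the sketch.

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>
#include <rte_ring.h>
#include <rte_thash.h>
#include <rte_byteorder.h>

extern struct rte_ring *stack_ring[];  /* one ring per TCP-stack lcore */
extern uint8_t rss_key[40];            /* Toeplitz key, e.g. the default */

/* Same 4-tuple -> same stack lcore, mirroring what hardware RSS does. */
static void
dispatch_pkt(struct rte_mbuf *m, unsigned nb_stack_lcores)
{
        struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
                        sizeof(struct ether_hdr));
        struct tcp_hdr *tcp = (struct tcp_hdr *)((char *)ip +
                        ((ip->version_ihl & 0x0f) << 2));
        uint32_t tuple[3];

        tuple[0] = rte_be_to_cpu_32(ip->src_addr);
        tuple[1] = rte_be_to_cpu_32(ip->dst_addr);
        tuple[2] = ((uint32_t)rte_be_to_cpu_16(tcp->src_port) << 16) |
                   rte_be_to_cpu_16(tcp->dst_port);

        rte_ring_enqueue(stack_ring[rte_softrss(tuple, 3, rss_key) %
                                    nb_stack_lcores], m);
}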
   2. netdpsock is compatible with BSD sockets, so it is easy to port an APP to run on the netdp stack; see the server sketch below the links.
nginx has already been ported to run on netdp, with only a few code changes. Link: https://github.com/opendp/dpdk-nginx
redis has also been ported. Link: https://github.com/opendp/dpdk-redis
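
Because the semantics match BSD sockets, porting is mostly relinking against the netdpsock library. A minimal server on top of the listen_reuseport() sketch above; the loop is the same whether the calls go to the kernel or to netdpsock:

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

static int
run_echo_server(uint16_t port)
{
        char buf[2048];
        int ls = listen_reuseport(port);       /* from the sketch above */

        if (ls < 0)
                return -1;
        for (;;) {
                int c = accept(ls, NULL, NULL);
                ssize_t n;

                if (c < 0)
                        continue;
                while ((n = recv(c, buf, sizeof(buf), 0)) > 0)
                        send(c, buf, n, 0);    /* echo the payload back */
                close(c);
        }
}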


   3. Performance.
One lcore, one HTTP server, ab testing:
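ab was run with 30000 requests at a concurrency of 500, i.e. something like:

ab -n 30000 -c 500 http://<server_ip>/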

Concurrency Level:      500
Time taken for tests:   0.642 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      4530000 bytes
HTML transferred:       1890000 bytes
Requests per second:    46695.59 [#/sec] (mean)
Time per request:       10.708 [ms] (mean)
Time per request:       0.021 [ms] (mean, across all concurrent requests)
Transfer rate:          6885.78 [Kbytes/sec] received
One lcore, one nginx server, ab testing:
Concurrency Level:      500
Time taken for tests:   0.965 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      25320000 bytes
HTML transferred:       18360000 bytes
Requests per second:    31092.43 [#/sec] (mean)
Time per request:       16.081 [ms] (mean)
Time per request:       0.032 [ms] (mean, across all concurrent requests)
Transfer rate:          25626.97 [Kbytes/sec] received
One lcore, one redis server, redis-benchmark testing:
root@h163:~/dpdk-redis# ./src/redis-benchmark -h 2.2.2.2  -p 6379 -n 100000 -c 50 -q
PING_INLINE: 86655.11 requests per second
PING_BULK: 90497.73 requests per second
SET: 84317.03 requests per second
GET: 85106.38 requests per second
INCR: 86580.09 requests per second
LPUSH: 83263.95 requests per second
LPOP: 83612.04 requests per second
SADD: 85034.02 requests per second
SPOP: 86430.43 requests per second
LPUSH (needed to benchmark LRANGE): 84245.99 requests per second
LRANGE_100 (first 100 elements): 46948.36 requests per second
LRANGE_300 (first 300 elements): 19615.54 requests per second
LRANGE_500 (first 450 elements): 11584.80 requests per second
LRANGE_600 (first 600 elements): 10324.18 requests per second
MSET (10 keys): 66401.06 requests per second
Multi-core TCP performance has not been tested yet, for lack of test tools and a test environment.


For detailed test results, please refer to https://github.com/opendp/dpdk-odp


--
Best Regards,
zimeiw
