* [dpdk-dev] [RFC PATCH 0/2] performance utility in testpmd
@ 2016-04-20 22:43 Zhihong Wang
From: Zhihong Wang @ 2016-04-20 22:43 UTC
  To: dev

This RFC patch proposes a general purpose forwarding engine for testpmd,
named "portfwd", to enable performance analysis and tuning for poll mode
drivers in vSwitching scenarios.


Problem statement
-----------------

vSwitching is more I/O bound than compute bound in many cases, since it
involves a lot of LLC/cross-core memory accesses.

In order to reveal memory/cache behavior in real usage scenarios and enable
efficient performance analysis and tuning for vSwitching, DPDK needs a
sample application that supports traffic flow close to real deployment,
e.g. multi-tenancy, service chaining.

There is currently a vhost sample application that enables simple
vSwitching scenarios, but it comes with several limitations:

   1) Traffic flow is too simple and not flexible

   2) Switching based on MAC/VLAN only

   3) Not enough performance metrics


Proposed solution
-----------------

The testpmd sample application is a good choice: it's a powerful poll mode
driver management framework that hosts various forwarding engines.

Now with the vhost pmd feature it can also handle vhost devices; only a
new forwarding engine is needed to make use of it.

portfwd is implemented to this end.

Features of portfwd:

   1) Build up traffic from simple rx/tx to complex scenarios easily

   2) Rich performance statistics for all ports

   3) Core affinity manipulation

   4) Commands for run time configuration

Note that portfwd has fair performance, but it is not meant for getting
the "maximum" numbers:

   1) It buffers packets for burst send efficiency analysis, which
      increases latency

   2) It touches the packet header and collects performance statistics,
      which adds overhead

These "extra" overheads are actually what happens in real applications.

Several new commands are implemented for portfwd:

   1) set fwd port

      switch forwarding engine to portfwd

   2) show route

      show port info and forwarding rules for portfwd

   3) set route <srcport> <dstport>

      packets from <srcport> will be dispatched to <dstport>

   4) set route <srcport> ip

      packets from <srcport> will be dispatched based on dst ip

   5) set ip <srcport> <num0> <num1> <num2> <num3>

      set the ip addr for <srcport>; portfwd will use this ip addr for
      ip-based routing

   6) set affinity <fsid> <lcid>

      forwarding stream <fsid> will be handled by core <lcid>
      (info can be read from "show route")

   7) show port perf all

      show perf stats (rx/tx cycles, burst size distribution, tx pktloss)
      of each port

   8) set drain <ns>

      set the drain interval for flushing buffered packets that were not
      sent because the buffer is not full (0 to disable)


Implementation details
----------------------

To enable flexible traffic flow setup, each port has 2 ways to forward
packets in portfwd:

   1) Forward based on dst ip

      For ip based forwarding, portfwd scans each packet to get the dst ip
      for dst port mapping.

      A simple suffix mapping method is used for dst ip based forwarding:
      a macro IPV4_ROUTE_MASK specifies how many (last) bits of the dst
      ip are used for hashing (see the sketch after this list).

      It is recommended to avoid conflicts by setting a proper
      IPV4_ROUTE_MASK and/or a different ip ending for each port;
      otherwise performance may suffer.

   2) Forward to a fixed port

      For fixed port forwarding, portfwd still scans each packet on purpose
      to simulate the impact of packet analysis behavior in real scenarios.
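
As an illustration, below is a minimal sketch of the suffix-mask lookup
described in 1) above. All names (route_table, portfwd_lookup) and the
mask width are assumptions for illustration, not the actual patch code:

   #include <stdint.h>

   #define IPV4_ROUTE_MASK 8                       /* last 8 bits of dst ip */
   #define ROUTE_TABLE_SIZE (1 << IPV4_ROUTE_MASK)

   /* dst ip suffix -> dst port, filled by "set ip" / "set route <p> ip" */
   static uint16_t route_table[ROUTE_TABLE_SIZE];

   static inline uint16_t
   portfwd_lookup(uint32_t dst_ip)
   {
           /* e.g. 192.168.1.1 -> bucket 1, 192.168.1.2 -> bucket 2 */
           return route_table[dst_ip & (ROUTE_TABLE_SIZE - 1)];
   }

With an 8-bit mask, two ports whose ip addresses share the same last
octet (say 192.168.1.1 and 10.0.0.1) would hash to the same bucket,
which is why distinct ip endings per port are recommended above.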

After dst ports are identified, packets are enqueued to a buffer which is
burst sent when full. Packet buffers are kept per src port, so there is no
contention at the enqueue stage.

There is a timeout interval to drain all buffers, which can be configured
or disabled.

A spinlock is used per dst port & queue to resolve conflicts.
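
For the buffering, drain and locking logic above, here is a minimal
sketch assuming DPDK's rte_eth_tx_burst()/rte_spinlock API; the struct
and function names (fwd_buf, flush_buf, buf_pkt) are illustrative only:

   #include <rte_ethdev.h>
   #include <rte_cycles.h>
   #include <rte_spinlock.h>

   #define FWD_BURST 32

   struct fwd_buf {
           struct rte_mbuf *pkts[FWD_BURST];
           uint16_t count;
           uint64_t last_tsc;      /* time of last flush, for draining */
   };

   /* one lock per dst port, shared by all src ports */
   static rte_spinlock_t tx_lock[RTE_MAX_ETHPORTS];

   static void
   flush_buf(struct fwd_buf *buf, uint8_t dst_port, uint16_t dst_queue)
   {
           uint16_t sent;

           rte_spinlock_lock(&tx_lock[dst_port]);
           sent = rte_eth_tx_burst(dst_port, dst_queue,
                                   buf->pkts, buf->count);
           rte_spinlock_unlock(&tx_lock[dst_port]);

           /* unsent packets are freed and counted as tx pktloss */
           while (sent < buf->count)
                   rte_pktmbuf_free(buf->pkts[sent++]);
           buf->count = 0;
           buf->last_tsc = rte_rdtsc();
   }

   static void
   buf_pkt(struct fwd_buf *buf, struct rte_mbuf *m,
           uint8_t dst_port, uint16_t dst_queue, uint64_t drain_tsc)
   {
           buf->pkts[buf->count++] = m;
           /* burst send when full, or when the drain interval expires */
           if (buf->count == FWD_BURST ||
               (drain_tsc != 0 && rte_rdtsc() - buf->last_tsc > drain_tsc))
                   flush_buf(buf, dst_port, dst_queue);
   }

This matches the contention model described above: enqueue is per src
port and lock-free, and only the tx burst itself is serialized.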


Use case examples
-----------------

Below are 3 examples that show how to use portfwd to build traffic flows
in the host (guest traffic can be built likewise):

   1) PVP test: NIC-VM-NIC
      *  2 VMs each with 2 vhost ports: port 0, 1 and 2, 3
      *  1 NIC with 2 ports: port 4, 5
      *  Traffic from 4 goes to 0 and 2, and back from 1, 3 to 5
      Commands:
         set fwd port
         set ip 0 192 168 1 1
         set ip 2 192 168 1 2
         set route 4 ip (Make sure traffic has the right dst ip)
         set route 1 5
         set route 3 5
         set drain 0
         show route

   2) PVVP test: NIC-VM-VM-NIC
      *  2 VMs each with 2 vhost ports: port 0, 1 and 2, 3
      *  1 NIC with 2 ports: port 4, 5
      *  Traffic from 4 goes to 0, and 1 to 2, finally 3 to 5
      Commands:
         set fwd port
         set route 4 0
         set route 1 2
         set route 3 5
         set drain 0
         show route

   3) PVP bi-directional test: NIC-VM-NIC
      *  1 VM with 2 vhost ports: port 0, 1
      *  1 NIC with 2 ports: port 2, 3
      *  Traffic from 0 to 2, 1 to 3, and 2 to 0, 3 to 1
      Commands:
         set fwd port
         set route 0 2
         set route 2 0
         set route 1 3
         set route 3 1
         set drain 0
         show route

For the PVP bi-directional test, the host testpmd can be launched like this:

./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 --socket-mem 4096,0
   --vdev 'eth_vhost0,iface=/tmp/sock0,queues=2'
   --vdev 'eth_vhost1,iface=/tmp/sock1,queues=2'
   -- -i --rxq=2 --txq=2 --rss-ip --nb-cores=2


Zhihong Wang (2):
  testpmd: add portfwd engine
  testpmd: add portfwd commands

 app/test-pmd/Makefile  |   1 +
 app/test-pmd/cmdline.c | 279 ++++++++++++++++++++++++++++++++-
 app/test-pmd/config.c  | 408 ++++++++++++++++++++++++++++++++++++++++++++++-
 app/test-pmd/portfwd.c | 418 +++++++++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  19 +++
 app/test-pmd/testpmd.h |  62 ++++++++
 6 files changed, 1177 insertions(+), 10 deletions(-)
 create mode 100644 app/test-pmd/portfwd.c

-- 
2.5.0
