From: fengchengwen <fengchengwen@huawei.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Thomas Monjalon <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: add one example of DPI ?
Date: Tue, 29 Apr 2025 09:14:51 +0800
Message-ID: <15eea4ec-08d0-4b1a-92dc-a8dd7945d539@huawei.com>
In-Reply-To: <20250428084901.0aeeee04@hermes.local>

On 2025/4/28 23:49, Stephen Hemminger wrote:
> On Mon, 28 Apr 2025 16:20:22 +0800
> fengchengwen <fengchengwen@huawei.com> wrote:
> 
>> Hi all,
>>
>> Currently, we support performance tuning for several DPI application scenarios;
>> in these scenarios, the DPDK ethdev, ring, mbuf and hash library APIs are used.
>>
>> One of the scenarios is:
>>
>>     ------------------------                           -------------------------
>>     |                      |       rte_ring-0          |                       |
>>     |  packet-recv-process | ===>  rte_ring-1  ===>    | packet-detect-process |
>>     |                      |         ...               |                       |
>>     |                      |       rte_ring-n          |                       |
>>     ------------------------                           -------------------------
>>
>>     packet-recv-process dispatches flows to different rings using a function such as 'rte_hash_crc'.
>>     packet-detect-process builds per-flow context based on the rte_hash library.
>>
>> I think it is necessary to add a DPI example to show that DPDK has the basic
>> capability for building DPI applications and to provide best-practice performance guidance.
>>
>> I'd like to hear the community's opinions.
>>
>> Thanks
>>
> 
> Did you consider the impact of the CPU cache in this scenario?
> When you process the packet in two different threads, it ends up adding
> an additional data cache miss, which can cut performance in half.

Yes, this model has the cache problem (it does cause a performance loss, but not that much). Let's call this model A.

Another scenario (this model is cache friendly) is:
     --------------------------
     |                        |
     |  packet-detect-process |
     |                        |
     |  packet-recv-process   |
     |                        |
     --------------------------
         |  multi-queues  |
     --------------------------
     |   hardware-RSS         |
     --------------------------
   The NIC hardware dispatches flows to different hardware queues via RSS.
   Each thread in the process then receives and detects.
   Let's call this model B.
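
A minimal sketch of a model B worker could look like the following (it assumes the port is
already configured for RSS with one RX queue per worker lcore; detect_packet() is just a
placeholder for the DPI logic):

/* Model B worker sketch: one RSS hardware queue per worker lcore,
 * receive and detect on the same core. */
#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

struct worker_conf {
    uint16_t port_id;
    uint16_t queue_id;   /* the worker's own RSS RX queue */
};

/* Placeholder for the DPI detection step; a real application would
 * parse the packet and update per-flow state here. */
static void
detect_packet(struct rte_mbuf *m)
{
    RTE_SET_USED(m);
}

/* Launched once per worker lcore, e.g. via rte_eal_remote_launch(). */
static int
model_b_worker(void *arg)
{
    const struct worker_conf *conf = arg;
    struct rte_mbuf *pkts[BURST_SIZE];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(conf->port_id, conf->queue_id,
                                       pkts, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++) {
            /* mbuf data is still warm in this core's cache */
            detect_packet(pkts[i]);
            rte_pktmbuf_free(pkts[i]);
        }
    }
    return 0;
}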

Currently there are many traffic encapsulation formats, but NIC RSS hashing is usually very simple (more complex
dispatching can be implemented through flow matching, but many NICs do not support this feature). In that case
model A could be used: the software dispatches flows with a user-defined hash, as in the sketch below.
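
As a rough illustration of that software dispatch, the sketch below hashes part of the IPv4
tuple with rte_hash_crc() and enqueues the mbuf to one of the detect rings (the key layout,
ring count and drop policy are only assumptions for the example; real code would also handle
VLAN/tunnel headers):

/* Model A dispatch sketch: user-defined flow hash in software,
 * then enqueue to one of the rings feeding packet-detect-process. */
#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_hash_crc.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define NB_DETECT_RINGS 4

/* Created elsewhere with rte_ring_create(), one ring per detect thread. */
static struct rte_ring *detect_rings[NB_DETECT_RINGS];

static void
dispatch_pkt(struct rte_mbuf *m)
{
    const struct rte_ether_hdr *eth =
        rte_pktmbuf_mtod(m, const struct rte_ether_hdr *);

    if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
        rte_pktmbuf_free(m);   /* non-IPv4 handling omitted in this sketch */
        return;
    }

    const struct rte_ipv4_hdr *ip = (const struct rte_ipv4_hdr *)(eth + 1);
    uint32_t key[3] = { ip->src_addr, ip->dst_addr, ip->next_proto_id };

    /* User-defined flow hash: here simply a CRC over addresses + protocol. */
    uint32_t hash = rte_hash_crc(key, sizeof(key), 0);

    if (rte_ring_enqueue(detect_rings[hash % NB_DETECT_RINGS], m) != 0)
        rte_pktmbuf_free(m);   /* ring full: drop, the policy is up to the app */
}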

I think this example could support both of the above models.
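
And for the flow-context part of packet-detect-process, a possible sketch based on the
rte_hash library (struct flow_key, struct flow_ctx and the table size are illustrative only;
multi-thread/concurrency flags are omitted here):

/* Flow-table sketch for packet-detect-process based on rte_hash. */
#include <stdint.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
#include <rte_malloc.h>

struct flow_key {
    uint32_t src_addr;
    uint32_t dst_addr;
    uint32_t proto;
};

struct flow_ctx {
    uint64_t pkts;   /* per-flow DPI state would live here */
};

static struct rte_hash *
flow_table_create(int socket_id)
{
    struct rte_hash_parameters params = {
        .name = "dpi_flow_table",
        .entries = 1 << 20,              /* illustrative table size */
        .key_len = sizeof(struct flow_key),
        .hash_func = rte_hash_crc,
        .socket_id = socket_id,
    };

    return rte_hash_create(&params);
}

/* Return the existing context of a flow, or create a new one. */
static struct flow_ctx *
flow_ctx_get(struct rte_hash *h, const struct flow_key *key)
{
    void *data;

    if (rte_hash_lookup_data(h, key, &data) >= 0)
        return data;                     /* known flow */

    struct flow_ctx *ctx = rte_zmalloc(NULL, sizeof(*ctx), 0);
    if (ctx == NULL || rte_hash_add_key_data(h, key, ctx) < 0) {
        rte_free(ctx);
        return NULL;
    }
    return ctx;                          /* new flow */
}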

