From: Zoltan Kiss <zoltan.kiss@linaro.org>
To: "Richardson, Bruce" <bruce.richardson@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
Date: Mon, 7 Sep 2015 15:15:25 +0100	[thread overview]
Message-ID: <55ED9BFD.7040009@linaro.org> (raw)
In-Reply-To: <59AF69C657FD0841A61C55336867B5B0359227DF@IRSMSX103.ger.corp.intel.com>



On 07/09/15 13:57, Richardson, Bruce wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>> Sent: Monday, September 7, 2015 1:26 PM
>> To: dev@dpdk.org
>> Cc: Ananyev, Konstantin; Richardson, Bruce
>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
>> function
>>
>> Hi,
>>
>> I just realized I missed the "[PATCH]" tag in the subject. Did anyone
>> have time to review this?
>>
>
> Hi Zoltan,
>
> the big thing that concerns me with this is the addition of new instructions for
> each packet in the fast path. Ideally, this prefetching would be better handled
> in the application itself, as for some apps, e.g. those using pipelining, the
> core doing the RX from the NIC may not touch the packet data at all, and the
> prefetches will instead cause a performance slowdown.
>
> Is it possible to get the same performance increase - or something close to it -
> by making changes in OVS?

OVS already prefetches the next packet's header while it's processing the
previous one, but apparently that isn't early enough, at least in my test
scenario, where I'm forwarding UDP packets with the least possible overhead.
I guess in tests where OVS does more complex processing it would be fine.
I'll try to move the prefetch earlier in the OVS codebase, but I'm not sure
it will help.
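
To make the timing issue concrete, the pattern in question is roughly the
following prefetch-ahead loop (illustrative only; the names and the
per-packet work are made up, not the actual OVS code):

#include <rte_mbuf.h>
#include <rte_prefetch.h>

/* Hypothetical per-packet work, standing in for what OVS really does. */
static void handle_packet(struct rte_mbuf *pkt);

static void
process_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		/* Prefetch the next packet's header while still working on
		 * the current one. With very light per-packet work (plain
		 * UDP forwarding) the data is not in cache yet by the time
		 * we get there, which is the problem described above. */
		if (i + 1 < nb_pkts)
			rte_prefetch0(rte_pktmbuf_mtod(pkts[i + 1], void *));
		handle_packet(pkts[i]);
	}
}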
Also, I've checked the PMD receive functions, and it's quite mixed whether
they prefetch the header or not. The other three ixgbe receive functions
all do, for example, as do the following drivers:

bnx2x
e1000
fm10k (scattered)
i40e
igb
virtio

While these drivers don't do that:

cxgbe
enic
fm10k (non-scattered)
mlx4

I think it would be better to add rte_packet_prefetch() everywhere, because 
applications can then turn it off with CONFIG_RTE_PMD_PACKET_PREFETCH.
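
For reference, that macro is already a per-driver wrapper which compiles
away when the option is off; roughly (paraphrasing what ixgbe_rxtx.h and
several other drivers do, the exact prefetch level may vary per driver):

#ifdef RTE_PMD_PACKET_PREFETCH
#define rte_packet_prefetch(p)  rte_prefetch1(p)
#else
#define rte_packet_prefetch(p)  do {} while (0)
#endif

So a driver calling it unconditionally costs nothing for applications
built with CONFIG_RTE_PMD_PACKET_PREFETCH=n.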

>
> Regards,
> /Bruce
>
>> Regards,
>>
>> Zoltan
>>
>> On 01/09/15 20:17, Zoltan Kiss wrote:
>>> The lack of this prefetch causes a significant performance drop in
>>> OVS-DPDK: 13.3 Mpps instead of 14 when forwarding 64 byte packets.
>>> Even though OVS prefetches the next packet's header before it starts
>>> processing the current one, it doesn't get there fast enough. This
>>> aligns with the behaviour of other receive functions.
>>>
>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
>>> ---
>>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>> index cf25a53..51299fa 100644
>>> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>> @@ -502,6 +502,15 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>>>                   _mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
>>>                                   pkt_mb1);
>>>
>>> +               rte_packet_prefetch((char*)(rx_pkts[pos]->buf_addr) +
>>> +                                   RTE_PKTMBUF_HEADROOM);
>>> +               rte_packet_prefetch((char*)(rx_pkts[pos + 1]->buf_addr) +
>>> +                                   RTE_PKTMBUF_HEADROOM);
>>> +               rte_packet_prefetch((char*)(rx_pkts[pos + 2]->buf_addr) +
>>> +                                   RTE_PKTMBUF_HEADROOM);
>>> +               rte_packet_prefetch((char*)(rx_pkts[pos + 3]->buf_addr) +
>>> +                                   RTE_PKTMBUF_HEADROOM);
>>> +
>>>                   /* C.4 calc avaialbe number of desc */
>>>                   var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
>>>                   nb_pkts_recd += var;
>>>
