From: "Xie, Huawei" <huawei.xie@intel.com>
To: Kyle Larose <eomereadig@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] vhost-net stops sending to virito pmd -- already fixed?
Date: Fri, 18 Sep 2015 05:58:30 +0000
Message-ID: <C37D651A908B024F974696C65296B57B40F1F758@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <CAMFWN9kSVxTqVw-hW0Q1bn-D7bsjpJRk-xNi5M3NE8yk7M-kwA@mail.gmail.com>
On 9/17/2015 1:25 AM, Kyle Larose wrote:
> Hi Huawei,
>
>> Kyle:
>> Could you tell us how you produced this issue: a very small pool size,
>> or are you using the pipeline model?
> If I understand correctly, by pipeline model you mean a model whereby
> multiple threads handle a given packet, with some sort of IPC (e.g. DPDK
> rings) between them? If so, yes: we are using such a model. And I
> suspect that this model is where we run into issues: the length of the
> pipeline, combined with the queuing between stages, can lead to us
> exhausting the mbufs, particularly when a stage's load causes queuing.
Yes, exactly.
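For concreteness, here is a minimal sketch of that kind of stage handoff
over a DPDK ring. The names, the burst size and the single rx->worker ring
are illustrative only (not Kyle's application), and it uses the DPDK
1.8-era ring API without the later free_space/available arguments:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

/* created elsewhere, e.g. rte_ring_create("rx_to_worker", 1024, socket, 0) */
static struct rte_ring *rx_to_worker;

static void
rx_stage(uint8_t port)
{
        struct rte_mbuf *pkts[BURST];
        uint16_t nb = rte_eth_rx_burst(port, 0, pkts, BURST);
        unsigned sent = rte_ring_sp_enqueue_burst(rx_to_worker,
                                                  (void **)pkts, nb);
        /*
         * If the worker stage falls behind, the ring fills up: whatever
         * could not be enqueued must be freed here, and everything still
         * sitting in the ring keeps the mempool drained. That is how the
         * pipeline can leave the virtio RX queue with nothing to refill
         * from.
         */
        while (sent < nb)
                rte_pktmbuf_free(pkts[sent++]);
}

static void
worker_stage(void)
{
        struct rte_mbuf *pkts[BURST];
        unsigned i, nb = rte_ring_sc_dequeue_burst(rx_to_worker,
                                                   (void **)pkts, BURST);
        for (i = 0; i < nb; i++) {
                /* ... do the work, then hand off to the tx stage ... */
                rte_pktmbuf_free(pkts[i]);   /* placeholder for the handoff */
        }
}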
>
> When I initially ran into this issue, I had a fairly large mbuf pool
> (32K entries), with 3 stages in the pipeline: rx, worker, tx. There
> were two worker threads, with a total of 6 rings. I was sending some
> fairly bursty traffic, at a high packet rate (it was bursting up to
> around 1Mpkt/s). With that configuration there was only a low chance of
> the problem actually occurring. However, when I decreased the mbuf pool
> to 1000 entries, it *always* happened.
>
> In summary: the pipeline model is important here, and a small pool
> size definitely exacerbates the problem.
>
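As a rough back-of-the-envelope (every number below is illustrative;
Kyle's actual ring depths and descriptor counts are not given above), the
pool has to cover everything that can be in flight at once, otherwise RX
refill starves exactly as described:

/* Illustrative sizing only -- not Kyle's configuration. */
#define NB_RX_DESC   256    /* descriptors the virtio PMD keeps filled */
#define NB_RINGS     6      /* inter-stage rings in the pipeline       */
#define RING_SIZE    1024   /* worst case: every ring completely full  */
#define NB_LCORES    4
#define BURST        32     /* per-lcore staging buffers               */
#define CACHE_SIZE   250    /* per-lcore mempool cache                 */

#define MIN_POOL_SIZE (NB_RX_DESC + NB_RINGS * RING_SIZE + \
                       NB_LCORES * (BURST + CACHE_SIZE))
/*
 * Roughly 7.5K mbufs with these numbers: a 1000-entry pool can be drained
 * by queuing alone, while a 32K pool only runs dry under sustained bursts,
 * which matches the "rarely with 32K, always with 1K" behaviour above.
 */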
> I was able to reproduce the problem using the load_balancer sample
> application, though it required some modification to get it to run
> with virtio. I'm not sure if this is because I'm using DPDK 1.8, or
> something else. Either way, I made the number of mbufs configurable
> via an environment variable, and was able to show that decreasing it
> from the default of 32K to 1K would cause the problem to always happen
> when using the same traffic as with my application. Applying the below
> patch fixed the problem.
>
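For reference, the environment-variable knob Kyle mentions can be as small
as the following sketch. The variable name is made up, and the sample
normally takes its pool size from a compile-time default:

#include <stdlib.h>

/* Sketch only; "LB_MBUF_POOL_SIZE" is a hypothetical name. */
static unsigned
mbuf_pool_size_from_env(unsigned def)
{
        const char *s = getenv("LB_MBUF_POOL_SIZE");
        if (s != NULL) {
                unsigned long v = strtoul(s, NULL, 0);
                if (v > 0)
                        return (unsigned)v;
        }
        return def;   /* e.g. the sample's 32K default */
}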
> The following patch seems to fix the problem for me, though I'm not
> sure it's the optimal solution. It works by removing the early exit
> that prevented us from ever reaching the mbuf allocation code: with the
> early return gone, when there are no received packets we simply fall
> through the packet-processing loop and the mbuf allocation loop still
> runs. Note that the patch is on DPDK 1.8.
Yes, that will fix your problem. We could try to do the refill each time
we enter the receive function, whether or not there are packets available,
along the lines of the sketch below.
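(Sketch of the control flow only; the helper names and the rxq struct are
placeholders, not the real virtio_rxtx.c internals.)

static uint16_t
recv_pkts(struct rxq *rxvq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
        uint16_t nb_used = used_desc_count(rxvq);     /* may be 0 */
        uint16_t nb_rx = dequeue_used(rxvq, rx_pkts,
                                      RTE_MIN(nb_used, nb_pkts));

        /*
         * The point of the change: the refill runs on every call, so a
         * queue that emptied itself while the mempool was exhausted
         * starts handing descriptors back to the host as soon as mbufs
         * become available again.
         */
        while (!avail_ring_full(rxvq)) {
                struct rte_mbuf *m = rte_pktmbuf_alloc(rxvq->mpool);
                if (m == NULL)
                        break;        /* pool still empty; retry next poll */
                give_desc_to_host(rxvq, m);
        }
        notify_host_if_needed(rxvq);

        return nb_rx;
}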
> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
> index c013f97..7cadf52 100644
> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
> @@ -463,9 +463,6 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>          if (likely(num > DESC_PER_CACHELINE))
>                  num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>
> -        if (num == 0)
> -                return 0;
> -
>          num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>          PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>          for (i = 0; i < num ; i++) {
> @@ -549,9 +546,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>
>          rmb();
>
> -        if (nb_used == 0)
> -                return 0;
> -
>          PMD_RX_LOG(DEBUG, "used:%d\n", nb_used);
>
>          while (i < nb_used) {
>
> Thanks,
>
> Kyle
>
Thread overview (8+ messages):
2015-09-11 16:32 Kyle Larose
2015-09-13 21:43 ` Thomas Monjalon
2015-09-15 21:04 ` Kyle Larose
2015-09-16 1:37 ` Xie, Huawei
2015-09-16 17:24 ` Kyle Larose
2015-09-18 5:58 ` Xie, Huawei [this message]
2015-09-17 0:45 ` Ouyang, Changchun
2015-09-18 6:28 ` Xie, Huawei