From: "Wiles, Keith" <keith.wiles@intel.com>
To: "Du, Fan" <fan.du@intel.com>, "Loftus, Ciara" <ciara.loftus@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"dev@openvswitch.org" <dev@openvswitch.org>
Subject: Re: [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
Date: Wed, 17 Jun 2015 14:58:07 +0000
Message-ID: <D1A6F42B.22BBE%keith.wiles@intel.com>
On 6/17/15, 4:49 AM, "Du, Fan" <fan.du@intel.com> wrote:
>Hi,
>
>I'm testing dpdkvhostuser ports with the latest DPDK and ovs master tree,
>benchmarking with iperf.
>When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives
>packets either from another physical HOST2, or from a similar kvm guest2
>with a dpdkvhostuser port sitting on HOST2, the connectivity breaks:
>iperf shows no bandwidth and finally stalls.
>
>Other test scenarios, such as two kvm guests sitting on one host, or a
>single kvm guest sending packets to a physical host, work like a charm.
>
>With the debug option switched on, the dpdk lib prints the following:
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:62
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:58
>
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>(the same message repeated eight times)
>
>After some tweaks to the logging code, it looks like the failure happens
>within the code snippet below, from lib/librte_vhost/vhost_rxtx.c,
>function rte_vhost_dequeue_burst():
>
>        vb_offset = 0;
>        vb_avail = desc->len;
>        /* Allocate an mbuf and populate the structure. */
>        m = rte_pktmbuf_alloc(mbuf_pool);
>        if (unlikely(m == NULL)) {
>                RTE_LOG(ERR, VHOST_DATA,
>                        "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n",
>                        mbuf_pool);
>                break;
>        }
>        seg_offset = 0;
>        seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
>        cpy_len = RTE_MIN(vb_avail, seg_avail);
To me this code is only reporting that the mbuf_pool has no more mbufs to
hand out, not that the code itself has some type of error. Either the
number of mbufs allocated to the mbuf_pool is not enough, or someplace in
the code is not freeing the mbufs after they are consumed. Watching the
pool counters while the test runs will tell you which case you are in; a
rough sketch follows.
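For what it's worth, here is a minimal sketch of such a check (the helper
name is mine; rte_mempool_avail_count()/rte_mempool_in_use_count() are the
names in newer DPDK releases, older releases call them
rte_mempool_count()/rte_mempool_free_count()):

    #include <rte_log.h>
    #include <rte_mempool.h>

    /* Log how full the pool is; call this periodically from the thread
     * that owns the datapath while the iperf test is running. */
    static void
    log_mbuf_pool_usage(struct rte_mempool *mp)
    {
            unsigned int avail  = rte_mempool_avail_count(mp);
            unsigned int in_use = rte_mempool_in_use_count(mp);

            /* If in_use keeps climbing and never comes back down after
             * traffic stops, something is leaking mbufs; if avail only
             * hits zero at peak load, the pool is simply sized too
             * small for the traffic. */
            RTE_LOG(INFO, USER1, "mbuf_pool %s: avail=%u in_use=%u\n",
                    mp->name, avail, in_use);
    }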
You need to find out the reason why you have run out of mbufs. It is also
possible the message should not have been an error, but
informational/warning instead, since under some high-volume loads this
condition may occur and no amount of mbufs will resolve it. If the pool
simply turns out to be undersized, enlarging it at creation time is the
usual fix; see the second sketch below.
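In case it is just pool sizing, a minimal sketch of creating a larger
pool with rte_pktmbuf_pool_create() follows; the pool name and the counts
are illustrative, not what OVS actually uses:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Illustrative sizing: NUM_MBUFS must cover every mbuf that can be
     * in flight at once (vring entries, NIC rings, per-lcore caches). */
    #define NUM_MBUFS     (64 * 1024)
    #define MBUF_CACHE_SZ 256

    static struct rte_mempool *
    create_mbuf_pool(void)
    {
            return rte_pktmbuf_pool_create("vhost_mbuf_pool", /* hypothetical name */
                                           NUM_MBUFS, MBUF_CACHE_SZ,
                                           0, /* no app-private area */
                                           RTE_MBUF_DEFAULT_BUF_SIZE,
                                           rte_socket_id());
    }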
Regards,
++Keith