DPDK patches and discussions
* [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
@ 2015-06-17  9:49 Du, Fan
  2015-06-17 11:54 ` [dpdk-dev] [ovs-dev] " gowrishankar
  0 siblings, 1 reply; 4+ messages in thread
From: Du, Fan @ 2015-06-17  9:49 UTC (permalink / raw)
  To: Loftus, Ciara; +Cc: dev, dev

Hi,

I'm playing with dpdkvhostuser ports on the latest DPDK and ovs master tree, benchmarking with iperf.
When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives packets from either another physical HOST2,
or a similar kvm guest2 with a dpdkvhostuser port sitting on HOST2, the connectivity breaks: iperf shows no bandwidth and finally stalls.

Other test scenarios, such as two kvm guests sitting on one host, or a single kvm guest sending packets to a physical host, work like a charm.

With the debug option switched on, the dpdk lib spits out the following:
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:62
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:58

VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0

After some tweaks to the logging code, it looks like the bad things happen within the code snippet below:
In lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst:

612                 vb_offset = 0;
613                 vb_avail = desc->len;
614                 /* Allocate an mbuf and populate the structure. */
615                 m = rte_pktmbuf_alloc(mbuf_pool);
616                 if (unlikely(m == NULL)) {
617                         RTE_LOG(ERR, VHOST_DATA,
618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
619                         break;
620                 }
621                 seg_offset = 0;
622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
623                 cpy_len = RTE_MIN(vb_avail, seg_avail);
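
For context, the mbuf_pool that rte_pktmbuf_alloc() draws from here is an ordinary pktmbuf mempool created by the application. Below is a minimal sketch of creating such a pool; the name, mbuf count and cache size are purely illustrative assumptions, not the values OVS actually configures:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical sizing, for illustration only. */
#define DEMO_NB_MBUF   16384
#define DEMO_CACHE_SZ  256

static struct rte_mempool *
demo_create_mbuf_pool(void)
{
        /* Pool of the kind rte_vhost_dequeue_burst() allocates from; if it
         * is too small for the packets in flight, rte_pktmbuf_alloc() in the
         * snippet above returns NULL and the dequeue loop breaks out. */
        return rte_pktmbuf_pool_create("demo_vhost_pool", DEMO_NB_MBUF,
                        DEMO_CACHE_SZ, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                        rte_socket_id());
}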


* Re: [dpdk-dev] [ovs-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
  2015-06-17  9:49 [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host Du, Fan
@ 2015-06-17 11:54 ` gowrishankar
  2015-06-18  8:47   ` Du, Fan
  0 siblings, 1 reply; 4+ messages in thread
From: gowrishankar @ 2015-06-17 11:54 UTC (permalink / raw)
  To: Du, Fan; +Cc: dev, dev

On Wednesday 17 June 2015 03:19 PM, Du, Fan wrote:
> Hi,
>
> I'm playing with dpdkvhostuser ports on the latest DPDK and ovs master tree, benchmarking with iperf.
> When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives packets from either another physical HOST2,
> or a similar kvm guest2 with a dpdkvhostuser port sitting on HOST2, the connectivity breaks: iperf shows no bandwidth and finally stalls.

In my setup, where kvm guest1 receives packets from a phy host through the
ovs switch (vhost-user), I do not see this problem. FYI, I am on top of the
commit below.

commit 7d1ced01772de541d6692c7d5604210e274bcd37 (ovs)

Btw, I checked the tx case for the guest as well. The qemu I am using is
version 2.3.0. Is your qemu above version 2.2, given that you are allotting
more than 1 GB of guest memory?

Could you also share the hugepage params passed to the kernel?

Regards,
Gowri Shankar

>
> Other test scenarios, such as two kvm guests sitting on one host, or a single kvm guest sending packets to a physical host, work like a charm.
>
> With the debug option switched on, the dpdk lib spits out the following:
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> VHOST_CONFIG: vring call idx:0 file:62
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> VHOST_CONFIG: vring call idx:0 file:58
>
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>
> After some tweaks to the logging code, it looks like the bad things happen within the code snippet below:
> In lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst:
>
> 612                 vb_offset = 0;
> 613                 vb_avail = desc->len;
> 614                 /* Allocate an mbuf and populate the structure. */
> 615                 m = rte_pktmbuf_alloc(mbuf_pool);
> 616                 if (unlikely(m == NULL)) {
> 617                         RTE_LOG(ERR, VHOST_DATA,
> 618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
> 619                         break;
> 620                 }
> 621                 seg_offset = 0;
> 622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
> 623                 cpy_len = RTE_MIN(vb_avail, seg_avail);
>
>
>


* Re: [dpdk-dev] [ovs-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
  2015-06-17 11:54 ` [dpdk-dev] [ovs-dev] " gowrishankar
@ 2015-06-18  8:47   ` Du, Fan
  0 siblings, 0 replies; 4+ messages in thread
From: Du, Fan @ 2015-06-18  8:47 UTC (permalink / raw)
  To: gowrishankar; +Cc: dev, dev



>-----Original Message-----
>From: gowrishankar [mailto:gowrishankar.m@linux.vnet.ibm.com]
>Sent: Wednesday, June 17, 2015 7:54 PM
>To: Du, Fan
>Cc: Loftus, Ciara; dev@dpdk.org; dev@openvswitch.org
>Subject: Re: [ovs-dev] dpdkvhostuser fail to alloc memory when receive packet
>from other host
>
>On Wednesday 17 June 2015 03:19 PM, Du, Fan wrote:
>> Hi,
>>
>> I'm playing with dpdkvhostuser ports on the latest DPDK and ovs master
>> tree, benchmarking with iperf.
>> When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives
>> packets from either another physical HOST2,
>> or a similar kvm guest2 with a dpdkvhostuser port sitting on HOST2, the
>> connectivity breaks: iperf shows no bandwidth and finally stalls.
>
>In my setup, where kvm guest1 receives packets from a phy host through the
>ovs switch (vhost-user), I do not see this problem. FYI, I am on top of the
>commit below.
>
>commit 7d1ced01772de541d6692c7d5604210e274bcd37 (ovs)
>
>Btw, I checked the tx case for the guest as well. The qemu I am using is
>version 2.3.0. Is your qemu above version 2.2, given that you are allotting
>more than 1 GB of guest memory?
>
>Could you also share the hugepage params passed to the kernel?

Thanks for the heads up.
My env:
dpdk-2.0.0
ovs master
qemu-2.3.0

My setup:
Host kernel hugepage config: 
default_hugepagesz=1GB hugepagesz=1G hugepages=8
ovs-vsctl add-br ovs-usw0 -- set bridge ovs-usw0 datapath_type=netdev
ovs-vsctl add-port ovs-usw0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port ovs-usw0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl add-port ovs-usw0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser

qemu-system-x86_64 -smp 4 -m 2048 -hda centos7-2.img -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-2 -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:16:3d:22:33:56,netdev=mynet1 -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc

I switched from the dpdk master tree to the dpdk-2.0.0 official release, and the memory allocation failure seems to have disappeared;
the kvm guest on host1 can receive packets from the other phy host2 as expected.
I'm doing more testing on it, along with other test scenarios such as kvm guest on host1 <-> kvm guest on host2.



>Regards,
>Gowri Shankar
>
>>
>> Other test scenarios, such as two kvm guests sitting on one host, or a
>> single kvm guest sending packets to a physical host, work like a charm.
>>
>> With the debug option switched on, the dpdk lib spits out the following:
>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>> VHOST_CONFIG: vring call idx:0 file:62
>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>> VHOST_CONFIG: vring call idx:0 file:58
>>
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>>
>> After some tweaks to the logging code, it looks like the bad things happen
>> within the code snippet below:
>> In lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst:
>>
>> 612                 vb_offset = 0;
>> 613                 vb_avail = desc->len;
>> 614                 /* Allocate an mbuf and populate the structure. */
>> 615                 m = rte_pktmbuf_alloc(mbuf_pool);
>> 616                 if (unlikely(m == NULL)) {
>> 617                         RTE_LOG(ERR, VHOST_DATA,
>> 618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
>> 619                         break;
>> 620                 }
>> 621                 seg_offset = 0;
>> 622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
>> 623                 cpy_len = RTE_MIN(vb_avail, seg_avail);
>>
>>
>>
>



* Re: [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
@ 2015-06-17 14:58 Wiles, Keith
  0 siblings, 0 replies; 4+ messages in thread
From: Wiles, Keith @ 2015-06-17 14:58 UTC (permalink / raw)
  To: Du, Fan, Loftus, Ciara; +Cc: dev, dev



On 6/17/15, 4:49 AM, "Du, Fan" <fan.du@intel.com> wrote:

>Hi,
>
>I'm playing with dpdkvhostuser ports on the latest DPDK and ovs master
>tree, benchmarking with iperf.
>When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives
>packets from either another physical HOST2,
>or a similar kvm guest2 with a dpdkvhostuser port sitting on HOST2, the
>connectivity breaks: iperf shows no bandwidth and finally stalls.
>
>Other test scenarios, such as two kvm guests sitting on one host, or a
>single kvm guest sending packets to a physical host, work like a charm.
>
>With the debug option switched on, the dpdk lib spits out the following:
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:62
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:58
>
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>
>After some tweaks to the logging code, it looks like the bad things happen
>within the code snippet below:
>In lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst:
>
>612                 vb_offset = 0;
>613                 vb_avail = desc->len;
>614                 /* Allocate an mbuf and populate the structure. */
>615                 m = rte_pktmbuf_alloc(mbuf_pool);
>616                 if (unlikely(m == NULL)) {
>617                         RTE_LOG(ERR, VHOST_DATA,
>618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
>619                         break;
>620                 }
>621                 seg_offset = 0;
>622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
>623                 cpy_len = RTE_MIN(vb_avail, seg_avail);

To me this code is only reporting that the mbuf_pool does not have any more
mbufs, not that the code itself has some kind of error. It seems the number
of mbufs allocated to the mbuf_pool is not enough, or some place in the code
is not freeing the mbufs after they have been consumed.

You need to find out why you have run out of mbufs. It is also possible the
message should not be an error but informational/warning instead, as under
some high-volume loads this may occur and no number of mbufs may resolve the
condition.
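
One quick way to tell the two cases apart is to watch the pool counters while
traffic runs. Below is a minimal sketch, assuming a DPDK release that provides
rte_mempool_avail_count()/rte_mempool_in_use_count(); older releases spell
these rte_mempool_count()/rte_mempool_free_count():

#include <stdio.h>
#include <rte_mempool.h>

/* If the free count sits at 0 while the in-use count equals the pool size,
 * the pool is either undersized or mbufs are never being freed; a count
 * that recovers between bursts points at a transient peak instead. */
static void
dump_mbuf_pool_usage(const struct rte_mempool *mp)
{
        printf("pool %s: %u mbufs free, %u in use, size %u\n",
               mp->name, rte_mempool_avail_count(mp),
               rte_mempool_in_use_count(mp), (unsigned int)mp->size);
}

Calling something like this from the dequeue path (or a stats thread) whenever
the allocation failure fires would show whether freeing ever replenishes the pool.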

Regards,
++Keith
>
>
>

^ permalink raw reply	[flat|nested] 4+ messages in thread
