From: gowrishankar <gowrishankar.m@linux.vnet.ibm.com>
Date: Wed, 17 Jun 2015 17:24:10 +0530
To: "Du, Fan"
Cc: "dev@dpdk.org", "dev@openvswitch.org"
Message-ID: <55815FE2.9060301@linux.vnet.ibm.com>
In-Reply-To: <5A90DA2E42F8AE43BC4A093BF0678848E6C4D8@SHSMSX104.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [ovs-dev] dpdkvhostuser fail to alloc memory when receive packet from other host

On Wednesday 17 June 2015 03:19 PM, Du, Fan wrote:
> Hi,
>
> I'm playing with dpdkvhostuser ports on the latest DPDK and ovs master trees, benchmarking with iperf.
> When kvm guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives packets either from another physical host, HOST2,
> or from a similar kvm guest2 with a dpdkvhostuser port sitting on HOST2, connectivity breaks: iperf shows no bandwidth and finally stalls.

In my setup, where kvm guest1 receives packets from a physical host through the ovs switch (vhost-user), I do not see this problem. FYI, I am on top of the commit below:

commit 7d1ced01772de541d6692c7d5604210e274bcd37 (ovs)

Btw, I checked the tx case for the guest as well.

The qemu I am using is version 2.3.0. Is your qemu above version 2.2, if you are allotting more than 1GB of guest memory? Could you also share the hugepage parameters passed to the kernel? For reference, I have sketched my own setup below the quoted text.

Regards,
Gowri Shankar

>
> Other test scenarios, like two kvm guests sitting on one host, or a single kvm guest sending packets to a physical host, work like a charm.
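For reference, this is roughly how hugepages and qemu are set up on my host. The page count, sizes, hugepage mount point and vhost-user socket path below are from my test box, so treat them as assumptions to adapt rather than required values.

Kernel command line:

    default_hugepagesz=1G hugepagesz=1G hugepages=8

qemu invocation (for vhost-user the guest RAM must be a hugepage-backed memory object with share=on, so the switch can map it):

    qemu-system-x86_64 ... -m 2048 \
        -object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user0 \
        -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
        -device virtio-net-pci,netdev=net0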
>
> With the switch debug option on, the dpdk lib spits out the following:
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> VHOST_CONFIG: vring call idx:0 file:62
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> VHOST_CONFIG: vring call idx:0 file:58
>
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
> [the same message repeats continuously]
>
> After some tweaks to the logging code, it looks like the failure happens in the code snippet below,
> in lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst():
>
> 612                 vb_offset = 0;
> 613                 vb_avail = desc->len;
> 614                 /* Allocate an mbuf and populate the structure. */
> 615                 m = rte_pktmbuf_alloc(mbuf_pool);
> 616                 if (unlikely(m == NULL)) {
> 617                         RTE_LOG(ERR, VHOST_DATA,
> 618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
> 619                         break;
> 620                 }
> 621                 seg_offset = 0;
> 622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
> 623                 cpy_len = RTE_MIN(vb_avail, seg_avail);
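One more thought: the message above only tells us that rte_pktmbuf_alloc() found the mempool empty. A small diagnostic worth trying around that allocation is sketched below. The helper name is mine, and rte_mempool_count()/rte_mempool_free_count() are the pool accessors in the DPDK version I am on; inside vhost_rxtx.c you would log with the VHOST_DATA logtype as the existing message does (USER1 is used here only to keep the sketch self-contained):

    #include <rte_branch_prediction.h>
    #include <rte_log.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: wrap the failing allocation and dump pool
     * occupancy, to tell a fully drained pool (all mbufs in flight,
     * i.e. not freed back fast enough, or leaked) from a transient
     * shortage. */
    static struct rte_mbuf *
    alloc_mbuf_verbose(struct rte_mempool *mbuf_pool)
    {
            struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);

            if (unlikely(m == NULL)) {
                    /* rte_mempool_count(): objects still available;
                     * rte_mempool_free_count(): objects in use. */
                    RTE_LOG(ERR, USER1,
                            "mbuf alloc failed: pool %s, avail %u, in use %u\n",
                            mbuf_pool->name,
                            rte_mempool_count(mbuf_pool),
                            rte_mempool_free_count(mbuf_pool));
            }
            return m;
    }

If "in use" stays pegged at the full pool size while iperf is stalled, the mbufs are not being returned to the pool, which would point at the consumer side rather than at pool sizing.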