From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Du, Fan" <fan.du@intel.com>
To: "Loftus, Ciara"
Cc: "dev@dpdk.org", "dev@openvswitch.org"
Subject: [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host
Date: Wed, 17 Jun 2015 09:49:18 +0000
Message-ID: <5A90DA2E42F8AE43BC4A093BF0678848E6C4D8@SHSMSX104.ccr.corp.intel.com>
List-Id: patches and discussions about DPDK
Hi,

I'm playing with dpdkvhostuser ports, using the latest DPDK and OVS master trees, benchmarking with iperf.

When KVM guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives packets from either a physical HOST2, or a similar KVM guest2 with a dpdkvhostuser port sitting on HOST2, the connectivity breaks: iperf shows no bandwidth and finally stalls.

Other test scenarios, such as two KVM guests sitting on one host, or a single KVM guest sending packets to a physical host, work like a charm.

With the debug option switched on, the DPDK lib spits out the following:

VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:62
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:58
VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
(the last message repeats continuously)

After some tweaks to the logging code, it looks like the bad things happen within the code snippet below, in lib/librte_vhost/vhost_rxtx.c, function rte_vhost_dequeue_burst:

612                 vb_offset = 0;
613                 vb_avail = desc->len;
614                 /* Allocate an mbuf and populate the structure. */
615                 m = rte_pktmbuf_alloc(mbuf_pool);
616                 if (unlikely(m == NULL)) {
617                         RTE_LOG(ERR, VHOST_DATA,
618                                 "F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
619                         break;
620                 }
621                 seg_offset = 0;
622                 seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
623                 cpy_len = RTE_MIN(vb_avail, seg_offset ? seg_offset : 0), cpy_len = RTE_MIN(vb_avail, seg_avail);