From: "Wiles, Keith"
To: "Du, Fan", "Loftus, Ciara"
Cc: "dev@dpdk.org", "dev@openvswitch.org"
Date: Wed, 17 Jun 2015 14:58:07 +0000
Subject: Re: [dpdk-dev] dpdkvhostuser fail to alloc memory when receive packet from other host

On 6/17/15, 4:49 AM, "Du, Fan" wrote:

>Hi,
>
>I'm playing with dpdkvhostuser ports on the latest DPDK and OVS master
>trees, benchmarking with iperf.
>When KVM guest1 (backed by a dpdkvhostuser port) sitting on HOST1 receives
>packets either from another physical HOST2, or from a similar KVM guest2
>with a dpdkvhostuser port sitting on HOST2, the connectivity breaks: iperf
>shows no bandwidth and finally stalls.
>
>Other test scenarios, such as two KVM guests sitting on one host, or a
>single KVM guest sending packets to a physical host, work like a charm.
>
>With the switch debug option on, the DPDK lib spits out the following:
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:62
>VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
>VHOST_CONFIG: vring call idx:0 file:58
>
>VHOST_DATA: F0 Failed to allocate memory for mbuf. mbuf_pool:0x7fc7411ab5c0
>(the line above repeats continuously)
>
>After some tweaks to the logging code, it looks like the bad things happen
>within the code snippet below, in lib/librte_vhost/vhost_rxtx.c,
>function rte_vhost_dequeue_burst():
>
>612		vb_offset = 0;
>613		vb_avail = desc->len;
>614		/* Allocate an mbuf and populate the structure. */
>615		m = rte_pktmbuf_alloc(mbuf_pool);
>616		if (unlikely(m == NULL)) {
>617			RTE_LOG(ERR, VHOST_DATA,
>618				"F0 Failed to allocate memory for mbuf. mbuf_pool:%p\n", mbuf_pool);
>619			break;
>620		}
>621		seg_offset = 0;
>622		seg_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
>623		cpy_len = RTE_MIN(vb_avail, seg_avail);

To me this code is only reporting that the mbuf_pool does not have any more
mbufs, not that this code has some type of error. Either the number of mbufs
allocated to the mbuf_pool is not enough, or someplace in the code is not
freeing the mbufs after they are consumed. You need to find out why you have
run out of mbufs.

It is also possible the message should not have been an error but an
informational/warning message instead, as under some high-volume loads this
may occur and no amount of mbufs would resolve the condition.

Regards,
++Keith
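
One way to tell the two cases apart is to watch the pool's free count while
the traffic runs: if it trends steadily to zero and never recovers, mbufs
are leaking somewhere; if it only bottoms out at peak rate, the pool is
simply too small. A minimal sketch along those lines (the pool name and the
NB_MBUF/MBUF_CACHE sizes are hypothetical placeholders, not the values OVS
actually uses; rte_mempool_count() is the free-count accessor in DPDK
releases of this era):

#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical sizes for illustration only; tune for the real deployment. */
#define NB_MBUF    16384	/* total mbufs in the pool */
#define MBUF_CACHE   256	/* per-lcore cache size */

static struct rte_mempool *pktmbuf_pool;

static void
create_pool(void)
{
	/* rte_pktmbuf_pool_create() sizes both the mbuf metadata and the
	 * data room; RTE_MBUF_DEFAULT_BUF_SIZE covers a standard frame. */
	pktmbuf_pool = rte_pktmbuf_pool_create("vhost_mbuf_pool", NB_MBUF,
			MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	if (pktmbuf_pool == NULL)
		rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");
}

static void
sample_pool(void)
{
	/* rte_mempool_count() walks the per-lcore caches as well, so it
	 * is meant for debugging like this, not for the data path. */
	unsigned free_cnt = rte_mempool_count(pktmbuf_pool);

	printf("mbuf_pool free entries: %u of %u\n", free_cnt, NB_MBUF);
}

Calling sample_pool() periodically from a control thread while iperf runs
should make it clear whether the pool drains and recovers, or drains and
stays empty.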