From: "Wiles, Keith"
To: Wajeeha Javed
CC: users
Date: Fri, 12 Oct 2018 11:40:54 +0000
Message-ID: <795A46FC-C3A8-4DA9-A2D6-A5377F3B0D37@intel.com>
Subject: Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool

Stupid email program: I tell it to reply in the format as received (text format) and it still sends rich text format :-( Hope this is more readable.

> On Oct 11, 2018, at 11:48 PM, Wajeeha Javed wrote:
>
> Hi,
>
> I am in the process of developing a DPDK-based application where I would
> like to delay packets for about 2 seconds. There are two ports connected
> to the DPDK app, each receiving 64-byte packets at a line rate of
> 10 Gbit/s. Within 2 seconds, I will have about 28 million packets for each
> port in the delay application. The maximum RX descriptor count is 16384,
> and I am unable to increase the number of RX descriptors beyond that
> value. Is it possible to increase the number of RX descriptors to a larger
> value, e.g. 65536?

This is most likely a limitation of the NIC being used, and increasing beyond that value will not be possible; please check the programmer's guide for the NIC you are using.

> Therefore I copied the mbufs using the pktmbuf copy code (shown below)
> and freed the received packet. Now the issue is that I cannot copy more
> than 5 million packets, because the nb_mbufs of the mempool can't be more
> than 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF macro
> beyond 5 million, an error is returned: unable to init mbuf pool. Is
> there a possible way to increase the mempool size?

The mempool uses uint32_t for most sizes, and the number of mempool items is a uint32_t, so the number of entries can be up to ~4G. Just make sure you have enough memory, as the overhead for an mbuf is not just the header plus the packet size.
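As a rough, untested sketch (the function names, port id, pool size and cache size below are placeholders, not something from your code): you can ask the PMD for its real RX descriptor limit instead of hard-coding 16384, and size the pktmbuf pool explicitly. Keep in mind that ~28M mbufs at roughly 2.3 KB each (mbuf header + default 2 KB+ data room + mempool overhead) is on the order of 65 GB of hugepage memory per port.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Ask the PMD what it really supports instead of hard-coding 16384. */
static uint16_t
max_rx_desc(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	return dev_info.rx_desc_lim.nb_max;	/* hard limit of the NIC/PMD */
}

/* NB_MBUF well above 5M is fine for the mempool API itself; the real
 * constraint is having enough hugepage memory for nb_mbuf elements. */
static struct rte_mempool *
create_delay_pool(const char *name, unsigned int nb_mbuf)
{
	return rte_pktmbuf_pool_create(name, nb_mbuf,
				       512,			  /* per-lcore cache */
				       0,			  /* private area */
				       RTE_MBUF_DEFAULT_BUF_SIZE, /* data room */
				       rte_socket_id());
}

If rte_pktmbuf_pool_create() returns NULL with a large nb_mbuf, the usual cause is not a mempool limit but too few hugepages reserved for the size you asked for.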
My question is: why are you copying the mbuf and not just linking the mbufs into a linked list? Maybe I do not understand the reason. I would try to make sure you do not copy the data and instead link the mbufs together using the next pointer in the mbuf header, unless you already have chained mbufs.

The other question is: can you drop any packets? If not, then you only have the linking option IMO. If you can drop packets, then you can just start dropping them when the ring is getting full. Holding onto 28M packets for two seconds can cause other protocol-related problems; TCP could be sending retransmitted packets, and now you have caused a bunch of extra work on the RX side at the endpoint.

>
> Furthermore, kindly guide me if this is the appropriate mailing list for
> asking this type of question.

You are on the correct email list; dev@dpdk.org is normally for DPDK developers.

Hope this helps.

>
> static inline struct rte_mbuf *
> pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
> {
> 	struct rte_mbuf *mc = NULL;
> 	struct rte_mbuf **prev = &mc;
>
> 	do {
> 		struct rte_mbuf *mi;
>
> 		mi = rte_pktmbuf_alloc(mp);
> 		if (unlikely(mi == NULL)) {
> 			rte_pktmbuf_free(mc);
> 			rte_exit(EXIT_FAILURE, "Unable to Allocate Memory. Memory Failure.\n");
> 			return NULL;
> 		}
>
> 		mi->data_off = md->data_off;
> 		mi->data_len = md->data_len;
> 		mi->port = md->port;
> 		mi->vlan_tci = md->vlan_tci;
> 		mi->tx_offload = md->tx_offload;
> 		mi->hash = md->hash;
>
> 		mi->next = NULL;
> 		mi->pkt_len = md->pkt_len;
> 		mi->nb_segs = md->nb_segs;
> 		mi->ol_flags = md->ol_flags;
> 		mi->packet_type = md->packet_type;
>
> 		rte_memcpy(rte_pktmbuf_mtod(mi, char *), rte_pktmbuf_mtod(md, char *),
> 			   md->data_len);
>
> 		*prev = mi;
> 		prev = &mi->next;
> 	} while ((md = md->next) != NULL);
>
> 	*prev = NULL;
> 	return mc;
> }
>
> *Reference:* http://patchwork.dpdk.org/patch/6289/
>
> Thanks & Best Regards,
>
> Wajeeha Javed

Regards,
Keith
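P.S. A rough, untested sketch of what I mean by holding the mbuf pointers instead of copying: stash them in an rte_ring and only release them once they have aged ~2 seconds. The ring name and size, the use of udata64 for the arrival timestamp, and the single-producer/single-consumer flags are my assumptions, not something from your application.

#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_lcore.h>

static struct rte_ring *delay_ring;

/* Ring of mbuf pointers only, so no packet data is copied.
 * Size must be a power of two and cover the worst case (~28M packets). */
static void
delay_init(void)
{
	delay_ring = rte_ring_create("pkt_delay", 1 << 25, rte_socket_id(),
				     RING_F_SP_ENQ | RING_F_SC_DEQ);
}

static void
delay_enqueue(struct rte_mbuf *m)
{
	m->udata64 = rte_get_tsc_cycles();	/* remember arrival time */
	if (rte_ring_enqueue(delay_ring, m) != 0)
		rte_pktmbuf_free(m);		/* ring full: drop instead of copy */
}

/* Pull out packets that are at least delay_cycles old, in arrival order.
 * delay_cycles would be 2 * rte_get_tsc_hz() for a 2 second delay. */
static uint16_t
delay_dequeue(struct rte_mbuf **tx, uint16_t max, uint64_t delay_cycles)
{
	static struct rte_mbuf *held;		/* head packet not yet old enough */
	uint64_t now = rte_get_tsc_cycles();
	uint16_t n = 0;

	while (n < max) {
		struct rte_mbuf *m = held;

		held = NULL;
		if (m == NULL && rte_ring_dequeue(delay_ring, (void **)&m) != 0)
			break;			/* nothing waiting */
		if (now - m->udata64 < delay_cycles) {
			held = m;		/* not aged long enough, keep for later */
			break;
		}
		tx[n++] = m;
	}
	return n;
}

The static "held" slot only works because the dequeue side is a single thread (hence RING_F_SC_DEQ); with multiple consumers you would need a different way to peek at the head.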