From: "Wiles, Keith"
To: Wajeeha Javed
CC: users@dpdk.org
Subject: Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
Date: Fri, 12 Oct 2018 11:35:00 +0000
Message-ID: <2A2ED626-CEA7-4DD5-A29C-E515B67BF5F1@intel.com>

On Oct 11, 2018, at 11:48 PM, Wajeeha Javed wrote:

> Hi,
>
> I am in the process of developing a DPDK-based application where I would
> like to delay packets for about 2 seconds. There are two ports connected
> to the DPDK app, each carrying 64-byte packets at a line rate of 10 Gb/s.
> Within 2 seconds I will have 28 million packets for each of the ports in
> the delay application.
>
> The maximum RX descriptor count is 16384, and I am unable to increase the
> number of RX descriptors beyond that value. Is it possible to increase
> the number of RX descriptors to a larger value, e.g. 65536?

This is most likely a limitation of the NIC being used, and increasing
beyond that value will not be possible. Please check the programmer's
guide of the NIC being used.

> Therefore I copied the mbufs using the pktmbuf copy code (shown below)
> and freed the packet received. Now the issue is that I cannot copy more
> than 5 million packets, because the nb_mbufs of the mempool can't be more
> than 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF macro
> beyond 5 million, the error "unable to init mbuf pool" is returned. Is
> there a possible way to increase the mempool size?

The mempool uses uint32_t for most sizes, and the number of mempool items
is a uint32_t, so the number of entries can be ~4G. That said, make sure
you have enough memory, as the overhead for an mbuf is not just the header
+ the packet size (see the rough sizing sketch below).

My question is: why are you copying the mbuf and not just linking the
mbufs into a linked list? Maybe I do not understand the reason. I would
try to make sure you do not copy the data and instead link the mbufs
together using the next pointer in the mbuf header, unless you have
chained mbufs already.
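As a minimal sketch of the linking idea, assuming single-segment
(non-chained) mbufs: the names delay_head, delay_tail, delay_enqueue and
delay_dequeue are made up for this example, and stashing the arrival time
in the mbuf's udata64 field is only one option for the 2-second check. No
payload is copied and no second mempool is needed:

#include <rte_cycles.h>
#include <rte_mbuf.h>

/* Hypothetical FIFO delay line threaded through the mbuf next pointers;
 * assumes every held packet is a single segment. */
static struct rte_mbuf *delay_head; /* oldest held packet */
static struct rte_mbuf *delay_tail; /* newest held packet */

static inline void
delay_enqueue(struct rte_mbuf *m)
{
    m->udata64 = rte_rdtsc(); /* remember arrival time */
    m->next = NULL;
    if (delay_tail == NULL)
        delay_head = m;
    else
        delay_tail->next = m;
    delay_tail = m;
}

/* Return the oldest packet once it has aged delay_cycles TSC ticks,
 * e.g. delay_cycles = 2 * rte_get_tsc_hz(); otherwise NULL. */
static inline struct rte_mbuf *
delay_dequeue(uint64_t delay_cycles)
{
    struct rte_mbuf *m = delay_head;

    if (m == NULL || rte_rdtsc() - m->udata64 < delay_cycles)
        return NULL;
    delay_head = m->next;
    if (delay_head == NULL)
        delay_tail = NULL;
    m->next = NULL; /* detach before handing to TX or freeing */
    return m;
}

Holding the RX mbufs like this does mean the RX mempool itself has to
cover the full two seconds of traffic, which is really your mempool size
question again.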
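On the mempool size, a rough back-of-the-envelope sketch (my numbers and
assumed defaults, not measured): the "unable to init mbuf pool" failure at
5 million mbufs is most likely hugepage memory running out, not a mempool
limit.

#include <stdlib.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Sizing: 2 ports * 28M held packets plus headroom for the RX/TX rings
 * is ~60M mbufs, well inside the uint32_t element count. Each element
 * costs about sizeof(struct rte_mbuf) (128B) + mempool object overhead
 * + RTE_MBUF_DEFAULT_BUF_SIZE (2176B), roughly 2.4KB in total, so 60M
 * mbufs need on the order of 140GB of hugepage memory. */
static struct rte_mempool *
create_delay_pool(void)
{
    struct rte_mempool *mp;

    mp = rte_pktmbuf_pool_create("delay_pool",
                                 60U * 1000 * 1000, /* number of mbufs */
                                 512,               /* per-lcore cache */
                                 0,                 /* private area size */
                                 RTE_MBUF_DEFAULT_BUF_SIZE,
                                 rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool: %s\n",
                 rte_strerror(rte_errno));
    return mp;
}

If you cannot dedicate that much memory, then dropping packets is the only
way out, which brings me to the next point.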
The other question is: can you drop any packets? If not, then you only
have the linking option, IMO. If you can drop packets, then you can just
start dropping them when the ring is getting full. Holding onto 28M
packets for two seconds can cause other protocol-related problems: TCP
could be sending retransmitted packets, and now you have caused a bunch of
extra work on the RX side at the endpoint.

> Furthermore, kindly guide me if this is the appropriate mailing list for
> asking this type of question.

You are on the correct email list; dev@dpdk.org is normally for DPDK
developers.

Hope this helps.

> static inline struct rte_mbuf *
> pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
> {
>     struct rte_mbuf *mc = NULL;
>     struct rte_mbuf **prev = &mc;
>
>     do {
>         struct rte_mbuf *mi;
>
>         /* Allocate a fresh mbuf for this segment of the source packet. */
>         mi = rte_pktmbuf_alloc(mp);
>         if (unlikely(mi == NULL)) {
>             rte_pktmbuf_free(mc);
>             /* rte_exit() terminates the app; the return is unreachable. */
>             rte_exit(EXIT_FAILURE,
>                      "Unable to Allocate Memory. Memory Failure.\n");
>             return NULL;
>         }
>
>         /* Copy the per-segment and per-packet metadata. */
>         mi->data_off = md->data_off;
>         mi->data_len = md->data_len;
>         mi->port = md->port;
>         mi->vlan_tci = md->vlan_tci;
>         mi->tx_offload = md->tx_offload;
>         mi->hash = md->hash;
>         mi->next = NULL;
>         mi->pkt_len = md->pkt_len;
>         mi->nb_segs = md->nb_segs;
>         mi->ol_flags = md->ol_flags;
>         mi->packet_type = md->packet_type;
>
>         /* Deep-copy the segment payload. */
>         rte_memcpy(rte_pktmbuf_mtod(mi, char *),
>                    rte_pktmbuf_mtod(md, char *), md->data_len);
>
>         /* Append the new segment to the copy's chain. */
>         *prev = mi;
>         prev = &mi->next;
>     } while ((md = md->next) != NULL);
>
>     *prev = NULL;
>     return mc;
> }
>
> Reference: http://patchwork.dpdk.org/patch/6289/
>
> Thanks & Best Regards,
>
> Wajeeha Javed

Regards,
Keith