From: waqas ahmed
Date: Fri, 12 Oct 2018 17:40:01 +0500
To: users@dpdk.org, wajeeha.javed123@gmail.com
Subject: Re: [dpdk-users] users Digest, Vol 155, Issue 7

I think you are increasing the descriptor count to make the RX ring behave like a
2-second FIFO; that isn't necessary. What you need is a large amount of main memory,
from which you allocate an appropriate number of hugepages to the DPDK app. You can
do a rough calculation with the mbuf size and provision 28 million mbufs for the
2-second delay (a sizing sketch is inserted after the quoted topic list below). Once
the 512 RX descriptors are exhausted, the old entries are simply refilled with new
mbuf pointers: every time the NIC receives a packet, an mbuf is allocated from the
pool, if one is available.

On Fri, Oct 12, 2018 at 3:00 PM wrote:
> Send users mailing list submissions to
>         users@dpdk.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://mails.dpdk.org/listinfo/users
> or, via email, send a message with subject or body 'help' to
>         users-request@dpdk.org
>
> You can reach the person managing the list at
>         users-owner@dpdk.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>    1. Re: Problems compiling DPDK for MLX4 (Anthony Hart)
>    2. Increasing the NB_MBUFs of PktMbuf MemPool (Wajeeha Javed)
>    3. Re: Increasing the NB_MBUFs of PktMbuf MemPool (Andrew Rybchenko)
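[A sizing sketch for the calculation mentioned above -- not part of the original
digest. It assumes the default 2176-byte data room and on the order of 2.5 KB of
memory per mbuf; the pool name "delay_pool", the 28 M constant and the 64-byte
overhead guess are illustrative only, not taken from the thread.]

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

#define PORT_DELAY_MBUFS 28000000UL  /* ~2 s of 64-byte packets at 10 Gbit/s */

static struct rte_mempool *
create_delay_pool(void)
{
        /* per-mbuf cost: mbuf header + default data room (2176 B) plus a
         * rough guess at mempool/ring overhead, i.e. roughly 2.5 KB */
        size_t per_mbuf = sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE + 64;
        size_t total = per_mbuf * PORT_DELAY_MBUFS;

        printf("roughly %zu MB of hugepage memory needed per port\n",
               total / (1024 * 1024));

        /* one pool sized for a single port's 2-second backlog */
        return rte_pktmbuf_pool_create("delay_pool", PORT_DELAY_MBUFS,
                                       512 /* per-lcore cache */, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}

With 64-byte packets that works out to roughly 60 GB of hugepages per port, which
is why the main memory, not the descriptor ring, is the thing to grow.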
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 11 Oct 2018 16:21:48 -0400
> From: Anthony Hart
> To: Cliff Burdick
> Cc: users
> Subject: Re: [dpdk-users] Problems compiling DPDK for MLX4
>
> Thanks, I will try that.
>
> > On Oct 11, 2018, at 3:11 PM, Cliff Burdick wrote:
> >
> > The easy workaround is to install the Mellanox OFED package with the
> > flags --dpdk --upstream-libs.
> >
> > On Thu, Oct 11, 2018 at 8:57 AM Anthony Hart wrote:
> >
> > Having problems compiling DPDK for the Mellanox PMD.
> >
> > For dpdk-18.08 I get...
> >
> >   CC efx_phy.o
> > In file included from /root/th/dpdk-18.08/drivers/net/mlx4/mlx4_txq.c:35:0:
> > /root/th/dpdk-18.08/drivers/net/mlx4/mlx4_glue.h:16:31: fatal error:
> > infiniband/mlx4dv.h: No such file or directory
> >  #include <infiniband/mlx4dv.h>
> >                                ^
> > compilation terminated.
> > In file included from /root/th/dpdk-18.08/drivers/net/mlx4/mlx4_intr.c:32:0:
> > /root/th/dpdk-18.08/drivers/net/mlx4/mlx4_glue.h:16:31: fatal error:
> > infiniband/mlx4dv.h: No such file or directory
> >  #include <infiniband/mlx4dv.h>
> >
> > For dpdk-17.11.3:
> >
> >   CC mlx5.o
> > /root/th/dpdk-stable-17.11.3/drivers/net/mlx5/mlx5.c: In function 'mlx5_pci_probe':
> > /root/th/dpdk-stable-17.11.3/drivers/net/mlx5/mlx5.c:921:21: error:
> > 'struct ibv_device_attr_ex' has no member named 'device_cap_flags_ex'
> >    !!(device_attr_ex.device_cap_flags_ex &
> >                     ^
> > /root/th/dpdk-stable-17.11.3/drivers/net/mlx5/mlx5.c:922:7: error:
> > 'IBV_DEVICE_RAW_IP_CSUM' undeclared (first use in this function)
> >        IBV_DEVICE_RAW_IP_CSUM);
> >        ^
> > /root/th/dpdk-stable-17.11.3/drivers/net/mlx5/mlx5.c:922:7: note: each
> > undeclared identifier is reported only once for each function it appears in
> > /root/th/dpdk-stable-17.11.3/drivers/net/mlx5/mlx5.c:942:18: error:
> > 'struct ibv_device_attr_ex' has no member named 'rss_caps'
> >    device_attr_ex.rss_caps.max_rwq_indirection_table_size;
> >                  ^
> >
> > This is on CentOS 7.5 (3.10.0-862.14.4.el7.x86_64)
> > with mlnx-en-4.4-1.0.1.0-rhel7.5-x86_64.iso installed.
> >
> > Any thoughts?
> >
> > thanks
> > tony
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 12 Oct 2018 09:48:06 +0500
> From: Wajeeha Javed
> To: users@dpdk.org
> Subject: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
>
> Hi,
>
> I am in the process of developing a DPDK-based application in which I would
> like to delay packets by about 2 seconds. Two ports are connected to the DPDK
> app, each receiving 64-byte packets at a line rate of 10 Gbit/s, so within
> 2 seconds the delay application will be holding about 28 million packets per
> port. The maximum RX descriptor count is 16384, and I am unable to increase
> the number of RX descriptors beyond that value. Is it possible to raise the
> number of RX descriptors to a much larger value, e.g. 65536? Because of this
> limit I copy each received mbuf using the pktmbuf copy code shown below and
> then free the original packet. The problem now is that I cannot buffer more
> than 5 million packets, because the mempool's nb_mbufs cannot be more than
> 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF macro beyond
> 5 million, pool creation fails with "unable to init mbuf pool". Is there a
> possible way to increase the mempool size?
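[Inline note, not part of the original digest: the RX ring cannot be pushed to
65536 through the ethdev API in any case -- rte_eth_rx_queue_setup() takes the
descriptor count as a uint16_t, and each PMD advertises its own ceiling. A minimal
sketch of querying that ceiling, assuming 'port_id' is an already probed port:]

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_rx_desc_limits(uint16_t port_id)
{
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);

        /* e.g. 16384 is the nb_max reported by several 10G PMDs */
        printf("port %u: RX descriptors min=%u max=%u align=%u\n",
               port_id, info.rx_desc_lim.nb_min, info.rx_desc_lim.nb_max,
               info.rx_desc_lim.nb_align);
}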
> Furthermore, kindly guide me as to whether this is the appropriate mailing
> list for this type of question.
>
> static inline struct rte_mbuf *
> pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
> {
>     struct rte_mbuf *mc = NULL;
>     struct rte_mbuf **prev = &mc;
>
>     do {
>         struct rte_mbuf *mi;
>
>         mi = rte_pktmbuf_alloc(mp);
>         if (unlikely(mi == NULL)) {
>             rte_pktmbuf_free(mc);
>             rte_exit(EXIT_FAILURE, "Unable to allocate memory. Memory failure.\n");
>             return NULL;
>         }
>
>         /* copy the per-segment metadata from the source mbuf */
>         mi->data_off = md->data_off;
>         mi->data_len = md->data_len;
>         mi->port = md->port;
>         mi->vlan_tci = md->vlan_tci;
>         mi->tx_offload = md->tx_offload;
>         mi->hash = md->hash;
>
>         mi->next = NULL;
>         mi->pkt_len = md->pkt_len;
>         mi->nb_segs = md->nb_segs;
>         mi->ol_flags = md->ol_flags;
>         mi->packet_type = md->packet_type;
>
>         /* copy the packet data of this segment */
>         rte_memcpy(rte_pktmbuf_mtod(mi, char *), rte_pktmbuf_mtod(md, char *),
>                    md->data_len);
>
>         /* link the new segment into the copied chain */
>         *prev = mi;
>         prev = &mi->next;
>     } while ((md = md->next) != NULL);
>
>     *prev = NULL;
>     return mc;
> }
>
> Reference: http://patchwork.dpdk.org/patch/6289/
>
> Thanks & Best Regards,
>
> Wajeeha Javed
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 12 Oct 2018 11:56:48 +0300
> From: Andrew Rybchenko
> To: Wajeeha Javed
> Subject: Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
>
> Hi,
>
> On 10/12/18 7:48 AM, Wajeeha Javed wrote:
> > I am in the process of developing a DPDK-based application in which I
> > would like to delay packets by about 2 seconds. [...]
> > If I increase the NB_MBUF macro beyond 5 million, pool creation fails
> > with "unable to init mbuf pool". Is there a possible way to increase
> > the mempool size?
>
> I failed to find an explicit limitation at first glance.
> The NB_MBUF define is typically internal to the examples/apps.
> The question I would like to double-check is whether the host has enough
> RAM and hugepages allocated: 5 million mbufs already require about 10G.
>
> Andrew.
>
>
> End of users Digest, Vol 155, Issue 7
> *************************************
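[Following up on Andrew's point, outside the quoted digest: when pool creation
fails it helps to print rte_errno -- ENOMEM there usually means the hugepage
allocation is too small, not that nb_mbufs has hit a hard limit. A minimal
sketch, assuming one pool per port; the helper name try_create_pool is made up:]

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_errno.h>
#include <rte_lcore.h>

static struct rte_mempool *
try_create_pool(const char *name, unsigned int nb_mbufs)
{
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create(name, nb_mbufs, 512, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                     rte_socket_id());
        if (mp == NULL)
                /* distinguish "not enough hugepages" from other failures */
                fprintf(stderr, "pool '%s' (%u mbufs) failed: %s\n",
                        name, nb_mbufs, rte_strerror(rte_errno));
        return mp;
}

With two ports at 28 million mbufs each, two such pools (one per port, ideally on
the port's own NUMA socket) need well over 100 GB of hugepages, which is consistent
with Andrew's estimate of about 10G for 5 million mbufs.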