From: MAC Lee
To: users@dpdk.org, Filip Janiszewski
Date: Sun, 25 Mar 2018 11:30:12 +0000 (UTC)
Message-ID: <277260559.2895913.1521977412628@mail.yahoo.com>
Subject: [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst
List-Id: DPDK usage discussions <users.dpdk.org>

Hi Filip,
    which dpdk version are you using? You can take a look at the source code of dpdk; the rxdrop counter may not be implemented in dpdk, so you always get 0 in rxdrop.

Thanks,
Marco

--------------------------------------------
On 2018/3/25 (Sun), Filip Janiszewski wrote:

Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
To: users@dpdk.org
Date: Sunday, 25 March 2018, 6:33 PM

Hi Everybody,

I have a weird drop problem, and to understand my question the best way is to have a look at this simple snippet (cleaned of all the irrelevant stuff):

while( 1 )
{
    if( config->running == false ) {
        break;
    }

    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE );

    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;
        num_of_enq_pkt = rte_ring_sp_enqueue_bulk( config->incoming_pkts_ring,
                                                   (void*)buffers,
                                                   num_of_pkt,
                                                   &rx_ring_free_space );
        //if num_of_enq_pkt == 0, free the mbufs..
    }
}

This loop retrieves packets from the device and pushes them into a ring for further processing by another lcore.

When I run a test with a Mellanox card sending 20M (20878300) packets at 2.5M p/s, the loop seems to miss some packets, and pk_captured is always around 19M or so.

rx_ring_full is never true, which means that num_of_pkt is always < MAX_BURST_DEQ_SIZE, so according to the documentation I should not have drops at HW level. Also, num_of_enq_pkt is never 0, which means that all the packets are enqueued.

Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call (and make sure to release all the mbufs), then pk_captured is always exactly equal to the number of packets I've sent to the NIC.

So it seems (though I can't quite accept this idea) that rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to rte_eth_rx_burst and another some packets are dropped due to a full ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always sufficient room for the packets?

Is anybody able to help me understand what's happening here?

Note, MAX_BURST_DEQ_SIZE is 512.

Thanks