From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wiles, Keith"
To: Harsh Patel
CC: "users@dpdk.org"
Date: Tue, 13 Nov 2018 13:47:48 +0000
Message-ID: <76959924-D9DB-4C58-BB05-E33107AD98AC@intel.com>
Subject: Re: [dpdk-users] Query on handling packets
List-Id: DPDK usage discussions

> On Nov 12, 2018, at 8:25 PM, Harsh Patel wrote:
>
> Hello,
> It would be really helpful if you could provide us a link (for both Tx and Rx) to the project you mentioned earlier, where you worked on a similar problem, if possible.
>

At this time I cannot provide a link. I will try and see what I can do, but do not hold your breath; it could be a while, as we have to go through a lot of legal steps. In the meantime, try Intel's VTune tool for x86 systems, if you can get a copy for your platform, as it can tell you a lot about the code and where the performance issues are located. If you are not running Intel x86 then my code may not work for you; I do not remember if you told me which platform you are on.

> Thanks and Regards,
> Harsh & Hrishikesh
>
> On Mon, 12 Nov 2018 at 01:15, Harsh Patel wrote:
> Thanks a lot for all the support. We are looking into our work as of now and will contact you once we are done checking it completely from our side. Thanks for the help.
>
> Regards,
> Harsh and Hrishikesh
>
> On Sat, 10 Nov 2018 at 11:47, Wiles, Keith wrote:
> Please make sure to send your emails in plain text format. The Mac mail program loves to use rich-text format if the original email uses it, even though I have told it to only send plain text :-(
>
> > On Nov 9, 2018, at 4:09 AM, Harsh Patel wrote:
> >
> > We have implemented the logic for Tx/Rx as you suggested. We compared the obtained throughput with another version of the same application that uses Linux raw sockets.
> > Unfortunately, the throughput we receive in our DPDK application is less by a good margin. Is there any way we can optimize our implementation, or anything that we are missing?
> >
>
> The PoC code I was developing for DAPI did not have any performance issues; it ran just as fast, in my limited testing. I converted the l3fwd code and, as I remember, saw 10G 64-byte wire rate using pktgen to generate the traffic.
>
> Not sure why you would see a big performance drop, but I do not know your application or code.
>
> > Thanks and regards
> > Harsh & Hrishikesh
> >
> > On Thu, 8 Nov 2018 at 23:14, Wiles, Keith wrote:
> >
> >
> >> On Nov 8, 2018, at 4:58 PM, Harsh Patel wrote:
> >>
> >> Thanks for your insight on the topic. Transmission is working with the functions you mentioned. We tried to search for some similar functions for handling incoming packets but could not find anything. Can you help us on that as well?
> >>
> >
> > I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) PoC I was working on, and presented at the DPDK Summit last September, I did create an RX-side version. The issue is that it is a bit tangled up in the DAPI PoC.
> >
> > The basic concept is that a call to RX a single packet does an rx_burst of N packets, keeping them in an mbuf list. The code would spin waiting for mbufs to arrive, or return quickly if a flag was set. When it did find RX mbufs it would return just the single mbuf and keep the list of mbufs for later requests, until the list is empty, then do another rx_burst call.
> >
> > Sorry, this is a really quick note on how it works. If you need more details we can talk more later.
> >>
> >> Regards,
> >> Harsh and Hrishikesh.
> >>
> >>
> >> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith wrote:
> >>
> >>
> >> > On Nov 8, 2018, at 8:24 AM, Harsh Patel wrote:
> >> >
> >> > Hi,
> >> > We are working on a project where we are trying to integrate DPDK with
> >> > another software. We are able to obtain packets from the other environment
> >> > into the DPDK environment in a one-by-one fashion. On the other hand, DPDK allows
> >> > sending/receiving bursts of data packets. We want to know if there is any
> >> > functionality in DPDK to achieve this conversion of a single incoming packet
> >> > to a burst of packets sent on the NIC and, similarly, conversion of a burst of
> >> > packets read from the NIC to send to the other environment sequentially?
> >>
> >>
> >> Search in the docs or the lib/librte_ethdev directory for rte_eth_tx_buffer_init, rte_eth_tx_buffer, ...
> >>
> >>
> >>
> >> > Thanks and regards
> >> > Harsh Patel, Hrishikesh Hiraskar
> >> > NITK Surathkal
> >>
> >> Regards,
> >> Keith
> >>
> >
> > Regards,
> > Keith
> >
>
> Regards,
> Keith
>

Regards,
Keith
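[Editor's note] The single-packet RX scheme Keith describes above (cache a burst, hand out one mbuf per call, refill only when the cache drains) can be sketched in a few lines. This is an illustration only, not Keith's DAPI code: `mbuf_t`, `mock_rx_burst`, `rx_one`, and the sizes are all invented stand-ins, with the mock replacing what would be `rte_eth_rx_burst()` on a real port/queue.

```c
#include <stddef.h>
#include <assert.h>

/* Hypothetical stand-in for struct rte_mbuf. */
typedef struct { int id; } mbuf_t;

#define BURST_SIZE 32

/* Mock burst receive: pretends the NIC always has 4 packets ready.
 * Real code would call rte_eth_rx_burst(port, queue, pkts, n) here. */
static int next_id = 0;
static unsigned mock_rx_burst(mbuf_t *pkts[], unsigned n) {
    static mbuf_t pool[4];
    unsigned got = n < 4 ? n : 4;
    for (unsigned i = 0; i < got; i++) {
        pool[i].id = next_id++;
        pkts[i] = &pool[i];
    }
    return got;
}

/* Single-packet RX on top of a burst API: cache one burst, hand out
 * one mbuf per call, refill with another burst only when empty. */
static mbuf_t *cache[BURST_SIZE];
static unsigned cached = 0, pos = 0;

mbuf_t *rx_one(void) {
    if (pos == cached) {               /* cache drained: refill */
        cached = mock_rx_burst(cache, BURST_SIZE);
        pos = 0;
        if (cached == 0)
            return NULL;               /* nothing arrived; caller may retry */
    }
    return cache[pos++];
}
```

A caller can simply loop on `rx_one()`; the burst refill is invisible to it, which is exactly the point of the pattern.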
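[Editor's note] For the TX direction, the `rte_eth_tx_buffer_init` / `rte_eth_tx_buffer` helpers Keith points at implement essentially this: queue packets one at a time and transmit them as a burst once the buffer fills (with `rte_eth_tx_buffer_flush` for the remainder). A self-contained sketch of that behavior, with `pkt_t`, `mock_tx_burst`, and the burst size as invented stand-ins:

```c
#include <assert.h>

#define TX_BURST 4

/* Hypothetical stand-in for struct rte_mbuf. */
typedef struct { int id; } pkt_t;

static pkt_t *txq[TX_BURST];
static unsigned txq_len = 0;

/* Mock burst send: real code would call rte_eth_tx_burst(). */
static unsigned mock_tx_burst(pkt_t *pkts[], unsigned n) {
    (void)pkts;
    return n;                          /* pretend the NIC accepted all */
}

/* Flush whatever is queued as one burst; returns packets sent. */
unsigned tx_flush(void) {
    unsigned sent = mock_tx_burst(txq, txq_len);
    txq_len = 0;
    return sent;
}

/* Queue one packet; a full buffer triggers a burst automatically. */
unsigned tx_one(pkt_t *p) {
    txq[txq_len++] = p;
    return (txq_len == TX_BURST) ? tx_flush() : 0;
}
```

As with the real helpers, a periodic `tx_flush()` is needed so a trickle of packets is not stuck in a half-full buffer forever.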