From: "Wiles, Keith" <keith.wiles@intel.com>
To: Harsh Patel <thadodaharsh10@gmail.com>
CC: Kyle Larose <eomereadig@gmail.com>, "users@dpdk.org" <users@dpdk.org>
Date: Sat, 24 Nov 2018 16:01:04 +0000
Message-ID: <34E92C48-A90C-472C-A915-AAA4A6B5CDE8@intel.com>
In-Reply-To: <CAA0iYrFYZhbC-1=t37OkepbP=SNtsPMBYFR6p1CGj3rUQSnaFQ@mail.gmail.com>
Subject: Re: [dpdk-users] Query on handling packets



> On Nov 22, 2018, at 9:54 AM, Harsh Patel <thadodaharsh10@gmail.com> wrote:
> 
> Hi
> 
> Thank you so much for the reply and for the solution.
> 
> We used the given code. We were amazed by the pointer arithmetic you used; we got to learn something new.
> 
> But we are still underperforming. The same bottleneck of ~2.5 Mbps is seen.
> 
> We also checked whether the raw socket version was using any extra (logical) cores compared to the DPDK version. We found that the raw socket version has 2 logical threads running on 2 logical CPUs, whereas the DPDK version has 6 logical threads on 2 logical CPUs. We also ran the 6 threads on 4 logical CPUs, and we still see the same bottleneck.
> 
> We have updated our code (you can use the same links from the previous mail). It would be helpful if you could help us find what causes the bottleneck.

I looked at the code for a few seconds and noticed that your TX_TIMEOUT is a macro that calls (rte_get_timer_hz() / 2048). Just to be safe I would not call rte_get_timer_hz() every time; grab the value once, store the hz locally, and use that variable instead. My guess is this will not improve performance by itself, and I would have to look at the code of that routine to see whether storing the value locally buys you anything. If getting the hz is just a simple read of a variable then good, but you should still keep a local variable within the object holding the (rte_get_timer_hz() / 2048) result instead of doing the call and the divide each time.

> 
> Thanks and Regards,
> Harsh and Hrishikesh
> 
> 
> On Mon, Nov 19, 2018, 19:19 Wiles, Keith <keith.wiles@intel.com> wrote:
> 
> 
> > On Nov 17, 2018, at 4:05 PM, Kyle Larose <eomereadig@gmail.com> wrote:
> > 
> > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel <thadodaharsh10@gmail.com> wrote:
> >> 
> >> Hello,
> >> Thanks a lot for going through the code and providing us with so much
> >> information.
> >> We removed all the memcpy/malloc from the data path as you suggested and
> > ...
> >> After removing this, we are able to see a performance gain but not as good
> >> as raw socket.
> >> 
> > 
> > You're using an unordered_map to map your buffer pointers back to the
> > mbufs. While it may not do a memcpy all the time, it will likely end
> > up doing a malloc arbitrarily when you insert or remove entries from
> > the map. If it needs to resize the table, it'll be even worse. You may
> > want to consider using librte_hash:
> > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > see if you can design the system to avoid needing to do a lookup like
> > this. Can you return a handle with the mbuf pointer and the data
> > together?
> > 
> > You're also using floating point math where it's unnecessary (the
> > timing check). Just multiply the numerator by 1000000 prior to doing
> > the division. I doubt you'll overflow a uint64_t with that. It's not
> > as efficient as integer math, though I'm not sure offhand it'd cause a
> > major perf problem.
> > 
> > One final thing: using a raw socket, the kernel will take over
> > transmitting and receiving to the NIC itself. That means it is free to
> > use multiple CPUs for the rx and tx. I notice that you only have one
> > rx/tx queue, meaning at most one CPU can send and receive packets.
> > When running your performance test with the raw socket, you may want
> > to see how busy the system is doing packet sends and receives. Is it
> > using more than one CPU's worth of processing? Is it using less, but
> > when combined with your main application's usage, the overall system
> > is still using more than one?
> 
> Along with removing the floating point math, I would use the rte_rdtsc() function to work in cycles. Using something like:
> 
> uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16); /* One 16th of a second; use 2/4/8/16/32 power-of-two numbers to keep the math a simple divide */
> 
> cur_tsc = rte_rdtsc();
> 
> next_tsc = cur_tsc + timo; /* next_tsc is the next time to flush */
> 
> while(1) {
>         cur_tsc = rte_rdtsc();
>         if (cur_tsc >= next_tsc) {
>                 flush();
>                 next_tsc += timo;
>         }
>         /* Do other stuff */
> }
> 
> For the m_bufPktMap I would use the rte_hash, or do not use a hash at all: grab the buffer address and subtract the mbuf header plus headroom to get back to the mbuf pointer:
> 
> mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);
> 
> DpdkNetDevice::Write(uint8_t *buffer, size_t length)
> {
>         struct rte_mbuf *pkt;
>         uint64_t cur_tsc;
> 
>         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);
> 
>         /* No need to test pkt, but buffer may be checked for NULL before the math above */
> 
>         pkt->pkt_len = length;
>         pkt->data_len = length;
> 
>         rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);
> 
>         cur_tsc = rte_rdtsc();
> 
>         /* next_tsc is a private variable */
>         if (cur_tsc >= next_tsc) {
>                 rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer); /* hardcoded queue id, should be fixed */
>                 next_tsc = cur_tsc + timo; /* timo is a fixed number of cycles to wait */
>         }
>         return length;
> }
> 
> DpdkNetDevice::Read()
> {
>         struct rte_mbuf *pkt;
> 
>         if (m_rxBuffer->length == 0) {
>                 m_rxBuffer->next = 0;
>                 m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);
> 
>                 if (m_rxBuffer->length == 0)
>                         return std::make_pair(NULL, -1);
>         }
> 
>         pkt = m_rxBuffer->pkts[m_rxBuffer->next++];
> 
>         /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */
> 
>         return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
> }
> 
> void
> DpdkNetDevice::FreeBuf(uint8_t *buf)
> {
>         struct rte_mbuf *pkt;
> 
>         if (!buf)
>                 return;
>         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_MAX_HEADROOM);
> 
>         rte_pktmbuf_free(pkt);
> }
> 
> When your code is done with the buffer, convert the buffer address back to a rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the copy and the floating point code. Converting my C code to C++: priceless :-)
> 
> Hopefully the buffer address passed in is the original buffer address and has not been adjusted.
> 
> 
> Regards,
> Keith
> 

Regards,
Keith