DPDK patches and discussions
From: sabu kurian <sabu2kurian@gmail.com>
To: "Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
Date: Wed, 28 May 2014 16:24:09 +0530	[thread overview]
Message-ID: <CAJ2bnfB4OAVqB11KRwHjXhqBm3Zsn+u8xDq++h8JAueHQppJ-A@mail.gmail.com> (raw)
In-Reply-To: <59AF69C657FD0841A61C55336867B5B01AA2F05B@IRSMSX103.ger.corp.intel.com>

Hi Bruce,

Thanks for the reply.

I had already tried that. With a burst size of 64 or 128 it simply fails:
the card sends out a few packets (some 400 packets of 74 bytes each) and
then freezes. For my application I'm trying to generate the highest
traffic the link speed and the NIC allow.
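
For reference, a minimal sketch of such a burst transmit loop (hypothetical,
not the actual code from this thread; the names mirror the snippet quoted
below, and the return-value handling is the standard DPDK pattern:
rte_eth_tx_burst() reports how many packets were actually queued, and
anything not accepted has to be freed or retried so the mbuf pool is not
drained):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: send one burst of pre-built packets on the given
 * port (queue 0) and free whatever the driver did not accept, so the
 * mbuf pool is not drained. */
static void
send_one_burst(uint8_t port_id, struct rte_mbuf **pkts, uint16_t burst_size)
{
    uint16_t nb_tx, i;

    /* Returns the number of packets actually queued on the TX ring. */
    nb_tx = rte_eth_tx_burst(port_id, 0, pkts, burst_size);

    /* Any unsent mbufs must be freed (or kept for a retry), otherwise
     * the mempool eventually runs dry. */
    for (i = nb_tx; i < burst_size; i++)
        rte_pktmbuf_free(pkts[i]);
}

Called, for example, as send_one_burst(port_ids[lcore_id], pkts, burst_size)
from the main transmit loop, where pkts[] holds the crafted mbufs.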



On Wed, May 28, 2014 at 4:16 PM, Richardson, Bruce <
bruce.richardson@intel.com> wrote:

> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of sabu kurian
> > Sent: Wednesday, May 28, 2014 10:42 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > I have asked a similar question before, though no one replied.
> >
> > I'm crafting my own packets in mbufs (74-byte packets, all of them) and
> > sending them using
> >
> > ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> >
> > When burst_size is 1, it does work, in the sense that the NIC keeps
> > sending packets, but only at a little over 50 percent of the link rate:
> > on a 1000 Mbps link the observed transmit rate is 580 Mbps (using Intel
> > DPDK). It should be possible to achieve at least a 900 Mbps transmit
> > rate with Intel DPDK and the I350 on a 1 Gbps link.
> >
> > Could someone help me out with this?
> >
> > Thanks and regards
>
> Sending out a single packet at a time is going to have a very high
> overhead, as each call to tx_burst involves making PCI transactions (MMIO
> writes to the hardware ring pointer). To reduce this penalty you should
> look to send out the packets in bursts, thereby saving PCI bandwidth and
> splitting the cost of each MMIO write over multiple packets.
>
> Regards,
> /Bruce
>
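
A sketch of the buffer-and-flush pattern Bruce describes above (hypothetical
names; the idea, as in DPDK's l2fwd sample application, is to queue packets
in software and call rte_eth_tx_burst() only once a full burst has
accumulated, so one MMIO write to the TX tail pointer is shared by many
packets):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PKT_BURST 32            /* packets accumulated before one TX call */

struct tx_buffer {
    struct rte_mbuf *pkts[PKT_BURST];
    uint16_t count;
};

/* Hypothetical helper: buffer one crafted packet and flush to the NIC only
 * when a full burst has accumulated, splitting the cost of the MMIO write
 * across PKT_BURST packets instead of paying it per packet. */
static inline void
buffer_and_tx(struct tx_buffer *buf, uint8_t port_id, struct rte_mbuf *m)
{
    uint16_t sent;

    buf->pkts[buf->count++] = m;
    if (buf->count < PKT_BURST)
        return;

    sent = rte_eth_tx_burst(port_id, 0, buf->pkts, PKT_BURST);

    /* Free anything the driver did not take so the mempool stays full. */
    while (sent < PKT_BURST)
        rte_pktmbuf_free(buf->pkts[sent++]);

    buf->count = 0;
}

Sending one packet per call, by contrast, pays that per-call overhead for
every 74-byte frame, which is the penalty described in the reply above.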


Thread overview: 6+ messages
2014-05-28  9:41 sabu kurian
2014-05-28 10:46 ` Richardson, Bruce
2014-05-28 10:54   ` sabu kurian [this message]
2014-05-28 11:18     ` Richardson, Bruce
2014-05-28 11:39       ` sabu kurian
2015-07-26  6:42 he peng
