DPDK patches and discussions
From: sabu kurian <sabu2kurian@gmail.com>
To: "Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
Date: Wed, 28 May 2014 17:09:03 +0530	[thread overview]
Message-ID: <CAJ2bnfDnKXkEVnHihnKcP_9G_F9RV6gD-OnCyEA-eOvkWtPszw@mail.gmail.com> (raw)
In-Reply-To: <59AF69C657FD0841A61C55336867B5B01AA2F0A8@IRSMSX103.ger.corp.intel.com>

Hi Bruce,


I changed the burst size to 16. The code crafts 54-byte TCP packets; it
sends a few packets and then crashes with a segmentation fault.

Below is the portion of the code that sends the packets:

ret = rte_eth_tx_burst(1, 0, m_pool, burst_size);

if (ret < 16)
{
    for (i = (int)burst_size - ret; i < (int)burst_size; i++)
    {
        rte_pktmbuf_free(m_pool[i]);
        printf("\n Packet dropped %d", i);
    }
}
else
{
    lcore_stats[lcore_id].tx += (uint64_t)burst_size;
}

The above code runs inside an infinite for loop.
m_pool is an array of 16 mbuf pointers, each allocated with rte_pktmbuf_alloc().

I'm trying to achieve the maximum transfer rate. Is there any other way to
do this with Intel DPDK, or am I missing something?
The code works perfectly inside a virtual machine (VMware) with emulated
NICs, but, as expected, the host kernel drops 99% of the packets.

I'm using an Intel® Core™ i7-3770 CPU @ 3.40GHz.



On Wed, May 28, 2014 at 4:48 PM, Richardson, Bruce <
bruce.richardson@intel.com> wrote:

>
> > From: sabu kurian [mailto:sabu2kurian@gmail.com]
> > Sent: Wednesday, May 28, 2014 11:54 AM
> > To: Richardson, Bruce
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > Hai bruce,
> > Thanks for the reply.
> > I even tried that before. Having a burst size of 64 or 128 simply fails.
> The card would send out a few packets
> > (some 400 packets of 74 byte size) and then freeze. For my
> application... I'm trying to generate the peak
> > traffic possible with the link speed and the NIC.
>
> Bursts of 64 and 128 are rather large, can you perhaps try using bursts of
> 16 and 32 and see what the result is? The drivers are generally tuned for a
> max burst size of about 32 packets.
>
>

Thread overview: 6+ messages
2014-05-28  9:41 sabu kurian
2014-05-28 10:46 ` Richardson, Bruce
2014-05-28 10:54   ` sabu kurian
2014-05-28 11:18     ` Richardson, Bruce
2014-05-28 11:39       ` sabu kurian [this message]
2015-07-26  6:42 he peng
