DPDK patches and discussions
* [dpdk-dev] Intel I350 fails to work with DPDK
@ 2014-05-28  9:41 sabu kurian
  2014-05-28 10:46 ` Richardson, Bruce
  0 siblings, 1 reply; 6+ messages in thread
From: sabu kurian @ 2014-05-28  9:41 UTC (permalink / raw)
  To: dev

I have asked a similar question before, but no one replied.

I'm crafting my own packets in mbufs (74-byte packets) and sending them
using:

ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);

When burst_size is 1, it does work, in the sense that the NIC keeps
sending packets, but only at a little over 50 percent of the link rate:
on a 1000 Mbps link, the observed transmit rate of the NIC is 580 Mbps
(using Intel DPDK). It should be possible to achieve at least 900 Mbps
with Intel DPDK and the I350 on a 1 Gbps link.

Could someone help me out with this?

Thanks and regards

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] Intel I350 fails to work with DPDK
  2014-05-28  9:41 [dpdk-dev] Intel I350 fails to work with DPDK sabu kurian
@ 2014-05-28 10:46 ` Richardson, Bruce
  2014-05-28 10:54   ` sabu kurian
  0 siblings, 1 reply; 6+ messages in thread
From: Richardson, Bruce @ 2014-05-28 10:46 UTC (permalink / raw)
  To: sabu kurian, dev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of sabu kurian
> Sent: Wednesday, May 28, 2014 10:42 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> 
> I have asked a similar question before, but no one replied.
> 
> I'm crafting my own packets in mbufs (74-byte packets) and sending them
> using:
> 
> ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> 
> When burst_size is 1, it does work, in the sense that the NIC keeps
> sending packets, but only at a little over 50 percent of the link rate:
> on a 1000 Mbps link, the observed transmit rate of the NIC is 580 Mbps
> (using Intel DPDK). It should be possible to achieve at least 900 Mbps
> with Intel DPDK and the I350 on a 1 Gbps link.
> 
> Could someone help me out with this?
> 
> Thanks and regards

Sending out a single packet at a time is going to have a very high overhead, as each call to tx_burst involves making PCI transactions (MMIO writes to the hardware ring pointer). To reduce this penalty you should look to send out the packets in bursts, thereby saving PCI bandwidth and splitting the cost of each MMIO write over multiple packets.
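
For illustration, here is a minimal sketch of a burst-oriented transmit helper; the function name and the variable names are placeholders, not from the original code, and the unsent mbufs are freed because they remain owned by the application:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Transmit a prepared burst on TX queue 0 of 'port_id' and free whatever
 * the driver did not accept; returns the number of packets actually sent. */
static uint16_t
send_burst(uint8_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t nb_sent = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);
	uint16_t i;

	/* mbufs not taken by the driver stay with the application */
	for (i = nb_sent; i < nb_pkts; i++)
		rte_pktmbuf_free(pkts[i]);

	return nb_sent;
}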

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] Intel I350 fails to work with DPDK
  2014-05-28 10:46 ` Richardson, Bruce
@ 2014-05-28 10:54   ` sabu kurian
  2014-05-28 11:18     ` Richardson, Bruce
  0 siblings, 1 reply; 6+ messages in thread
From: sabu kurian @ 2014-05-28 10:54 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev

Hi Bruce,

Thanks for the reply.

I have tried that before as well. A burst size of 64 or 128 simply fails:
the card sends out a few packets (around 400 packets of 74 bytes each)
and then freezes. For my application, I'm trying to generate the highest
traffic rate possible with the link speed and the NIC.



On Wed, May 28, 2014 at 4:16 PM, Richardson, Bruce <
bruce.richardson@intel.com> wrote:

> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of sabu kurian
> > Sent: Wednesday, May 28, 2014 10:42 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > I have asked a similar question before, but no one replied.
> >
> > I'm crafting my own packets in mbufs (74-byte packets) and sending them
> > using:
> >
> > ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> >
> > When burst_size is 1, it does work, in the sense that the NIC keeps
> > sending packets, but only at a little over 50 percent of the link rate:
> > on a 1000 Mbps link, the observed transmit rate of the NIC is 580 Mbps
> > (using Intel DPDK). It should be possible to achieve at least 900 Mbps
> > with Intel DPDK and the I350 on a 1 Gbps link.
> >
> > Could someone help me out with this?
> >
> > Thanks and regards
>
> Sending out a single packet at a time is going to have a very high
> overhead, as each call to tx_burst involves making PCI transactions (MMIO
> writes to the hardware ring pointer). To reduce this penalty you should
> look to send out the packets in bursts, thereby saving PCI bandwidth and
> splitting the cost of each MMIO write over multiple packets.
>
> Regards,
> /Bruce
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] Intel I350 fails to work with DPDK
  2014-05-28 10:54   ` sabu kurian
@ 2014-05-28 11:18     ` Richardson, Bruce
  2014-05-28 11:39       ` sabu kurian
  0 siblings, 1 reply; 6+ messages in thread
From: Richardson, Bruce @ 2014-05-28 11:18 UTC (permalink / raw)
  To: sabu kurian; +Cc: dev


> From: sabu kurian [mailto:sabu2kurian@gmail.com] 
> Sent: Wednesday, May 28, 2014 11:54 AM
> To: Richardson, Bruce
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
>
> Hi Bruce,
> Thanks for the reply.
> I have tried that before as well. A burst size of 64 or 128 simply fails: the card sends out a few packets
> (around 400 packets of 74 bytes each) and then freezes. For my application, I'm trying to generate the highest
> traffic rate possible with the link speed and the NIC.

Bursts of 64 and 128 are rather large; can you perhaps try bursts of 16 and 32 and see what the result is? The drivers are generally tuned for a maximum burst size of about 32 packets.
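
If it helps, here is a rough, purely illustrative sketch of retrying a partial send rather than dropping the leftover packets; the names are placeholders, and note that it will busy-wait if the hardware stops draining the ring:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Keep calling rte_eth_tx_burst() until the whole burst has been queued. */
static void
send_all(uint8_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = 0;

	while (sent < nb_pkts)
		sent += rte_eth_tx_burst(port_id, 0, pkts + sent, nb_pkts - sent);
}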


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] Intel I350 fails to work with DPDK
  2014-05-28 11:18     ` Richardson, Bruce
@ 2014-05-28 11:39       ` sabu kurian
  0 siblings, 0 replies; 6+ messages in thread
From: sabu kurian @ 2014-05-28 11:39 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev

Hi Bruce,

I changed the burst size to 16. The code crafts 54-byte TCP packets. It
sends a few packets and then hits a segmentation fault.

Below is the portion of the code that sends the packets:

ret = rte_eth_tx_burst(1, 0, m_pool, burst_size);

                if (ret < burst_size)
                {
                    /* free the mbufs the driver did not accept */
                    for (i = (int)ret; i < (int)burst_size; i++)
                    {
                        rte_pktmbuf_free(m_pool[i]);
                        printf("\n Packet dropped %d", i);
                    }
                }
                else
                {
                    lcore_stats[lcore_id].tx += (uint64_t)burst_size;
                }

The above code runs inside an infinite for loop.
m_pool is an array (size 16) of mbufs allocated using rte_pktmbuf_alloc.
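
For reference, here is a rough sketch of allocating a fresh set of mbufs for every burst from an existing mbuf mempool (pktmbuf_pool, BURST, PKT_LEN and build_burst are placeholder names, and the header contents are left as a stub); mbufs handed to rte_eth_tx_burst() are freed by the driver once transmitted, so they should not be reused afterwards:

#include <string.h>
#include <rte_mbuf.h>

#define BURST 16
#define PKT_LEN 54	/* illustrative 54-byte frame */

/* Fill 'pkts' with freshly allocated, PKT_LEN-sized mbufs; returns the
 * number of mbufs that are ready to be passed to rte_eth_tx_burst(). */
static uint16_t
build_burst(struct rte_mempool *pktmbuf_pool, struct rte_mbuf **pkts)
{
	uint16_t n;

	for (n = 0; n < BURST; n++) {
		struct rte_mbuf *m = rte_pktmbuf_alloc(pktmbuf_pool);
		if (m == NULL)
			break;
		char *data = rte_pktmbuf_append(m, PKT_LEN);
		if (data == NULL) {
			rte_pktmbuf_free(m);
			break;
		}
		memset(data, 0, PKT_LEN);	/* stub: fill Ethernet/IP/TCP headers here */
		pkts[n] = m;
	}
	return n;
}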

I'm trying to achieve the maximum transfer rate. Is there any other way to
do this with Intel DPDK, or am I missing something?
The code works perfectly inside a virtual machine (VMware) with emulated
NICs, but as expected the host kernel drops 99% of the packets.

I'm using an Intel® Core™ i7-3770 CPU @ 3.40 GHz.



On Wed, May 28, 2014 at 4:48 PM, Richardson, Bruce <
bruce.richardson@intel.com> wrote:

>
> > From: sabu kurian [mailto:sabu2kurian@gmail.com]
> > Sent: Wednesday, May 28, 2014 11:54 AM
> > To: Richardson, Bruce
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > Hi Bruce,
> > Thanks for the reply.
> > I have tried that before as well. A burst size of 64 or 128 simply fails:
> > the card sends out a few packets (around 400 packets of 74 bytes each)
> > and then freezes. For my application, I'm trying to generate the highest
> > traffic rate possible with the link speed and the NIC.
>
> Bursts of 64 and 128 are rather large; can you perhaps try bursts of
> 16 and 32 and see what the result is? The drivers are generally tuned for a
> maximum burst size of about 32 packets.
>
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] Intel I350 fails to work with DPDK
@ 2015-07-26  6:42 he peng
  0 siblings, 0 replies; 6+ messages in thread
From: he peng @ 2015-07-26  6:42 UTC (permalink / raw)
  To: dev, sabu2kurian, bruce.richardson

Hi, Sabu and Bruce:
     I saw your post on the mailing list about the I350 failing to send packets; however, it was posted about one year ago.

     We have now encountered the same issue.
     We are building a forwarding device that forwards packets between two I350 ports, and we observe that the program
transmits a few hundred packets and then the I350 seems to freeze: it fails to send any further packets. Sometimes one port, and sometimes both ports, fail
to send any packets.

     After some code investigation, we found that the program fails to send packets because one packet descriptor's DD bit is not set by the hardware DMA, so the driver thinks that the TX ring is full and drops all the packets. Below is the code (eth_igb_xmit_pkts in igb_rxtx.c) where rte_eth_tx_burst returns:


if (!(txr[tx_end].wb.status & E1000_TXD_STAT_DD)) {
	if (nb_tx == 0)
		return (0);
	goto end_of_tx;
}
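
As a side note, a minimal, purely illustrative sketch of polling the port's TX counters with rte_eth_stats_get() can help confirm from the application side that the hardware has stopped draining the ring (the function name below is a placeholder):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* If opackets stops increasing while the application keeps queueing
 * packets, the TX ring is no longer being drained by the hardware. */
static void
dump_tx_stats(uint8_t port_id)
{
	struct rte_eth_stats stats;

	rte_eth_stats_get(port_id, &stats);
	printf("port %u: opackets=%" PRIu64 " oerrors=%" PRIu64 "\n",
	       port_id, stats.opackets, stats.oerrors);
}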


We have checked the corresponding sw_ring[tx_end]->mbuf; the packet content seems fine, it is a normal 64-byte packet. Our code is quite simple, just adding/removing tunnel tags in the packets; the total length of the packet tags is 28 bytes. Maybe there is some alignment requirement on the memory address where the packet content begins? I do not know. Below is the output of l2fwd.

/home/dpdk-1.8.0/examples/l2fwd/build/l2fwd -c 0x6 -n 2 -- -p 0x6

Port statistics ====================================
Statistics for port 1 ------------------------------
Packets sent:              13585277499
Packets received:           6792638878
Packets dropped:                     0
Statistics for port 2 ------------------------------
Packets sent:                      649
Packets received:          13585277549
Packets dropped:            6792638229
Aggregate statistics ===============================
Total packets sent:        13585278180
Total packets received:    20377916457
Total packets dropped:      6792638229
====================================================

     After the card freezes, we run l2fwd and find that it hits the same issue: the program forwards only around 600 packets, then begins to drop all other packets. We now suspect this is a problem with the network card itself, but we are not sure whether the hardware has simply been left in a bad state after so many rounds of restarting the programs and testing.

     Any help is appreciated! Thanks.

     

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2015-07-26  6:42 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-28  9:41 [dpdk-dev] Intel I350 fails to work with DPDK sabu kurian
2014-05-28 10:46 ` Richardson, Bruce
2014-05-28 10:54   ` sabu kurian
2014-05-28 11:18     ` Richardson, Bruce
2014-05-28 11:39       ` sabu kurian
2015-07-26  6:42 he peng
