DPDK patches and discussions
* [dpdk-dev] pktgen rx errors with intel 82599
@ 2015-03-13 20:49 Matt Smith
  2015-03-14 13:47 ` Wiles, Keith
  0 siblings, 1 reply; 4+ messages in thread
From: Matt Smith @ 2015-03-13 20:49 UTC (permalink / raw)
  To: dev


Hi,

I’ve been using DPDK pktgen 2.8.0 (built against DPDK 1.8.0 libraries) to send traffic on a server using an Intel 82599 (X520-2). Traffic gets sent out port 1 through another server, which also has an Intel 82599 installed, and is forwarded back into port 0. When I send using a single source and destination IP address, this works fine and packets arrive on port 0 at close to the maximum line rate.

If I change port 1 to range mode and send traffic from a range of source IP addresses to a single destination IP address, for a second or two the display indicates that some packets were received on port 0, but then the rate of received packets on the display goes to 0 and all incoming packets on port 0 are registered as rx errors.

The server that traffic is being forwarded through is running the ip_pipeline example app. I ruled this out as the source of the problem by sending directly from port 1 to port 0 of the pktgen box. The issue still occurs when the traffic is not being forwarded through the other box. Since ip_pipeline is able to receive the packets and forward them without getting rx errors and it’s running with the same model of NIC as pktgen is using, I checked to see if there were any differences in initialization of the rx port between ip_pipeline and pktgen. I noticed that pktgen has a setting that ip_pipeline doesn't:

const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,

If I comment out the .mq_mode setting and rebuild pktgen, the problem no longer occurs and I now receive packets on port 0 at near line rate when testing from a range of source addresses.
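For illustration only, the difference boils down to the mq_mode field of struct rte_eth_conf. The sketch below uses the DPDK 1.8 rte_ethdev API; the rss_conf values are assumptions for the RSS case, not pktgen's actual settings:

/* Sketch (not pktgen source): RSS enabled vs. disabled.  Only the
 * fields shown are relevant; the rest of the port configuration is
 * assumed unchanged. */
static const struct rte_eth_conf port_conf_rss = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,       /* hash packets across multiple RX queues */
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,            /* assumed: driver default hash key */
            .rss_hf  = ETH_RSS_IP,      /* assumed: hash on IP addresses */
        },
    },
};

static const struct rte_eth_conf port_conf_single_queue = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_NONE,      /* single RX queue, no RSS spreading */
    },
};

With ETH_MQ_RX_RSS, the varying source addresses in range mode hash to different RX queues, so a queue that is not being drained can fill up; with ETH_MQ_RX_NONE everything lands on queue 0, which would fit the behavior described above.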

I recall reading in the past that if a receive queue fills up on an 82599, receiving stalls for all of the other queues and no more packets can be received. Could that be happening with pktgen? Is there any debugging I can do to help track it down?
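One simple check (a sketch using the DPDK 1.8 rte_eth_stats_get() API, not something pktgen already provides) would be to poll the basic port counters while the traffic is running and watch whether incoming frames land in ipackets or in the RX error counters:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: dump basic RX counters for one port.  Assumes the port has
 * already been configured and started (e.g. by pktgen itself). */
static void
dump_rx_counters(uint8_t port_id)
{
    struct rte_eth_stats stats;

    rte_eth_stats_get(port_id, &stats);
    printf("port %u: rx=%" PRIu64 " rx_errors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           (unsigned)port_id,
           stats.ipackets,    /* frames delivered to the application */
           stats.ierrors,     /* frames the device counted as RX errors/missed */
           stats.rx_nombuf);  /* mbuf allocation failures on RX */
}

If ierrors keeps climbing while ipackets stays flat once the stall begins, that would be consistent with a full receive queue blocking further reception.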

The command line I have been launching pktgen with is: 

pktgen -c f -n 3 -m 512 -- -p 0x3 -P -m 1.0,2.1

Thanks,

-Matt Smith


* Re: [dpdk-dev] pktgen rx errors with intel 82599
  2015-03-13 20:49 [dpdk-dev] pktgen rx errors with intel 82599 Matt Smith
@ 2015-03-14 13:47 ` Wiles, Keith
  2015-03-14 18:33   ` Wiles, Keith
  0 siblings, 1 reply; 4+ messages in thread
From: Wiles, Keith @ 2015-03-14 13:47 UTC (permalink / raw)
  To: Matt Smith, dev

Hi Matt

On 3/13/15, 3:49 PM, "Matt Smith" <mgsmith@netgate.com> wrote:

>
>Hi,
>
>I've been using DPDK pktgen 2.8.0 (built against DPDK 1.8.0 libraries) to
>send traffic on a server using an Intel 82599 (X520-2). Traffic gets sent
>out port 1 through another server, which also has an Intel 82599 installed, and
>is forwarded back into port 0. When I send using a single source and
>destination IP address, this works fine and packets arrive on port 0 at
>close to the maximum line rate.
>
>If I change port 1 to range mode and send traffic from a range of source
>IP addresses to a single destination IP address, for a second or two the
>display indicates that some packets were received on port 0 but then the
>rate of received packets on the display goes to 0 and all incoming
>packets on port 0 are registered as rx errors.
>
>The server that traffic is being forwarded through is running the
>ip_pipeline example app. I ruled this out as the source of the problem by
>sending directly from port 1 to port 0 of the pktgen box. The issue still
>occurs when the traffic is not being forwarded through the other box.
>Since ip_pipeline is able to receive the packets and forward them without
>getting rx errors and it's running with the same model of NIC as pktgen
>is using, I checked to see if there were any differences in
>initialization of the rx port between ip_pipeline and pktgen. I noticed
>that pktgen has a setting that ip_pipeline doesn't:
>
>const struct rte_eth_conf port_conf = {
>    .rxmode = {
>    .mq_mode = ETH_MQ_RX_RSS,
>
>If I comment out the .mq_mode setting and rebuild pktgen, the problem no
>longer occurs and I now receive packets on port 0 at near line rate when
>testing from a range of source addresses.
>
>I recall reading in the past that if a receive queue fills up on an 82599,
>receiving stalls for all of the other queues and no more packets
>can be received. Could that be happening with pktgen? Is there any
>debugging I can do to help track it down?

I have seen this problem on some platforms a few times and it looks like
you may have found a possible solution to the problem. I will have to look
into the change and see if this is the problem, but it does seem to
suggest this may be the issue. When the port gets into this state, the port
receives a number of mbufs matching the number of descriptors and the rest
are 'missed' frames at the wire. The rx errors counter is the number of
missed frames.

Thanks for the input
++Keith
>
>The command line I have been launching pktgen with is:
>
>pktgen -c f -n 3 -m 512 -- -p 0x3 -P -m 1.0,2.1
>
>Thanks,
>
>-Matt Smith
>


* Re: [dpdk-dev] pktgen rx errors with intel 82599
  2015-03-14 13:47 ` Wiles, Keith
@ 2015-03-14 18:33   ` Wiles, Keith
  2015-03-23 15:51     ` Matt Smith
  0 siblings, 1 reply; 4+ messages in thread
From: Wiles, Keith @ 2015-03-14 18:33 UTC (permalink / raw)
  To: Matt Smith, dev

Hi Matt,

On 3/14/15, 8:47 AM, "Wiles, Keith" <keith.wiles@intel.com> wrote:

>Hi Matt
>
>On 3/13/15, 3:49 PM, "Matt Smith" <mgsmith@netgate.com> wrote:
>
>>
>>Hi,
>>
>>I've been using DPDK pktgen 2.8.0 (built against DPDK 1.8.0 libraries) to
>>send traffic on a server using an Intel 82599 (X520-2). Traffic gets sent
>>out port 1 through another server, which also has an Intel 82599 installed, and
>>is forwarded back into port 0. When I send using a single source and
>>destination IP address, this works fine and packets arrive on port 0 at
>>close to the maximum line rate.
>>
>>If I change port 1 to range mode and send traffic from a range of source
>>IP addresses to a single destination IP address, for a second or two the
>>display indicates that some packets were received on port 0 but then the
>>rate of received packets on the display goes to 0 and all incoming
>>packets on port 0 are registered as rx errors.
>>
>>The server that traffic is being forwarded through is running the
>>ip_pipeline example app. I ruled this out as the source of the problem by
>>sending directly from port 1 to port 0 of the pktgen box. The issue still
>>occurs when the traffic is not being forwarded through the other box.
>>Since ip_pipeline is able to receive the packets and forward them without
>>getting rx errors and it's running with the same model of NIC as pktgen
>>is using, I checked to see if there were any differences in
>>initialization of the rx port between ip_pipeline and pktgen. I noticed
>>that pktgen has a setting that ip_pipeline doesn't:
>>
>>const struct rte_eth_conf port_conf = {
>>    .rxmode = {
>>    .mq_mode = ETH_MQ_RX_RSS,
>>
>>If I comment out the .mq_mode setting and rebuild pktgen, the problem no
>>longer occurs and I now receive packets on port 0 at near line rate when
>>testing from a range of source addresses.
>>
>>I recall reading in the past that if a receive queue fills up on an 82599,
>>receiving stalls for all of the other queues and no more packets
>>can be received. Could that be happening with pktgen? Is there any
>>debugging I can do to help track it down?
>
>I have seen this problem on some platforms a few times and it looks like
>you may have found a possible solution to the problem. I will have to look
>into the change and see if this is the problem, but it does seem to
>suggest this may be the issue. When the port gets into this state, the port
>receives a number of mbufs matching the number of descriptors and the rest
>are 'missed' frames at the wire. The rx errors counter is the number of
>missed frames.
>
>Thanks for the input
>++Keith

I added code to hopefully set up the correct RX/TX conf values. The HEAD of
the Pktgen-DPDK v2.8.4 should build and work with DPDK 1.8.0 or 2.0.0-rc1.
I did still see some RX errors and reduced bit rate, but the traffic does
not stop on my machine. Please give version 2.8.4 a try and let me know if
you still see problems.
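For context, the per-queue structures involved look roughly like the sketch below (DPDK 1.8 API); the threshold numbers are typical ixgbe-style values chosen for illustration, not necessarily what was committed for 2.8.4:

/* Illustrative values only -- not necessarily those in Pktgen-DPDK
 * 2.8.4.  These structures are passed to rte_eth_rx_queue_setup() and
 * rte_eth_tx_queue_setup(). */
static const struct rte_eth_rxconf rx_conf = {
    .rx_thresh = {
        .pthresh = 8,           /* prefetch threshold */
        .hthresh = 8,           /* host threshold */
        .wthresh = 0,           /* write-back threshold */
    },
    .rx_free_thresh = 32,       /* refill RX descriptors in batches */
    .rx_drop_en     = 0,
};

static const struct rte_eth_txconf tx_conf = {
    .tx_thresh = {
        .pthresh = 32,
        .hthresh = 0,
        .wthresh = 0,
    },
    .tx_rs_thresh   = 32,       /* request TX status every 32 descriptors */
    .tx_free_thresh = 32,       /* free transmitted mbufs in batches */
};

As of DPDK 1.8 it is also possible to pass NULL for the conf argument of the queue setup calls, in which case the driver's defaults from rte_eth_dev_info_get() are used.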

Regards,
++Keith
>>
>>The command line I have been launching pktgen with is:
>>
>>pktgen -c f -n 3 -m 512 -- -p 0x3 -P -m 1.0,2.1
>>
>>Thanks,
>>
>>-Matt Smith
>>



* Re: [dpdk-dev] pktgen rx errors with intel 82599
  2015-03-14 18:33   ` Wiles, Keith
@ 2015-03-23 15:51     ` Matt Smith
  0 siblings, 0 replies; 4+ messages in thread
From: Matt Smith @ 2015-03-23 15:51 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: dev


> On Mar 14, 2015, at 1:33 PM, Wiles, Keith <keith.wiles@intel.com> wrote:
> 
> Hi Matt,
> 
> On 3/14/15, 8:47 AM, "Wiles, Keith" <keith.wiles@intel.com> wrote:
> 
>> Hi Matt
>> 
>> On 3/13/15, 3:49 PM, "Matt Smith" <mgsmith@netgate.com> wrote:
>> 
>>> 
>>> Hi,
>>> 
>>> I've been using DPDK pktgen 2.8.0 (built against DPDK 1.8.0 libraries) to
>>> send traffic on a server using an Intel 82599 (X520-2). Traffic gets sent
>>> out port 1 through another server, which also has an Intel 82599 installed, and
>>> is forwarded back into port 0. When I send using a single source and
>>> destination IP address, this works fine and packets arrive on port 0 at
>>> close to the maximum line rate.
>>> 
>>> If I change port 1 to range mode and send traffic from a range of source
>>> IP addresses to a single destination IP address, for a second or two the
>>> display indicates that some packets were received on port 0 but then the
>>> rate of received packets on the display goes to 0 and all incoming
>>> packets on port 0 are registered as rx errors.
>>> 
>>> The server that traffic is being forwarded through is running the
>>> ip_pipeline example app. I ruled this out as the source of the problem by
>>> sending directly from port 1 to port 0 of the pktgen box. The issue still
>>> occurs when the traffic is not being forwarded through the other box.
>>> Since ip_pipeline is able to receive the packets and forward them without
>>> getting rx errors and it's running with the same model of NIC as pktgen
>>> is using, I checked to see if there were any differences in
>>> initialization of the rx port between ip_pipeline and pktgen. I noticed
>>> that pktgen has a setting that ip_pipeline doesn't:
>>> 
>>> const struct rte_eth_conf port_conf = {
>>>   .rxmode = {
>>>   .mq_mode = ETH_MQ_RX_RSS,
>>> 
>>> If I comment out the .mq_mode setting and rebuild pktgen, the problem no
>>> longer occurs and I now receive packets on port 0 at near line rate when
>>> testing from a range of source addresses.
>>> 
>>> I recall reading in the past that if a receive queue fills up on an 82599,
>>> receiving stalls for all of the other queues and no more packets
>>> can be received. Could that be happening with pktgen? Is there any
>>> debugging I can do to help track it down?
>> 
>> I have seen this problem on some platforms a few times and it looks like
>> you may have found a possible solution to the problem. I will have to look
>> into the change and see if this is the problem, but it does seem to
>> suggest this may be the issue. When the port gets into this state, the port
>> receives a number of mbufs matching the number of descriptors and the rest
>> are 'missed' frames at the wire. The rx errors counter is the number of
>> missed frames.
>> 
>> Thanks for the input
>> ++Keith
> 
> I added code to hopefully set up the correct RX/TX conf values. The HEAD of
> the Pktgen-DPDK v2.8.4 should build and work with DPDK 1.8.0 or 2.0.0-rc1.
> I did still see some RX errors and reduced bit rate, but the traffic does
> not stop on my machine. Please give version 2.8.4 a try and let me know if
> you still see problems.
> 
> Regards,
> ++Keith

Hi Keith,

Sorry for the delay in responding, I have been out of town.

Thanks for your attention to the problem. I pulled the latest code from git and moved to the pktgen-2.8.4 tag. I had one issue building:

  CC pktgen-port-cfg.o
/root/dpdk/pktgen-dpdk/app/pktgen-port-cfg.c: In function ‘pktgen_config_ports’:
/root/dpdk/pktgen-dpdk/app/pktgen-port-cfg.c:300:11: error: variable ‘k’ set but not used [-Werror=unused-but-set-variable]
  uint64_t k;
           ^
cc1: all warnings being treated as errors
make[2]: *** [pktgen-port-cfg.o] Error 1
make[1]: *** [all] Error 2
make: *** [app] Error 2


I prepended '__attribute__((unused))' to the declaration of k and then I was able to build successfully. I did not see any receive errors running the updated binary. So once I got past the initial build problem, the issue seems to be resolved.
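For anyone else hitting the same -Werror failure, the workaround described above is a one-line change (file and function taken from the error output; the eventual upstream fix may differ):

/* app/pktgen-port-cfg.c, in pktgen_config_ports(): mark the
 * set-but-unused variable so -Werror=unused-but-set-variable does not
 * abort the build. */
__attribute__((unused)) uint64_t k;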

Thanks,
-Matt

