DPDK patches and discussions
* [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
@ 2017-09-25 17:19 Damien Clabaut
  2017-09-26  0:46 ` Yongseok Koh
  2017-09-26  1:46 ` Wiles, Keith
  0 siblings, 2 replies; 8+ messages in thread
From: Damien Clabaut @ 2017-09-25 17:19 UTC (permalink / raw)
  To: dev

Hello DPDK devs,

I am sending this message here as I did not find a bugtracker on the 
website.

If this is the wrong place, I apologize and kindly ask you to redirect me
to the proper one.

Thank you.

Description of the issue:

I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.

The packet in question is generated using this Scapy command:

pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
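
For reference, this is how the single-packet pcap can be written from the
same Scapy session (a minimal sketch, assuming Scapy's standard wrpcap()
helper and the pcap path used below):

from scapy.all import Ether, Dot1Q, IP, UDP, Raw, RandBin, wrpcap

pkt = (Ether(src="ec:0d:9a:37:d1:ab", dst="7c:fe:90:31:0d:52") /
       Dot1Q(vlan=2) /
       IP(dst="192.168.0.254") /
       UDP(sport=1020, dport=1021) /
       Raw(RandBin(size=8500)))

# write a pcap containing exactly this one packet
wrpcap("pcap/8500Bpp.pcap", pkt)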

The pcap is then replayed in pktgen-dpdk:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap

When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the
switch towards which I generate traffic shows strange behaviour:

#sh int et29/1 | i rate
   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec

A capture of this traffic (I used a monitor session to redirect all to a 
different port, connected to a machine on which I ran tcpdump) gives me 
this:

19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500

The issue cannot be reproduced if any of the following conditions is met:

- Set the size in the Raw(RandBin()) to a value lower than 1500

- Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines
are identical in terms of software).

Is this a known problem?

I remain available for any question you may have.

Regards,

-- 
Damien Clabaut
R&D vRouter
ovh.qc.ca


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-25 17:19 [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4 Damien Clabaut
@ 2017-09-26  0:46 ` Yongseok Koh
  2017-09-26 12:43   ` Damien Clabaut
  2017-09-26  1:46 ` Wiles, Keith
  1 sibling, 1 reply; 8+ messages in thread
From: Yongseok Koh @ 2017-09-26  0:46 UTC (permalink / raw)
  To: Damien Clabaut; +Cc: dev

Hi, Damien

Can you please let me know the versions of your SW - pktgen-dpdk, DPDK and MLNX_OFED?
Also the firmware version, if available.

Thanks,
Yongseok

> On Sep 25, 2017, at 10:19 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
> 
> Hello DPDK devs,
> 
> I am sending this message here as I did not find a bugtracker on the website.
> 
> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
> 
> Thank you.
> 
> Description of the issue:
> 
> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
> 
> The packet in question is generated using this Scapy command:
> 
> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
> 
> The pcap is then replayed in pktgen-dpdk:
> 
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
> 
> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
> 
> #sh int et29/1 | i rate
>   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
> 
> A capture of this traffic (I used a monitor session to redirect all to a different port, connected to a machine on which I ran tcpdump) gives me this:
> 
> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 
> The issue cannot be reproduced if any of the following conditions is met:
> 
> - Set the size in the Raw(RandBin()) to a value lower than 1500
> 
> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
> 
> Is this a known problem?
> 
> I remain available for any question you may have.
> 
> Regards,
> 
> -- 
> Damien Clabaut
> R&D vRouter
> ovh.qc.ca
> 


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-25 17:19 [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4 Damien Clabaut
  2017-09-26  0:46 ` Yongseok Koh
@ 2017-09-26  1:46 ` Wiles, Keith
  2017-09-26 13:09   ` Damien Clabaut
  1 sibling, 1 reply; 8+ messages in thread
From: Wiles, Keith @ 2017-09-26  1:46 UTC (permalink / raw)
  To: Damien Clabaut; +Cc: dev


> On Sep 25, 2017, at 6:19 PM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
> 
> Hello DPDK devs,
> 
> I am sending this message here as I did not find a bugtracker on the website.
> 
> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
> 
> Thank you.
> 
> Description of the issue:
> 
> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
> 
> The packet in question is generated using this Scapy command:
> 
> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
> 
> The pcap is then replayed in pktgen-dpdk:
> 
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap

This could be the issue, as I cannot set up your system (no cards). The pktgen command line maps cores 1-7 to TX/RX for port 0, which tells pktgen to replay the pcap from each of those cores, so the packet is sent once per core. If you set the TX/RX core mapping to 1.0, you should see only one copy. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
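
For example (a sketch, assuming the same binary and pcap as above), the
single-core mapping would be:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap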

> 
> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
> 
> #sh int et29/1 | i rate
>   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
> 
> A capture of this traffic (I used a monitor session to redirect all to a different port, connected to a machine on which I ran tcpdump) gives me this:
> 
> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
> 
> The issue cannot be reproduced if any of the following conditions is met:
> 
> - Set the size in the Raw(RandBin()) to a value lower than 1500
> 
> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
> 
> Is this a known problem?
> 
> I remain available for any question you may have.
> 
> Regards,
> 
> -- 
> Damien Clabaut
> R&D vRouter
> ovh.qc.ca
> 

Regards,
Keith


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-26  0:46 ` Yongseok Koh
@ 2017-09-26 12:43   ` Damien Clabaut
  0 siblings, 0 replies; 8+ messages in thread
From: Damien Clabaut @ 2017-09-26 12:43 UTC (permalink / raw)
  To: dev

Hello and thank you for your answer,

We use the following versions:

MLNX DPDK version 16.11, revision 3.0

pktgen-dpdk version 3.3.8

mlnx_ofed version 4.1-1.0.2.0

fw revision: 12.18.1000

Regards,


On 2017-09-25 08:46 PM, Yongseok Koh wrote:
> Hi, Damien
>
> Can you please let me know the versions of your SW - pktgen-dpdk, DPDK and MLNX_OFED?
> Also the firmware version, if available.
>
> Thanks,
> Yongseok
>
>> On Sep 25, 2017, at 10:19 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
>>
>> Hello DPDK devs,
>>
>> I am sending this message here as I did not find a bugtracker on the website.
>>
>> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
>>
>> Thank you.
>>
>> Description of the issue:
>>
>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>
>> The packet in question is generated using this Scapy command:
>>
>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>
>> The pcap is then replayed in pktgen-dpdk:
>>
>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
>>
>> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
>>
>> #sh int et29/1 | i rate
>>    5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>
>> A capture of this traffic (I used a monitor session to redirect all to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>
>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>
>> The issue cannot be reproduced if any of the following conditions is met:
>>
>> - Set the size in the Raw(RandBin()) to a value lower than 1500
>>
>> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
>>
>> Is this a known problem?
>>
>> I remain available for any question you may have.
>>
>> Regards,
>>
>> -- 
>> Damien Clabaut
>> R&D vRouter
>> ovh.qc.ca
>>

-- 
Damien Clabaut
R&D vRouter
ovh.qc.ca


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-26  1:46 ` Wiles, Keith
@ 2017-09-26 13:09   ` Damien Clabaut
  2017-09-28 20:44     ` Wiles, Keith
  0 siblings, 1 reply; 8+ messages in thread
From: Damien Clabaut @ 2017-09-26 13:09 UTC (permalink / raw)
  To: dev

Hello Keith and thank you for your answer,

The goal is indeed to generate as much traffic per machine as possible
(we use pktgen-dpdk to benchmark datacenter routers before putting them
into production).

For this we use all available CPU power to send packets.

Following your suggestion, I modified my command to:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap

The issue is still reproduced, though with slightly lower performance
(reaching line rate at 8500 Bpp does not require much processing power):

#sh int et 29/1 | i rate
   5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
   5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec

Regards,

PS: Sorry for replying to you directly; re-sending this message to the ML.


On 2017-09-25 09:46 PM, Wiles, Keith wrote:
>> On Sep 25, 2017, at 6:19 PM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
>>
>> Hello DPDK devs,
>>
>> I am sending this message here as I did not find a bugtracker on the website.
>>
>> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
>>
>> Thank you.
>>
>> Description of the issue:
>>
>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>
>> The packet in question is generated using this Scapy command:
>>
>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>
>> The pcap is then replayed in pktgen-dpdk:
>>
>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
> This could be the issue, as I cannot set up your system (no cards). The pktgen command line maps cores 1-7 to TX/RX for port 0, which tells pktgen to replay the pcap from each of those cores, so the packet is sent once per core. If you set the TX/RX core mapping to 1.0, you should see only one copy. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
>
>> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
>>
>> #sh int et29/1 | i rate
>>    5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>
>> A capture of this traffic (I used a monitor session to redirect all to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>
>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>
>> The issue cannot be reproduced if any of the following conditions is met:
>>
>> - Set the size in the Raw(RandBin()) to a value lower than 1500
>>
>> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
>>
>> Is this a known problem?
>>
>> I remain available for any question you may have.
>>
>> Regards,
>>
>> -- 
>> Damien Clabaut
>> R&D vRouter
>> ovh.qc.ca
>>
> Regards,
> Keith
>

-- 
Damien Clabaut
R&D vRouter
ovh.qc.ca


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-26 13:09   ` Damien Clabaut
@ 2017-09-28 20:44     ` Wiles, Keith
  2017-10-11 18:59       ` Yongseok Koh
  0 siblings, 1 reply; 8+ messages in thread
From: Wiles, Keith @ 2017-09-28 20:44 UTC (permalink / raw)
  To: Damien Clabaut; +Cc: dev


> On Sep 26, 2017, at 8:09 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
> 
> Hello Keith and thank you for your answer,
> 
> The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production).
> 
> For this we use all available CPU power to send packets.
> 
> Following your suggestion, I modified my command to:
> 
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap


I just noticed you are sending 8500-byte frames: you have to modify Pktgen to increase the size of the mbufs in the mempool. I only configure the mbufs as 1518-byte buffers (really 2048 bytes, but I only deal with a 1518-byte max frame size). The size can be changed, but I am not next to a machine right now.
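
As a rough sketch of the arithmetic (assuming DPDK's default mbuf constants,
not pktgen's actual configuration): the frame here is 14 (Ether) + 4 (Dot1Q)
+ 20 (IP) + 8 (UDP) + 8500 (payload) = 8546 bytes, while a default pool gives
each mbuf only RTE_MBUF_DEFAULT_DATAROOM (2048) bytes of data room after the
RTE_PKTMBUF_HEADROOM (128) bytes of headroom, so holding the frame in a
single segment would need a pool created with a data_room_size of at least
8546 + 128 = 8674 bytes (e.g. via rte_pktmbuf_pool_create()).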

> 
> The issue is still reproduced, though with slightly lower performance (reaching line rate at 8500 Bpp does not require much processing power):
> 
> #sh int et 29/1 | i rate
>   5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
>   5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec
> 
> Regards,
> 
> PS: Sorry for replying to you directly; re-sending this message to the ML.
> 
> 
> On 2017-09-25 09:46 PM, Wiles, Keith wrote:
>>> On Sep 25, 2017, at 6:19 PM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
>>> 
>>> Hello DPDK devs,
>>> 
>>> I am sending this message here as I did not find a bugtracker on the website.
>>> 
>>> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
>>> 
>>> Thank you.
>>> 
>>> Description of the issue:
>>> 
>>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>> 
>>> The packet in question is generated using this Scapy command:
>>> 
>>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>> 
>>> The pcap is then replayed in pktgen-dpdk:
>>> 
>>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
>> This could be the issue, as I cannot set up your system (no cards). The pktgen command line maps cores 1-7 to TX/RX for port 0, which tells pktgen to replay the pcap from each of those cores, so the packet is sent once per core. If you set the TX/RX core mapping to 1.0, you should see only one copy. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
>> 
>>> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
>>> 
>>> #sh int et29/1 | i rate
>>>   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>> 
>>> A capture of this traffic (I used a monitor session to redirect all to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>> 
>>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 
>>> The issue cannot be reproduced if any of the following conditions is met:
>>> 
>>> - Set the size in the Raw(RandBin()) to a value lower than 1500
>>> 
>>> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
>>> 
>>> Is this a known problem?
>>> 
>>> I remain available for any question you may have.
>>> 
>>> Regards,
>>> 
>>> -- 
>>> Damien Clabaut
>>> R&D vRouter
>>> ovh.qc.ca
>>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> Damien Clabaut
> R&D vRouter
> ovh.qc.ca
> 

Regards,
Keith


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-09-28 20:44     ` Wiles, Keith
@ 2017-10-11 18:59       ` Yongseok Koh
  2017-10-12 12:34         ` Damien Clabaut
  0 siblings, 1 reply; 8+ messages in thread
From: Yongseok Koh @ 2017-10-11 18:59 UTC (permalink / raw)
  To: Damien Clabaut; +Cc: Wiles, Keith, dev

On Thu, Sep 28, 2017 at 08:44:26PM +0000, Wiles, Keith wrote:
> 
> > On Sep 26, 2017, at 8:09 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
> > 
> > Hello Keith and thank you for your answer,
> > 
> > The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production).
> > 
> > For this we use all available CPU power to send packets.
> > 
> > Following your suggestion, I modified my command to:
> > 
> > ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap
> 
> 
> I just noticed you are sending 8500-byte frames: you have to modify Pktgen
> to increase the size of the mbufs in the mempool. I only configure the mbufs
> as 1518-byte buffers (really 2048 bytes, but I only deal with a 1518-byte
> max frame size). The size can be changed, but I am not next to a machine
> right now.

Hi Damien,

Did you manage to resolve this issue? Keith mentioned pktgen doesn't support
jumbo frames without modifying the code. Do you still have an issue with the
Mellanox NIC and its PMDs? Please let me know.

Thanks
Yongseok


* Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
  2017-10-11 18:59       ` Yongseok Koh
@ 2017-10-12 12:34         ` Damien Clabaut
  0 siblings, 0 replies; 8+ messages in thread
From: Damien Clabaut @ 2017-10-12 12:34 UTC (permalink / raw)
  Cc: dev

Hello,

I opened a ticket with Mellanox in parallel.

We are trying to figure out why it doesn't work on MCX4 but works on MCX3,
even though MCX3 is not officially supported and neither are jumbo frames.

In case you want to check, the case ID is 00392710.

Regards,


On 2017-10-11 02:59 PM, Yongseok Koh wrote:
> On Thu, Sep 28, 2017 at 08:44:26PM +0000, Wiles, Keith wrote:
>>> On Sep 26, 2017, at 8:09 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
>>>
>>> Hello Keith and thank you for your answer,
>>>
>>> The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production).
>>>
>>> For this we use all available CPU power to send packets.
>>>
>>> Following your suggestion, I modified my command to:
>>>
>>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap
>>
>> I just noticed you are sending 8500-byte frames: you have to modify Pktgen
>> to increase the size of the mbufs in the mempool. I only configure the
>> mbufs as 1518-byte buffers (really 2048 bytes, but I only deal with a
>> 1518-byte max frame size). The size can be changed, but I am not next to a
>> machine right now.
> Hi Damien,
>
> Did you manage to resolve this issue? Keith mentioned pktgen doesn't support
> jumbo frames without modifying the code. Do you still have an issue with the
> Mellanox NIC and its PMDs? Please let me know.
>
> Thanks
> Yongseok

-- 
Damien Clabaut
R&D vRouter
ovh.qc.ca


Thread overview: 8+ messages
2017-09-25 17:19 [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4 Damien Clabaut
2017-09-26  0:46 ` Yongseok Koh
2017-09-26 12:43   ` Damien Clabaut
2017-09-26  1:46 ` Wiles, Keith
2017-09-26 13:09   ` Damien Clabaut
2017-09-28 20:44     ` Wiles, Keith
2017-10-11 18:59       ` Yongseok Koh
2017-10-12 12:34         ` Damien Clabaut
