DPDK usage discussions
* [dpdk-users] Behavior of pktgen's "rate" option
@ 2017-03-16 13:33 Julien Castets
  2017-03-17  0:25 ` Wiles, Keith
  0 siblings, 1 reply; 5+ messages in thread
From: Julien Castets @ 2017-03-16 13:33 UTC (permalink / raw)
  To: users

I'm benchmarking my DPDK application with pktgen and I don't understand how the
"rate" parameter works.

Let's run pktgen:

$> pktgen -l '0-11' --master-lcore=0 -w 03:00.0 -- -f /home/jcastets/test.lua -m '[1].0'

(for information, my test.lua is at the bottom of this email)

Here, pktgen generates traffic with one core.


If I set the pktgen "rate" to "20" and the packet size to "64", I would expect
it to generate traffic at 20% of my NIC's maximum speed. In my case, with my
40Gbps NIC, that means 8Gbps of traffic.

And indeed, my expectation holds: with this configuration, pktgen generates
exactly 8Gbps.

However:

* If I run pktgen with 2 cores (-m '[1-2].0' instead of -m '[1].0'), pktgen
  generates 28Gbps.
* If I run pktgen with 1 core, keep the "rate" at "20" but set the pktsize at
  "128", pktgen generates 14Gbps (where I'd expect it to still generate 8Gbps).


Can anyone please clarify how the rate/pktsize/cores relation works?



### test.lua

pktgen.stop("all");
pktgen.reset("all");
pktgen.clear("all");

pktgen.src_mac("all", "start", "3c:fd:fe:a2:2c:88");
pktgen.dst_mac("all", "start", "3c:fd:fe:a2:2b:b0");

pktgen.src_ip("all", "start", "192.168.1.1");
pktgen.src_ip("all", "min", "0.0.0.0");
pktgen.src_ip("all", "inc", "0.0.0.0");
pktgen.src_ip("all", "max", "0.0.0.0");

pktgen.src_port("all", "start", 1234);
pktgen.src_port("all", "min", 1234);
pktgen.src_port("all", "inc", 1);
pktgen.src_port("all", "max", 4096);

pktgen.ip_proto("all", "udp");


pktgen.dst_ip("all", "start", "192.168.100.1");
pktgen.dst_ip("all", "min", "192.168.100.1");
pktgen.dst_ip("all", "inc", "0.0.0.1");
pktgen.dst_ip("all", "max", "192.168.100.99");

pktgen.dst_port("all", "start", 1234);
pktgen.dst_port("all", "min", 0);
pktgen.dst_port("all", "inc", 0);
pktgen.dst_port("all", "max", 0);


pktgen.vlan("all", "off");
pktgen.vlan_id("all", "start", 1024);
pktgen.vlan_id("all", "min", 1);
pktgen.vlan_id("all", "inc", 5);
pktgen.vlan_id("all", "max", 3100);

pktgen.pkt_size("all", "start", 128); -- this is the parameter I'm trying to adjust
pktgen.pkt_size("all", "min", 0);
pktgen.pkt_size("all", "inc", 0);
pktgen.pkt_size("all", "max", 0);

pktgen.set_range("all", "on");

pktgen.set("all", "count", 0); -- count 0 is forever
pktgen.set("all", "rate", 20); -- this is the parameter I'm trying to adjust
pktgen.start("all");

-- 
Julien Castets


* Re: [dpdk-users] Behavior of pktgen's "rate" option
  2017-03-16 13:33 [dpdk-users] Behavior of pktgen's "rate" option Julien Castets
@ 2017-03-17  0:25 ` Wiles, Keith
  2017-03-17  7:50   ` Wiles, Keith
  2017-03-17 12:59   ` Julien Castets
  0 siblings, 2 replies; 5+ messages in thread
From: Wiles, Keith @ 2017-03-17  0:25 UTC (permalink / raw)
  To: Julien Castets; +Cc: users


> On Mar 16, 2017, at 9:33 PM, Julien Castets <castets.j@gmail.com> wrote:
> 
> I'm benchmarking my DPDK application with pktgen and I don't understand how the
> "rate" parameter works.
> 
> Let's run pktgen:
> 
> $> pktgen -l '0-11' --master-lcore=0 -w 03:00.0 -- -f /home/jcastets/test.lua -m '[1].0'
> 
> (for information, my test.lua is at the bottom of this email)
> 
> Here, pktgen generates traffic with one core.
> 
> 
> If I set the pktgen "rate" to "20" and the packet size to "64", I would expect
> it to generate traffic at 20% of my NIC's maximum speed. In my case, with my
> 40Gbps NIC, that means 8Gbps of traffic.
> 
> And indeed, my expectation holds: with this configuration, pktgen generates
> exactly 8Gbps.
> 
> However:
> 
> * If I run pktgen with 2 cores (-m '[1-2].0' instead of -m '[1].0'), pktgen
>  generates 28Gbps.
> * If I run pktgen with 1 core, keep the "rate" at "20" but set the pktsize at
>  "128", pktgen generates 14Gbps (where I'd expect it to still generate 8Gbps).

Be careful in how you set up and change the size of a packet. Changing the size while Pktgen is running can lead to some weird performance: the packets already in flight are not changed to the new size, or they are stuck on the TX done queue. The new TX flush API needs to be added to the PMDs; then, in Pktgen, I can flush all of the buffers back to the mempool and alter the size of the mbufs. You did not state the version of Pktgen you are using. DPDK 16.11 added some support for Pktgen to locate and update all of the mbufs, but I think that is kind of a hack on my part.


The code in Pktgen uses your rate and packet size to determine how many packets to send per interval to obtain your desired rate. I thought the calculation accounted for the number of TX cores, but maybe that is a bug; I would have expected 16Gbps in the two-core case if I did not account for that config. I do not have direct access to the machine to verify the code.

Try stopping the traffic before changing any of the configs. Changing the rate should work while Pktgen is sending, but if the packet size needs to change, the traffic should be halted first. Also, if you want to do some more debugging, try quitting Pktgen between configurations and see if that works better, though having to quit all the time is not how I intended Pktgen to be used.

I will try to have a look this weekend when I get back home.

> 
> 
> Can anyone please clarify how the rate/pktsize/cores relation works?
> 
> 
> 
> ### test.lua
> 
> pktgen.stop("all");
> pktgen.reset("all");
> pktgen.clear("all");
> 
> pktgen.src_mac("all", "start", "3c:fd:fe:a2:2c:88");
> pktgen.dst_mac("all", "start", "3c:fd:fe:a2:2b:b0");
> 
> pktgen.src_ip("all", "start", "192.168.1.1");
> pktgen.src_ip("all", "min", "0.0.0.0");
> pktgen.src_ip("all", "inc", "0.0.0.0");
> pktgen.src_ip("all", "max", "0.0.0.0");
> 
> pktgen.src_port("all", "start", 1234);
> pktgen.src_port("all", "min", 1234);
> pktgen.src_port("all", "inc", 1);
> pktgen.src_port("all", "max", 4096);
> 
> pktgen.ip_proto("all", "udp");
> 
> 
> pktgen.dst_ip("all", "start", "192.168.100.1");
> pktgen.dst_ip("all", "min", "192.168.100.1");
> pktgen.dst_ip("all", "inc", "0.0.0.1");
> pktgen.dst_ip("all", "max", "192.168.100.99");
> 
> pktgen.dst_port("all", "start", 1234);
> pktgen.dst_port("all", "min", 0);
> pktgen.dst_port("all", "inc", 0);
> pktgen.dst_port("all", "max", 0);
> 
> 
> pktgen.vlan("all", "off");
> pktgen.vlan_id("all", "start", 1024);
> pktgen.vlan_id("all", "min", 1);
> pktgen.vlan_id("all", "inc", 5);
> pktgen.vlan_id("all", "max", 3100);
> 
> pktgen.pkt_size("all", "start", 128); -- this is the parameter I'm trying to adjust
> pktgen.pkt_size("all", "min", 0);
> pktgen.pkt_size("all", "inc", 0);
> pktgen.pkt_size("all", "max", 0);
> 
> pktgen.set_range("all", "on");
> 
> pktgen.set("all", "count", 0); -- count 0 is forever
> pktgen.set("all", "rate", 20); -- this is the parameter I'm trying to adjust
> pktgen.start("all");
> 
> -- 
> Julien Castets

Regards,
Keith


* Re: [dpdk-users] Behavior of pktgen's "rate" option
  2017-03-17  0:25 ` Wiles, Keith
@ 2017-03-17  7:50   ` Wiles, Keith
  2017-03-21  9:23     ` Julien Castets
  2017-03-17 12:59   ` Julien Castets
  1 sibling, 1 reply; 5+ messages in thread
From: Wiles, Keith @ 2017-03-17  7:50 UTC (permalink / raw)
  To: Julien Castets; +Cc: users



Sent from my iPhone

> On Mar 17, 2017, at 8:26 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
> 
> 
>> On Mar 16, 2017, at 9:33 PM, Julien Castets <castets.j@gmail.com> wrote:
>> 
>> I'm benchmarking my DPDK application with pktgen and I don't understand how the
>> "rate" parameter works.
>> 
>> Let's run pktgen:
>> 
>> $> pktgen -l '0-11' --master-lcore=0 -w 03:00.0 -- -f /home/jcastets/test.lua -m '[1].0'
>> 
>> (for information, my test.lua is at the bottom of this email)
>> 
>> Here, pktgen generates traffic with one core.
>> 
>> 
>> If I set the pktgen "rate" to "20" and the packet size to "64", I would expect
>> it to generate traffic at 20% of my NIC's maximum speed. In my case, with my
>> 40Gbps NIC, that means 8Gbps of traffic.
>> 
>> And indeed, my expectation holds: with this configuration, pktgen generates
>> exactly 8Gbps.
>> 
>> However:
>> 
>> * If I run pktgen with 2 cores (-m '[1-2].0' instead of -m '[1].0'), pktgen
>> generates 28Gbps.
>> * If I run pktgen with 1 core, keep the "rate" at "20" but set the pktsize at
>> "128", pktgen generates 14Gbps (where I'd expect it to still generate 8Gbps).
> 
> Be careful in how you set up and change the size of a packet. Changing the size while Pktgen is running can lead to some weird performance: the packets already in flight are not changed to the new size, or they are stuck on the TX done queue. The new TX flush API needs to be added to the PMDs; then, in Pktgen, I can flush all of the buffers back to the mempool and alter the size of the mbufs. You did not state the version of Pktgen you are using. DPDK 16.11 added some support for Pktgen to locate and update all of the mbufs, but I think that is kind of a hack on my part.
> 
> 
> The code in Pktgen uses your rate and packet size to determine how many packets to send per interval to obtain your desired rate. I thought the calculation accounted for the number of TX cores, but maybe that is a bug; I would have expected 16Gbps in the two-core case if I did not account for that config. I do not have direct access to the machine to verify the code.
> 
> Try stopping the traffic before changing any of the configs. Changing the rate should work while Pktgen is sending, but if the packet size needs to change, the traffic should be halted first. Also, if you want to do some more debugging, try quitting Pktgen between configurations and see if that works better, though having to quit all the time is not how I intended Pktgen to be used.
> 
> I will try to have a look this weekend when I get back home.
> 

In file app/pktgen.c, around line 113, I calculate tx_cycles and divide by the TX port count, but it should be multiplied instead.

Let me know if that works. 


>> 
>> 
>> Can anyone please clarify how the rate/pktsize/cores relation works?
>> 
>> 
>> 
>> ### test.lua
>> 
>> pktgen.stop("all");
>> pktgen.reset("all");
>> pktgen.clear("all");
>> 
>> pktgen.src_mac("all", "start", "3c:fd:fe:a2:2c:88");
>> pktgen.dst_mac("all", "start", "3c:fd:fe:a2:2b:b0");
>> 
>> pktgen.src_ip("all", "start", "192.168.1.1");
>> pktgen.src_ip("all", "min", "0.0.0.0");
>> pktgen.src_ip("all", "inc", "0.0.0.0");
>> pktgen.src_ip("all", "max", "0.0.0.0");
>> 
>> pktgen.src_port("all", "start", 1234);
>> pktgen.src_port("all", "min", 1234);
>> pktgen.src_port("all", "inc", 1);
>> pktgen.src_port("all", "max", 4096);
>> 
>> pktgen.ip_proto("all", "udp");
>> 
>> 
>> pktgen.dst_ip("all", "start", "192.168.100.1");
>> pktgen.dst_ip("all", "min", "192.168.100.1");
>> pktgen.dst_ip("all", "inc", "0.0.0.1");
>> pktgen.dst_ip("all", "max", "192.168.100.99");
>> 
>> pktgen.dst_port("all", "start", 1234);
>> pktgen.dst_port("all", "min", 0);
>> pktgen.dst_port("all", "inc", 0);
>> pktgen.dst_port("all", "max", 0);
>> 
>> 
>> pktgen.vlan("all", "off");
>> pktgen.vlan_id("all", "start", 1024);
>> pktgen.vlan_id("all", "min", 1);
>> pktgen.vlan_id("all", "inc", 5);
>> pktgen.vlan_id("all", "max", 3100);
>> 
>> pktgen.pkt_size("all", "start", 128); -- this is the parameter I'm trying to adjust
>> pktgen.pkt_size("all", "min", 0);
>> pktgen.pkt_size("all", "inc", 0);
>> pktgen.pkt_size("all", "max", 0);
>> 
>> pktgen.set_range("all", "on");
>> 
>> pktgen.set("all", "count", 0); -- count 0 is forever
>> pktgen.set("all", "rate", 20); -- this is the parameter I'm trying to adjust
>> pktgen.start("all");
>> 
>> -- 
>> Julien Castets
> 
> Regards,
> Keith
> 


* Re: [dpdk-users] Behavior of pktgen's "rate" option
  2017-03-17  0:25 ` Wiles, Keith
  2017-03-17  7:50   ` Wiles, Keith
@ 2017-03-17 12:59   ` Julien Castets
  1 sibling, 0 replies; 5+ messages in thread
From: Julien Castets @ 2017-03-17 12:59 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

On Fri, Mar 17, 2017 at 1:25 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
>
> Be careful in how you set up and change the size of a packet. Changing the size while Pktgen is running can lead to some weird performance: the packets already in flight are not changed to the new size, or they are stuck on the TX done queue. The new TX flush API needs to be added to the PMDs; then, in Pktgen, I can flush all of the buffers back to the mempool and alter the size of the mbufs. You did not state the version of Pktgen you are using. DPDK 16.11 added some support for Pktgen to locate and update all of the mbufs, but I think that is kind of a hack on my part.

Pktgen Ver: 3.1.2 (DPDK 17.02.0)

> The code in Pktgen uses your rate and packet size to determine how many packets to send per interval to obtain your desired rate. I thought the calculation accounted for the number of TX cores, but maybe that is a bug; I would have expected 16Gbps in the two-core case if I did not account for that config. I do not have direct access to the machine to verify the code.
>
> Try stopping the traffic before changing any of the configs. Changing the rate should work while Pktgen is sending, but if the packet size needs to change, the traffic should be halted first. Also, if you want to do some more debugging, try quitting Pktgen between configurations and see if that works better, though having to quit all the time is not how I intended Pktgen to be used.

To generate traffic, I'm loading the lua configuration file (attached
to my first email) from pktgen.

This lua file starts with:

pktgen.stop("all");
pktgen.reset("all");
pktgen.clear("all");

So I'm not changing parameters while pktgen is sending.

Anyway, the results are the same if I stop pktgen between tests:

* 1 core, pktsize 64 bytes, rate 20: 8Gbps
* 2 cores, pktsize 64 bytes, rate 20: 28Gbps
* 1 core, pktsize 128 bytes, rate 20: 14Gbps

> I will try to have a look this weekend when I get back home.

Awesome!


-- 
Julien Castets


* Re: [dpdk-users] Behavior of pktgen's "rate" option
  2017-03-17  7:50   ` Wiles, Keith
@ 2017-03-21  9:23     ` Julien Castets
  0 siblings, 0 replies; 5+ messages in thread
From: Julien Castets @ 2017-03-21  9:23 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

On Fri, Mar 17, 2017 at 8:50 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
>
> In file app/pktgen.c, around line 113, I calculate tx_cycles and divide by the TX port count, but it should be multiplied instead.
>
> Let me know if that works.
>

I misunderstood your email.

Replacing the / with a * works fine.

Thanks a lot!
-- 
Julien Castets

