DPDK patches and discussions
* [dpdk-dev] [PATCH] doc: fix default burst size
@ 2021-05-28 17:45 Ajit Khaparde
  2021-06-17  7:47 ` Thomas Monjalon
  0 siblings, 1 reply; 2+ messages in thread
From: Ajit Khaparde @ 2021-05-28 17:45 UTC (permalink / raw)
  To: dev; +Cc: stable, bruce.richardson, xiaoyun.li, ferruh.yigit, andrew.rybchenko


The default burst size in testpmd was changed from 16 to 32 some time ago,
but the documentation was not updated to match.

Fixes: 836853d3d4cf7 ("app/testpmd: increase default burst size to 32")
Cc: stable@dpdk.org
Cc: bruce.richardson@intel.com
Cc: xiaoyun.li@intel.com
Cc: ferruh.yigit@intel.com
Cc: andrew.rybchenko@oktetlabs.ru

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/prog_guide/writing_efficient_code.rst | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index 7baeaae431..a61e8320ae 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -143,20 +143,21 @@ In order to achieve higher throughput,
 the DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
 
 Using the testpmd application as an example,
-the burst size can be set on the command line to a value of 16 (also the default value).
-This allows the application to request 16 packets at a time from the PMD.
+the burst size can be set on the command line to a value of 32 (also the default value).
+This allows the application to request 32 packets at a time from the PMD.
 The testpmd application then immediately attempts to transmit all the packets that were received,
-in this case, all 16 packets.
+in this case, all 32 packets.
 
 The packets are not transmitted until the tail pointer is updated on the corresponding TX queue of the network port.
 This behavior is desirable when tuning for high throughput because
-the cost of tail pointer updates to both the RX and TX queues can be spread across 16 packets,
+the cost of tail pointer updates to both the RX and TX queues can be spread
+across 32 packets,
 effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
 However, this is not very desirable when tuning for low latency because
-the first packet that was received must also wait for another 15 packets to be received.
-It cannot be transmitted until the other 15 packets have also been processed because
+the first packet that was received must also wait for another 31 packets to be received.
+It cannot be transmitted until the other 31 packets have also been processed because
 the NIC will not know to transmit the packets until the TX tail pointer has been updated,
-which is not done until all 16 packets have been processed for transmission.
+which is not done until all 32 packets have been processed for transmission.
 
 To consistently achieve low latency, even under heavy system load,
 the application developer should avoid processing packets in bunches.
-- 
2.21.1 (Apple Git-122.3)
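
The burst behaviour the patched text describes corresponds to a forwarding
loop along the lines of the minimal sketch below; the function name, the
port/queue IDs and the PKT_BURST macro are illustrative only, while
rte_eth_rx_burst()/rte_eth_tx_burst() are the standard ethdev burst calls.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PKT_BURST 32	/* matches testpmd's default burst size */

static void
forward_loop(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_mbuf *pkts[PKT_BURST];

	for (;;) {
		/* Ask the PMD for up to 32 packets in a single call. */
		uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, PKT_BURST);

		if (nb_rx == 0)
			continue;

		/*
		 * A single TX burst updates the TX tail pointer once, so the
		 * MMIO write is amortised over all nb_rx packets; good for
		 * throughput, but the first packet received waits until the
		 * whole burst has been processed.
		 */
		uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);

		/* Free any packets the TX queue could not accept. */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}

When tuning for low latency instead, the burst size can be reduced with
testpmd's --burst=N command-line option, at the cost of more frequent tail
pointer updates.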

