DPDK usage discussions
* [dpdk-users] Transmit Chained Buffers On mlx5
@ 2019-10-08 21:35 Cliff Burdick
From: Cliff Burdick @ 2019-10-08 21:35 UTC (permalink / raw)
  To: users

Hi, I'm trying to figure out how to transmit a chained buffer, and I can't
find any examples or threads on this mailing list showing how. My assumptions are:

1) Only the first mbuf contains the Ethernet and other packet headers, and
all the rest are essentially just appended to the end of that.
2) On transmit, the driver will coalesce all segments of the chain into
a single jumbo frame on the wire.
3) DEV_TX_OFFLOAD_MULTI_SEGS must be set in the TX offloads.

After setting everything up as above, I construct a chain of buffers using
rte_pktmbuf_chain. When I call rte_eth_tx_burst, it reports 1 packet sent
for the first packet, which has 2 segments, but 0 packets sent for packets
containing 20+ segments. Is there some limitation on chaining that I'm
hitting? The maximum number of segments per chain seems to be UINT16_MAX in
DPDK (nb_segs is a uint16_t), so 20 should be well within bounds.
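For reference, here is a minimal sketch of the transmit path I described. It assumes rte_eal_init() has already run, the port is started with DEV_TX_OFFLOAD_MULTI_SEGS enabled in its TX offloads, and `pool` is an initialized mempool; the names NSEGS, SEG_LEN, and send_chained are just illustrative, not from my actual code.

```c
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define NSEGS   20   /* number of segments in the chain (illustrative) */
#define SEG_LEN 512  /* bytes appended per segment (illustrative) */

/* Build an NSEGS-segment chain and transmit it as one packet. */
static int
send_chained(struct rte_mempool *pool, uint16_t port, uint16_t queue)
{
    /* First mbuf carries the Ethernet/IP headers plus payload. */
    struct rte_mbuf *head = rte_pktmbuf_alloc(pool);
    if (head == NULL || rte_pktmbuf_append(head, SEG_LEN) == NULL)
        goto fail;

    /* Remaining mbufs are pure payload appended to the chain;
     * rte_pktmbuf_chain() updates head's nb_segs and pkt_len. */
    for (int i = 1; i < NSEGS; i++) {
        struct rte_mbuf *seg = rte_pktmbuf_alloc(pool);
        if (seg == NULL || rte_pktmbuf_append(seg, SEG_LEN) == NULL ||
            rte_pktmbuf_chain(head, seg) != 0) {
            rte_pktmbuf_free(seg);
            goto fail;
        }
    }

    /* head->nb_segs is now NSEGS, pkt_len is NSEGS * SEG_LEN. */
    uint16_t sent = rte_eth_tx_burst(port, queue, &head, 1);
    if (sent == 0)           /* this is the failure case I'm hitting */
        rte_pktmbuf_free(head);
    return sent;

fail:
    rte_pktmbuf_free(head);
    return -1;
}
```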

I'm using a ConnectX-5 with the mlx5 PMD and DPDK 19.05.

