DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	dev@dpdk.org, thomasm@monjalon.net, stephen@networkplumber.org,
	jerinjacobk@gmail.com, maxime.coquelin@redhat.com,
	david.marchand@redhat.com, arybchenko@solarflare.com
Subject: Re: [dpdk-dev] [PATCH v5 1/6] ethdev: introduce Rx buffer split
Date: Wed, 14 Oct 2020 15:31:38 +0200	[thread overview]
Message-ID: <20201014133138.GR21395@platinum> (raw)
In-Reply-To: <eec0d043-306b-bfa4-f45c-8c650782eaf5@intel.com>

Hi Slava,

After reviewing this version and comparing with v4, I think having the
configuration in rxconf (like in this patch) is a better choice.
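
For reference, here is a minimal sketch of how I understand an application
would use it with this patch (hdr_pool/data_pool and the usual port_id,
nb_rxd, socket_id, dev_info and ret variables are assumed to be set up
elsewhere; not tested):

	/* Split each packet: up to 64B of headers go to a buffer from
	 * hdr_pool, the rest to buffers from data_pool (length == 0 means
	 * "deduce the size from the pool buffer size"). */
	struct rte_eth_rxseg segs[2] = {
		{ .mp = hdr_pool, .length = 64, .offset = 0 },
		{ .mp = data_pool, .length = 0, .offset = 0 },
	};
	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

	rxconf.rx_seg = segs;
	rxconf.rx_nseg = RTE_DIM(segs);
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
	/* DEV_RX_OFFLOAD_SCATTER is assumed to be already enabled. */

	/* mb_pool must be NULL when rx_seg/rx_nseg are provided. */
	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				     &rxconf, NULL);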

A few comments below.

On Tue, Oct 13, 2020 at 11:34:16PM +0100, Ferruh Yigit wrote:
> On 10/13/2020 8:21 PM, Viacheslav Ovsiienko wrote:
> > The DPDK datapath in the transmit direction is very flexible.
> > An application can build a multi-segment packet and manage
> > almost all data aspects - the memory pools the segments are
> > allocated from, the segment lengths, the memory attributes
> > such as external buffers registered for DMA, etc.
> > 
> > In the receive direction, the datapath is much less flexible:
> > an application can only specify the memory pool used to configure
> > the receive queue, and nothing more. To extend the receive
> > datapath capabilities, it is proposed to add a way to provide
> > extended information on how to split the packets being received.
> > 
> > The following structure is introduced to specify the Rx packet
> > segment:
> > 
> > struct rte_eth_rxseg {
> >      struct rte_mempool *mp; /* memory pool to allocate segment from */
> >      uint16_t length; /* segment maximal data length,
> > 		       	configures "split point" */
> >      uint16_t offset; /* data offset from beginning
> > 		       	of mbuf data buffer */
> >      uint32_t reserved; /* reserved field */
> > };
> > 
> > The segment descriptions are added to the rte_eth_rxconf structure:
> >     rx_seg - pointer to the array of segment descriptions, each element
> >              describes the memory pool, maximal data length, and initial
> >              data offset from the beginning of the data buffer in the mbuf.
> >              This array allows specifying different settings for each
> >              segment individually.
> >     n_seg - number of elements in the array
> > 
> > If the extended segment descriptions are provided via these new
> > fields, the mp parameter of rte_eth_rx_queue_setup() must be
> > specified as NULL to avoid ambiguity.
> > 
> > The new offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is introduced
> > in the device capabilities to let a PMD report to the application
> > that it supports splitting Rx packets into configurable segments.
> > Before invoking the rte_eth_rx_queue_setup() routine, the
> > application should check the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag.
> > 
> > If the Rx queue is configured with these new settings, the received
> > packets will be split into multiple segments pushed to mbufs with
> > the specified attributes. The PMD will split the received packets
> > into multiple segments according to the specification in the
> > description array:
> > 
> > - the first network buffer will be allocated from the memory pool
> >    specified in the first segment description element, the second
> >    network buffer from the pool in the second segment description
> >    element, and so on. If there are not enough elements to describe
> >    the buffer for an entire packet of maximal length, the pool from
> >    the last valid element will be used to allocate the buffers for
> >    the remaining segments
> > 
> > - the offsets from the segment description elements provide the
> >    data offset from the beginning of the buffer, except for the first
> >    mbuf: for this one the offset is added to RTE_PKTMBUF_HEADROOM to
> >    get the actual offset from the buffer beginning. If there are not
> >    enough elements to describe the buffer for an entire packet of
> >    maximal length, the offsets for the remaining segments are assumed
> >    to be zero.
> > 
> > - the data length received into each segment is limited by the
> >    length specified in the segment description element. Reception
> >    starts by filling up the first mbuf data buffer; if the specified
> >    maximal segment length is reached and data remain (the packet is
> >    longer than the buffer in the first mbuf), the following data are
> >    pushed to the next segment up to its own maximal length. If the
> >    first two segments are not enough to store all the remaining
> >    packet data, the next (third) segment is used, and so on. If the
> >    length in the segment description element is zero, the actual
> >    buffer size is deduced from the corresponding memory pool
> >    properties. If there are not enough elements to describe the
> >    buffer for an entire packet of maximal length, the buffer size
> >    for the remaining segments is deduced from the pool of the last
> >    valid element.
> > 
> > For example, let's suppose we configured the Rx queue with the
> > following segments:
> >      seg0 - pool0, len0=14B, off0=2B
> >      seg1 - pool1, len1=20B, off1=128B
> >      seg2 - pool2, len2=20B, off2=0B
> >      seg3 - pool3, len3=512B, off3=0B
> > 
> > A 46-byte packet will look like the following:
> >      seg0 - 14B long @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
> >      seg1 - 20B long @ 128 in mbuf from pool1
> >      seg2 - 12B long @ 0 in mbuf from pool2
> > 
> > A 1500-byte packet will look like the following:
> >      seg0 - 14B @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
> >      seg1 - 20B @ 128 in mbuf from pool1
> >      seg2 - 20B @ 0 in mbuf from pool2
> >      seg3 - 512B @ 0 in mbuf from pool3
> >      seg4 - 512B @ 0 in mbuf from pool3
> >      seg5 - 422B @ 0 in mbuf from pool3
> > 
> > The RTE_ETH_RX_OFFLOAD_SCATTER offload must be present and
> > configured to support the new buffer split feature (if n_seg
> > is greater than one).
> > 
> > The new approach allows splitting ingress packets into multiple
> > parts pushed to memory with different attributes. For example,
> > the packet headers can be pushed to the embedded data buffers
> > within mbufs, and the application data to external buffers
> > attached to mbufs allocated from different memory pools. The
> > memory attributes of the split parts may also differ - for
> > example, the application data may be pushed into external memory
> > located on a dedicated physical device, say a GPU or NVMe. This
> > improves the flexibility of the DPDK receive datapath while
> > preserving compatibility with the existing API.
> > 
> > Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > ---
> >   doc/guides/nics/features.rst             | 15 +++++
> >   doc/guides/rel_notes/release_20_11.rst   |  9 +++
> >   lib/librte_ethdev/rte_ethdev.c           | 95 ++++++++++++++++++++++++--------
> >   lib/librte_ethdev/rte_ethdev.h           | 58 ++++++++++++++++++-
> >   lib/librte_ethdev/rte_ethdev_version.map |  1 +
> 
> Can you please also update the deprecation notice, i.e. remove it?
> 
> >   5 files changed, 155 insertions(+), 23 deletions(-)
> > 
> > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> > index dd8c955..a45a9e8 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -185,6 +185,21 @@ Supports receiving segmented mbufs.
> >   * **[related]    eth_dev_ops**: ``rx_pkt_burst``.
> > +.. _nic_features_buffer_split:
> > +
> > +Buffer Split on Rx
> > +------------------
> > +
> > +Scatters the packets being received on specified boundaries to segmented mbufs.
> > +
> > +* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> 
> It uses 'rx_seg' and 'rx_nseg' from 'rte_eth_rxconf'; perhaps it can be another line.
> 
> > +* **[implements] datapath**: ``Buffer Split functionality``.
> > +* **[implements] rte_eth_dev_data**: ``buffer_split``.
> 
> What is implemented here?
> 
> > +* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> > +* **[provides]   eth_dev_ops**: ``rxq_info_get:buffer_split``.
> 
> Is this correct?
> 
> > +* **[related] API**: ``rte_eth_rx_queue_setup()``.
> > +
> > +
> >   .. _nic_features_lro:
> >   LRO
> > diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> > index bcc0fc2..f67ec62 100644
> > --- a/doc/guides/rel_notes/release_20_11.rst
> > +++ b/doc/guides/rel_notes/release_20_11.rst
> > @@ -60,6 +60,12 @@ New Features
> >     Added the FEC API which provides functions for query FEC capabilities and
> >     current FEC mode from device. Also, API for configuring FEC mode is also provided.
> > +* **Introduced extended buffer description for receiving.**
> > +
> > +  Added the extended Rx queue setup routine providing the individual
> 
> This looks wrong with the last version.
> 
> > +  descriptions for each Rx segment with maximal size, buffer offset and memory
> > +  pool to allocate data buffers from.
> > +
> >   * **Updated Broadcom bnxt driver.**
> >     Updated the Broadcom bnxt driver with new features and improvements, including:
> > @@ -253,6 +259,9 @@ API Changes
> >     As the data of ``uint8_t`` will be truncated when queue number under
> >     a TC is greater than 256.
> > +* ethdev: Added fields rx_seg and rx_nseg to rte_eth_rxconf structure
> > +  to provide extended description of the receiving buffer.
> > +
> >   * vhost: Moved vDPA APIs from experimental to stable.
> >   * rawdev: Added a structure size parameter to the functions
> > diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> > index 892c246..7b64a6e 100644
> > --- a/lib/librte_ethdev/rte_ethdev.c
> > +++ b/lib/librte_ethdev/rte_ethdev.c
> > @@ -105,6 +105,9 @@ struct rte_eth_xstats_name_off {
> >   #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
> >   	{ DEV_RX_OFFLOAD_##_name, #_name }
> > +#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
> > +	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
> > +
> >   static const struct {
> >   	uint64_t offload;
> >   	const char *name;
> > @@ -128,9 +131,11 @@ struct rte_eth_xstats_name_off {
> >   	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
> >   	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
> >   	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
> > +	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
> >   };
> >   #undef RTE_RX_OFFLOAD_BIT2STR
> > +#undef RTE_ETH_RX_OFFLOAD_BIT2STR
> >   #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
> >   	{ DEV_TX_OFFLOAD_##_name, #_name }
> > @@ -1770,10 +1775,14 @@ struct rte_eth_dev *
> >   		       struct rte_mempool *mp)
> >   {
> >   	int ret;
> > +	uint16_t seg_idx;
> >   	uint32_t mbp_buf_size;
> >   	struct rte_eth_dev *dev;
> >   	struct rte_eth_dev_info dev_info;
> >   	struct rte_eth_rxconf local_conf;
> > +	const struct rte_eth_rxseg *rx_seg;
> > +	struct rte_eth_rxseg seg_single = { .mp = mp};

missing space

> > +	uint16_t n_seg;
> >   	void **rxq;
> >   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> > @@ -1784,13 +1793,32 @@ struct rte_eth_dev *
> >   		return -EINVAL;
> >   	}
> > -	if (mp == NULL) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid null mempool pointer\n");
> > -		return -EINVAL;
> > -	}
> > -
> >   	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
> > +	rx_seg = rx_conf->rx_seg;
> > +	n_seg = rx_conf->rx_nseg;
> > +	if (rx_seg == NULL) {
> > +		/* Exclude ambiguities about segment descrtiptions. */
> > +		if (n_seg) {
> > +			RTE_ETHDEV_LOG(ERR,
> > +				       "Non empty array with null pointer\n");
> > +			return -EINVAL;
> > +		}
> > +		rx_seg = &seg_single;
> > +		n_seg = 1;
> 
> Why set 'rx_seg' & 'n_seg'? Why not leave them NULL and 0 when not used?
> This way the PMD can do a NULL/0 check and know they are not used.

I think they are just set locally to factorize the checks below, but I agree it
is questionable: it seems the only check that is really factorized is the one
about private_data_size.

> I think it is better to do an "if (mp == NULL)" check here; both 'rx_seg' and
> 'mp' should not be NULL.

Agree, something like this looks more simple:

	if (mp == NULL) {
		if (n_seg == 0 || rx_seg == NULL)
			RTE_ETHDEV_LOG(ERR, "...");
	} else {
		if (n_seg != 0 || rx_seg != NULL)
			RTE_ETHDEV_LOG(ERR, "...");
		rx_seg = &seg_single;
		n_seg = 1;
	}

 
> > +	} else {
> > +		if (n_seg == 0) {
> > +			RTE_ETHDEV_LOG(ERR,
> > +				       "Invalid zero descriptions number\n");
> > +			return -EINVAL;
> > +		}
> > +		if (mp) {
> > +			RTE_ETHDEV_LOG(ERR,
> > +				       "Memory pool duplicated definition\n");
> > +			return -EINVAL;
> > +		}
> > +	}
> > +
> >   	/*
> >   	 * Check the size of the mbuf data buffer.
> >   	 * This value must be provided in the private data of the memory pool.
> > @@ -1800,23 +1828,48 @@ struct rte_eth_dev *
> >   	if (ret != 0)
> >   		return ret;
> > -	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
> > -		RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
> > -			mp->name, (int)mp->private_data_size,
> > -			(int)sizeof(struct rte_pktmbuf_pool_private));
> > -		return -ENOSPC;
> > -	}
> > -	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
> > +	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
> > +		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
> > +		uint32_t length = rx_seg[seg_idx].length;
> > +		uint32_t offset = rx_seg[seg_idx].offset;
> > +		uint32_t head_room = seg_idx ? 0 : RTE_PKTMBUF_HEADROOM;
> > -	if (mbp_buf_size < dev_info.min_rx_bufsize + RTE_PKTMBUF_HEADROOM) {
> > -		RTE_ETHDEV_LOG(ERR,
> > -			"%s mbuf_data_room_size %d < %d (RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)=%d)\n",
> > -			mp->name, (int)mbp_buf_size,
> > -			(int)(RTE_PKTMBUF_HEADROOM + dev_info.min_rx_bufsize),
> > -			(int)RTE_PKTMBUF_HEADROOM,
> > -			(int)dev_info.min_rx_bufsize);
> > -		return -EINVAL;
> > +		if (mpl == NULL) {
> > +			RTE_ETHDEV_LOG(ERR, "Invalid null mempool pointer\n");
> > +			return -EINVAL;
> > +		}
> > +
> > +		if (mpl->private_data_size <
> > +				sizeof(struct rte_pktmbuf_pool_private)) {
> > +			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
> > +				mpl->name, (int)mpl->private_data_size,
> > +				(int)sizeof(struct rte_pktmbuf_pool_private));
> > +			return -ENOSPC;
> > +		}
> > +
> > +		mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
> > +		length = length ? length : (mbp_buf_size - head_room);
> > +		if (mbp_buf_size < length + offset + head_room) {


Is length == 0 allowed? Or is it there to handle the case where mp != NULL?
Is the test needed in that case? It seems it is equivalent to doing "if
(offset > 0)".

Wouldn't it be better to check mp like this?

	if (mp == NULL) {
		if (mbp_buf_size < length + offset + head_room)
			...error...
	}


> > +			RTE_ETHDEV_LOG(ERR,
> > +				"%s mbuf_data_room_size %u < %u"
> > +				" (segment length=%u + segment offset=%u)\n",
> > +				mpl->name, mbp_buf_size,
> > +				length + offset, length, offset);
> > +			return -EINVAL;
> > +		}
> >   	}
> > +	/* Check the minimal buffer size for the single segment only. */
> 
> This is the main branch, what do you think about moving the comment to the
> beginning of the above for loop and adding a comment about testing the
> multiple segment case?
> 
> Btw, I have a concern that this single/multi segment naming can cause
> confusion with multi-segment packets. Can something else, like "split
> package", be used instead of segment?
> 
> > +	if (mp && (mbp_buf_size < dev_info.min_rx_bufsize +
> > +				  RTE_PKTMBUF_HEADROOM)) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			       "%s mbuf_data_room_size %u < %u "
> > +			       "(RTE_PKTMBUF_HEADROOM=%u + "
> > +			       "min_rx_bufsize(dev)=%u)\n",
> > +			       mp->name, mbp_buf_size,
> > +			       RTE_PKTMBUF_HEADROOM + dev_info.min_rx_bufsize,
> > +			       RTE_PKTMBUF_HEADROOM, dev_info.min_rx_bufsize);
> > +			return -EINVAL;
> > +		}

The "}" is not indented correctly


> >   	/* Use default specified by driver, if nb_rx_desc is zero */
> >   	if (nb_rx_desc == 0) {
> > @@ -1914,8 +1967,6 @@ struct rte_eth_dev *
> >   			dev->data->min_rx_buf_size = mbp_buf_size;
> >   	}
> > -	rte_ethdev_trace_rxq_setup(port_id, rx_queue_id, nb_rx_desc, mp,
> > -		rx_conf, ret);
> 
> Is this removed intentionally?
> 
> >   	return eth_err(port_id, ret);
> >   }
> > diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> > index 5bcfbb8..9cf0a03 100644
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -970,6 +970,16 @@ struct rte_eth_txmode {
> >   };
> >   /**
> > + * A structure used to configure an RX packet segment to split.
> > + */
> > +struct rte_eth_rxseg {
> > +	struct rte_mempool *mp; /**< Memory pools to allocate segment from */

pools -> pool

> > +	uint16_t length; /**< Segment data length, configures split point. */
> > +	uint16_t offset; /**< Data offset from beginning of mbuf data buffer */
> > +	uint32_t reserved; /**< Reserved field */

To allow future additions to the structure, should the reserved field
always be set to 0? If yes, maybe it should be specified here and
checked in rte_eth_rx_queue_setup().
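
For instance (just an illustration), the per-segment loop in
rte_eth_rx_queue_setup() could do:

	if (rx_seg[seg_idx].reserved != 0) {
		RTE_ETHDEV_LOG(ERR,
			       "Rx segment reserved field must be zero\n");
		return -EINVAL;
	}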

Some "." missing at the end of comments.

> > +};
> > +
> > +/**
> >    * A structure used to configure an RX ring of an Ethernet port.
> >    */
> >   struct rte_eth_rxconf {
> > @@ -977,13 +987,23 @@ struct rte_eth_rxconf {
> >   	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
> >   	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> >   	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > +	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
> > +	/**
> > +	 * The pointer to the array of segment descriptions, each element
> > +	 * describes the memory pool, maximal segment data length, initial
> > +	 * data offset from the beginning of data buffer in mbuf. This allow

allow -> allows
or maybe "This allows to specify" -> "This specifies"

> > +	 * to specify the dedicated properties for each segment in the receiving
> > +	 * buffer - pool, buffer offset, maximal segment size. The number of
> > +	 * segment descriptions in the array is specified by the rx_nseg
> > +	 * field.
> > +	 */
> 
> What do you think about providing a short description here, and moving the
> above comment to above the "struct rte_eth_rxseg" struct?

+1

> 
> > +	struct rte_eth_rxseg *rx_seg;
> >   	/**
> >   	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> >   	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> >   	 * fields on rte_eth_dev_info structure are allowed to be set.
> >   	 */
> >   	uint64_t offloads;
> > -
> 
> unrelated
> 
> >   	uint64_t reserved_64s[2]; /**< Reserved for future fields */
> >   	void *reserved_ptrs[2];   /**< Reserved for future fields */
> >   };
> > @@ -1260,6 +1280,7 @@ struct rte_eth_conf {
> >   #define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
> >   #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> >   #define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
> > +#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> >   #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> >   				 DEV_RX_OFFLOAD_UDP_CKSUM | \
> > @@ -2027,6 +2048,41 @@ int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue,
> >    *   No need to repeat any bit in rx_conf->offloads which has already been
> >    *   enabled in rte_eth_dev_configure() at port level. An offloading enabled
> >    *   at port level can't be disabled at queue level.
> 
> Would it be possible to put a kind of marker here, like "@rx_seg & @rx_nseg",
> to clarify what you are talking about?
> 
> > + *   The configuration structure also contains the pointer to the array
> > + *   of the receiving buffer segment descriptions, each element describes
> > + *   the memory pool, maximal segment data length, initial data offset from
> > + *   the beginning of data buffer in mbuf. This allow to specify the dedicated
> > + *   properties for each segment in the receiving buffer - pool, buffer
> > + *   offset, maximal segment size. If RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload
> > + *   flag is configured the PMD will split the received packets into multiple
> > + *   segments according to the specification in the description array:
> > + *   - the first network buffer will be allocated from the memory pool,
> > + *     specified in the first segment description element, the second
> > + *     network buffer - from the pool in the second segment description
> > + *     element and so on. If there is no enough elements to describe
> > + *     the buffer for entire packet of maximal length the pool from the last
> > + *     valid element will be used to allocate the buffers from for the rest
> > + *     of segments.
> > + *   - the offsets from the segment description elements will provide the
> > + *     data offset from the buffer beginning except the first mbuf - for this
> > + *     one the offset is added to the RTE_PKTMBUF_HEADROOM to get actual
> > + *     offset from the buffer beginning. If there is no enough elements
> > + *     to describe the buffer for entire packet of maximal length the offsets
> > + *     for the rest of segment will be supposed to be zero.
> > + *   - the data length being received to each segment is limited by the
> > + *     length specified in the segment description element. The data receiving
> > + *     starts with filling up the first mbuf data buffer, if the specified
> > + *     maximal segment length is reached and there are data remaining
> > + *     (packet is longer than buffer in the first mbuf) the following data
> > + *     will be pushed to the next segment up to its own length. If the first
> > + *     two segments is not enough to store all the packet data the next
> > + *     (third) segment will be engaged and so on. If the length in the segment
> > + *     description element is zero the actual buffer size will be deduced
> > + *     from the appropriate memory pool properties. If there is no enough
> > + *     elements to describe the buffer for entire packet of maximal length
> > + *     the buffer size will be deduced from the pool of the last valid
> > + *     element for the all remaining segments.
> > + *
> 
> I think as a first thing the comment should clarify that if @rx_seg is
> provided, 'mb_pool' should be NULL, and if the split Rx feature is not used,
> "@rx_seg & @rx_nseg" should be NULL and 0.
> 
> Also, the above is too wordy and hard to follow. For example, the fact that
> "@rx_seg & @rx_nseg" are only taken into account if the application provides
> the 'RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT' offload should be easier to see, etc.
> Can you try to simplify it? Perhaps moving some of the above comments to
> "struct rte_eth_rxseg" can work.
> 
> >    * @param mb_pool
> >    *   The pointer to the memory pool from which to allocate *rte_mbuf* network
> >    *   memory buffers to populate each descriptor of the receive ring.
> > diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
> > index f8a0945..25f7cee 100644
> > --- a/lib/librte_ethdev/rte_ethdev_version.map
> > +++ b/lib/librte_ethdev/rte_ethdev_version.map
> > @@ -232,6 +232,7 @@ EXPERIMENTAL {
> >   	rte_eth_fec_get_capability;
> >   	rte_eth_fec_get;
> >   	rte_eth_fec_set;
> > +
> 
> unrelated

Thread overview: 172+ messages
2020-08-17 17:49 [dpdk-dev] [RFC] " Slava Ovsiienko
2020-09-17 16:55 ` Andrew Rybchenko
2020-10-01  8:54   ` Slava Ovsiienko
2020-10-12  8:45     ` Andrew Rybchenko
2020-10-12  9:56       ` Slava Ovsiienko
2020-10-12 15:14         ` Thomas Monjalon
2020-10-12 15:28           ` Ananyev, Konstantin
2020-10-12 15:34             ` Slava Ovsiienko
2020-10-12 15:56               ` Ananyev, Konstantin
2020-10-12 15:59                 ` Slava Ovsiienko
2020-10-12 16:52                 ` Thomas Monjalon
2020-10-12 16:03           ` Andrew Rybchenko
2020-10-12 16:10             ` Slava Ovsiienko
2020-10-13 21:59         ` Ferruh Yigit
2020-10-14  7:17           ` Thomas Monjalon
2020-10-14  7:37           ` Slava Ovsiienko
2020-10-05  6:26 ` [dpdk-dev] [PATCH 0/5] " Viacheslav Ovsiienko
2020-10-05  6:26   ` [dpdk-dev] [PATCH 1/5] " Viacheslav Ovsiienko
2020-10-05  6:26   ` [dpdk-dev] [PATCH 2/5] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-05  6:26   ` [dpdk-dev] [PATCH 3/5] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-05  6:26   ` [dpdk-dev] [PATCH 4/5] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-05  6:26   ` [dpdk-dev] [PATCH 5/5] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-07 15:06 ` [dpdk-dev] [PATCH v2 0/9] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 1/9] " Viacheslav Ovsiienko
2020-10-11 22:17     ` Thomas Monjalon
2020-10-12  9:40       ` Slava Ovsiienko
2020-10-12 10:09         ` Thomas Monjalon
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 2/9] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 3/9] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 4/9] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 5/9] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 6/9] net/mlx5: add extended Rx queue setup routine Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 7/9] net/mlx5: configure Rx queue to support split Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 8/9] net/mlx5: register multiple pool for Rx queue Viacheslav Ovsiienko
2020-10-07 15:06   ` [dpdk-dev] [PATCH v2 9/9] net/mlx5: update Rx datapath to support split Viacheslav Ovsiienko
2020-10-12 16:19 ` [dpdk-dev] [PATCH v3 0/9] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 1/9] " Viacheslav Ovsiienko
2020-10-12 16:38     ` Andrew Rybchenko
2020-10-12 17:03       ` Thomas Monjalon
2020-10-12 17:11         ` Andrew Rybchenko
2020-10-12 20:22           ` Slava Ovsiienko
2020-10-12 17:11         ` Slava Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 2/9] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 3/9] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 4/9] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 5/9] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 6/9] net/mlx5: add extended Rx queue setup routine Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 7/9] net/mlx5: configure Rx queue to support split Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 8/9] net/mlx5: register multiple pool for Rx queue Viacheslav Ovsiienko
2020-10-12 16:19   ` [dpdk-dev] [PATCH v3 9/9] net/mlx5: update Rx datapath to support split Viacheslav Ovsiienko
2020-10-12 20:09 ` [dpdk-dev] [PATCH v4 0/9] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 1/9] " Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 2/9] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 3/9] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 4/9] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 5/9] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-12 20:09   ` [dpdk-dev] [PATCH v4 6/9] net/mlx5: add extended Rx queue setup routine Viacheslav Ovsiienko
2020-10-12 20:10   ` [dpdk-dev] [PATCH v4 7/9] net/mlx5: configure Rx queue to support split Viacheslav Ovsiienko
2020-10-12 20:10   ` [dpdk-dev] [PATCH v4 8/9] net/mlx5: register multiple pool for Rx queue Viacheslav Ovsiienko
2020-10-12 20:10   ` [dpdk-dev] [PATCH v4 9/9] net/mlx5: update Rx datapath to support split Viacheslav Ovsiienko
2020-10-13 19:21 ` [dpdk-dev] [PATCH v5 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 1/6] " Viacheslav Ovsiienko
2020-10-13 22:34     ` Ferruh Yigit
2020-10-14 13:31       ` Olivier Matz [this message]
2020-10-14 14:42       ` Slava Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-13 19:21   ` [dpdk-dev] [PATCH v5 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-14 18:11 ` [dpdk-dev] [PATCH v6 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-14 18:11   ` [dpdk-dev] [PATCH v6 1/6] " Viacheslav Ovsiienko
2020-10-14 18:57     ` Jerin Jacob
2020-10-15  7:43       ` Slava Ovsiienko
2020-10-15  9:27         ` Jerin Jacob
2020-10-15 10:27           ` Jerin Jacob
2020-10-15 10:51             ` Slava Ovsiienko
2020-10-15 11:26               ` Jerin Jacob
2020-10-15 11:36                 ` Ferruh Yigit
2020-10-15 11:49                   ` Slava Ovsiienko
2020-10-15 12:49                     ` Thomas Monjalon
2020-10-15 13:07                       ` Andrew Rybchenko
2020-10-15 13:57                         ` Slava Ovsiienko
2020-10-15 20:22                         ` Slava Ovsiienko
2020-10-15  9:49         ` Andrew Rybchenko
2020-10-15 10:34           ` Slava Ovsiienko
2020-10-15 11:09             ` Andrew Rybchenko
2020-10-15 14:39               ` Slava Ovsiienko
2020-10-14 22:13     ` Thomas Monjalon
2020-10-14 22:50     ` Ajit Khaparde
2020-10-15 10:11     ` Andrew Rybchenko
2020-10-15 10:19       ` Thomas Monjalon
2020-10-14 18:11   ` [dpdk-dev] [PATCH v6 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-14 18:11   ` [dpdk-dev] [PATCH v6 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-14 18:12   ` [dpdk-dev] [PATCH v6 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-14 18:12   ` [dpdk-dev] [PATCH v6 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-14 18:12   ` [dpdk-dev] [PATCH v6 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-15  0:55 ` [dpdk-dev] [PATCH v2] eal/rte_malloc: add alloc_size() attribute to allocation functions Stephen Hemminger
2020-10-19 14:13   ` Thomas Monjalon
2020-10-19 14:22     ` Thomas Monjalon
2020-10-15 20:17 ` [dpdk-dev] [PATCH v7 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 1/6] " Viacheslav Ovsiienko
2020-10-15 20:30     ` Jerin Jacob
2020-10-15 20:33     ` Thomas Monjalon
2020-10-15 22:01       ` Ajit Khaparde
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-15 20:17   ` [dpdk-dev] [PATCH v7 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16  7:48 ` [dpdk-dev] [PATCH v8 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 1/6] " Viacheslav Ovsiienko
2020-10-16  8:51     ` Andrew Rybchenko
2020-10-16  8:58       ` Andrew Rybchenko
2020-10-16  9:15       ` Slava Ovsiienko
2020-10-16  9:27         ` Andrew Rybchenko
2020-10-16  9:34           ` Slava Ovsiienko
2020-10-16  9:37         ` Thomas Monjalon
2020-10-16  9:38           ` Slava Ovsiienko
2020-10-16  9:19     ` Ferruh Yigit
2020-10-16  9:21       ` Andrew Rybchenko
2020-10-16  9:22       ` Slava Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-16  7:48   ` [dpdk-dev] [PATCH v8 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16 10:22 ` [dpdk-dev] [PATCH v9 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 1/6] " Viacheslav Ovsiienko
2020-10-16 11:21     ` Ferruh Yigit
2020-10-16 13:08       ` Slava Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-16 10:22   ` [dpdk-dev] [PATCH v9 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16 12:41 ` [dpdk-dev] [PATCH v10 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 1/6] " Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-16 12:41   ` [dpdk-dev] [PATCH v10 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16 13:39 ` [dpdk-dev] [PATCH v11 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 1/6] " Viacheslav Ovsiienko
2020-10-16 15:14     ` Thomas Monjalon
2020-10-16 16:18       ` Slava Ovsiienko
2020-10-16 15:47     ` Ferruh Yigit
2020-10-16 16:05       ` Thomas Monjalon
2020-10-16 16:06         ` Ferruh Yigit
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-16 15:05     ` Ferruh Yigit
2020-10-16 15:38       ` Ferruh Yigit
2020-10-16 15:48         ` Slava Ovsiienko
2020-10-16 15:52           ` Ferruh Yigit
2020-10-16 15:55             ` Slava Ovsiienko
2020-10-16 15:57               ` Ferruh Yigit
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-16 13:39   ` [dpdk-dev] [PATCH v11 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16 16:44 ` [dpdk-dev] [PATCH v12 0/6] ethdev: introduce Rx buffer split Viacheslav Ovsiienko
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 1/6] " Viacheslav Ovsiienko
2020-10-16 19:22     ` Ferruh Yigit
2020-10-16 21:36       ` Ferruh Yigit
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 2/6] app/testpmd: add multiple pools per core creation Viacheslav Ovsiienko
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 3/6] app/testpmd: add buffer split offload configuration Viacheslav Ovsiienko
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 4/6] app/testpmd: add rxpkts commands and parameters Viacheslav Ovsiienko
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 5/6] app/testpmd: add rxoffs " Viacheslav Ovsiienko
2020-10-16 16:44   ` [dpdk-dev] [PATCH v12 6/6] app/testpmd: add extended Rx queue setup Viacheslav Ovsiienko
2020-10-16 17:05   ` [dpdk-dev] [PATCH v12 0/6] ethdev: introduce Rx buffer split Ferruh Yigit
2020-10-16 17:07     ` Slava Ovsiienko
