From: Thomas Monjalon
To: Viacheslav Ovsiienko
Cc: dev@dpdk.org, stephen@networkplumber.org, ferruh.yigit@intel.com,
 olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com,
 david.marchand@redhat.com, arybchenko@solarflare.com
Date: Fri, 16 Oct 2020 17:14:06 +0200
Message-ID: <17810755.ypUCan5TCX@thomas>
In-Reply-To: <1602855582-15847-2-git-send-email-viacheslavo@nvidia.com>
References: <1602855582-15847-1-git-send-email-viacheslavo@nvidia.com>
 <1602855582-15847-2-git-send-email-viacheslavo@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v11 1/6] ethdev: introduce Rx buffer split
16/10/2020 15:39, Viacheslav Ovsiienko:
> +	if (offset != 0) {
> +		if (seg_capa->offset_allowed == 0) {
> +			RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n");
> +			return -ENOTSUP;
> +		}
> +		if (offset & offset_mask) {
> +			RTE_ETHDEV_LOG(ERR, "Rx segmentat invalid offset alignment %u, %u\n",

small typo: segmentat -> segmentation

[...]

>  /**
> + * A structure used to configure an Rx packet segment to split.
> + *
> + * If RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag is set in offloads field,
> + * the PMD will split the received packets into multiple segments
> + * according to the specification in the description array:
> + *

Minor detail: below bullets should start with a capital letter.

> + * - the first network buffer will be allocated from the memory pool,
> + *   specified in the first array element, the second buffer, from the
> + *   pool in the second element, and so on.
> + *
> + * - the offsets from the segment description elements specify
> + *   the data offset from the buffer beginning except the first mbuf.
> + *   For this one the offset is added with RTE_PKTMBUF_HEADROOM.

"this one" -> "the first segment, "

> + *
> + * - the lengths in the elements define the maximal data amount
> + *   being received to each segment. The receiving starts with filling
> + *   up the first mbuf data buffer up to specified length. If the
> + *   there are data remaining (packet is longer than buffer in the first
> + *   mbuf) the following data will be pushed to the next segment
> + *   up to its own length, and so on.
> + *
> + * - If the length in the segment description element is zero
> + *   the actual buffer size will be deduced from the appropriate
> + *   memory pool properties.
> + *
> + * - if there is not enough elements to describe the buffer for entire
> + *   packet of maximal length the following parameters will be used
> + *   for the all remaining segments:
> + *     - pool from the last valid element
> + *     - the buffer size from this pool
> + *     - zero offset
> + */

I think the description is precise enough, thanks.

> +__rte_experimental
> +struct rte_eth_rxseg_split {
> +	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
> +	uint16_t length; /**< Segment data length, configures split point. */
> +	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> +	uint32_t reserved; /**< Reserved field. */
> +};
> +
> +/**
> + * A common structure used to describe Rx packet segment properties.
> + */
> +__rte_experimental
> +union rte_eth_rxseg {
> +	/* The settings for buffer split offload. */
> +	struct rte_eth_rxseg_split split;
> +	/* The other features settings should be added here. */
> +};

OK for having a union.
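As an illustration only (not part of the patch): a minimal sketch of how an
application could fill these descriptors, assuming the structures exactly as
proposed above. The two mempools, the helper name and the 64-byte split point
are arbitrary placeholders; pool creation and capability checks are omitted.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/*
 * Describe a two-segment split: the first 64 bytes of each packet land in a
 * small "header" pool, the remainder in a large "payload" pool.
 */
static void
fill_rx_segs(union rte_eth_rxseg seg[2],
	     struct rte_mempool *hdr_pool, struct rte_mempool *pay_pool)
{
	memset(seg, 0, 2 * sizeof(seg[0]));

	seg[0].split.mp = hdr_pool;
	seg[0].split.length = 64; /* split point: first 64 bytes of data */
	seg[0].split.offset = 0;  /* RTE_PKTMBUF_HEADROOM is added for segment 0 */

	seg[1].split.mp = pay_pool;
	seg[1].split.length = 0;  /* 0: buffer size is deduced from pay_pool */
	seg[1].split.offset = 0;
}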
> +
> +/**
>  * A structure used to configure an RX ring of an Ethernet port.
>  */
>  struct rte_eth_rxconf {
> @@ -977,12 +1028,22 @@ struct rte_eth_rxconf {
>  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
>  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
>  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
>  	/**
>  	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
>  	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
>  	 * fields on rte_eth_dev_info structure are allowed to be set.
>  	 */
>  	uint64_t offloads;
> +	/**
> +	 * Points to the array of segment descriptions. Each array element
> +	 * describes the properties for each segment in the receiving
> +	 * buffer according to feature descripting structure.

Alternative (shorter) wording:
	Points to the array of segment descriptions for an entire packet.
	Array elements are properties for consecutive Rx segments.

> +	 *
> +	 * The supported capabilities of receiving segmentation is reported
> +	 * in rte_eth_dev_info ->rx_seg_capa field.

	Rx segmentation capabilities are reported in rte_eth_dev_info.rx_seg_capa.

> +	 */
> +	union rte_eth_rxseg *rx_seg;

[...]

> + * The configuration structure also contains the pointer to the array
> + * of the receiving buffer segment descriptions, see rx_seg and rx_nseg
> + * fields, this extended configuration might be used by split offloads like
> + * RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT. If mp_pool is not NULL,
> + * the extended configuration fields must be set to NULL and zero.
>  * @param mb_pool
>  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
> - *   memory buffers to populate each descriptor of the receive ring.
> + *   memory buffers to populate each descriptor of the receive ring. There are
> + *   two options to provide Rx buffer configuration:
> + *   - single pool:
> + *     mb_pool is not NULL, rx_conf.rx_seg is NULL, rx_conf.rx_nseg is 0.
> + *   - multiple segments description:
> + *     mb_pool is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not 0.
> + *     Taken only if flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set in offloads.

As said previously, I feel rx_conf.rx_seg does not need to be part of
the requirements, as long as rx_conf.rx_nseg is defined.

All my comments are optional, this version is already very good.

Acked-by: Thomas Monjalon
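As an aside, a minimal sketch of the two rte_eth_rx_queue_setup() options
described above (single pool vs. buffer split), assuming the API as proposed
in this patch. The helper name, the 512-descriptor ring size and the pool
arguments are placeholders, and error handling is kept to a minimum.

#include <rte_ethdev.h>

static int
setup_rx_queue(uint16_t port_id, uint16_t queue_id,
	       struct rte_mempool *single_pool,
	       union rte_eth_rxseg *segs, uint16_t nb_segs)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rxconf rxconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	rxconf = info.default_rxconf;

	if (segs == NULL) {
		/* Option 1: single pool; rx_seg/rx_nseg stay NULL/0. */
		return rte_eth_rx_queue_setup(port_id, queue_id, 512,
					      rte_eth_dev_socket_id(port_id),
					      &rxconf, single_pool);
	}

	/* Option 2: buffer split; mb_pool must be NULL. */
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
	rxconf.rx_seg = segs;
	rxconf.rx_nseg = nb_segs;
	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, NULL);
}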