From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3d246777-a86b-ebc5-8288-86de89568536@oktetlabs.ru>
Date: Sat, 4 Jun 2022 17:25:54 +0300
Subject: Re: [PATCH v8 1/3] ethdev: introduce protocol hdr based buffer split
To: "Ding, Xuan", "Wu, WenxuanX", "thomas@monjalon.net", "Li, Xiaoyun",
 "ferruh.yigit@xilinx.com", "Singh, Aman Deep", "dev@dpdk.org",
 "Zhang, Yuying", "Zhang, Qi Z", "jerinjacobk@gmail.com"
Cc: "stephen@networkplumber.org", "Wang, YuanX", Ray Kinsella
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220601135059.958882-1-wenxuanx.wu@intel.com>
 <20220601135059.958882-2-wenxuanx.wu@intel.com>
From: Andrew Rybchenko

On 6/3/22 19:30, Ding, Xuan wrote:
> Hi Andrew,
>
>> -----Original Message-----
>> From: Andrew Rybchenko
>> Sent: Thursday, June 2, 2022 9:21 PM
>> To: Wu, WenxuanX; thomas@monjalon.net; Li, Xiaoyun;
>> ferruh.yigit@xilinx.com; Singh, Aman Deep; dev@dpdk.org; Zhang, Yuying;
>> Zhang, Qi Z; jerinjacobk@gmail.com
>> Cc: stephen@networkplumber.org; Ding, Xuan; Wang, YuanX; Ray Kinsella
>> Subject: Re: [PATCH v8 1/3] ethdev: introduce protocol hdr based buffer split
>>
>> Is it the right one since it is listed in patchwork?
>
> Yes, it is.
>
>>
>> On 6/1/22 16:50, wenxuanx.wu@intel.com wrote:
>>> From: Wenxuan Wu
>>>
>>> Currently, Rx buffer split supports length based split. With the Rx queue
>>> offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segments
>>> configured, the PMD will be able to split the received packets into
>>> multiple segments.
>>>
>>> However, length based buffer split is not suitable for NICs that do
>>> split based on protocol headers. Given a arbitrarily variable length
>>> in Rx packet
>>
>> a -> an
>
> Thanks for your catch, will fix it in the next version.
>
>>
>>> segment, it is almost impossible to pass a fixed protocol header to the PMD.
>>> Besides, the existence of tunneling means the composition of a
>>> packet varies, which makes the situation even worse.
>>>
>>> This patch extends the current buffer split to support protocol header
>>> based buffer split. A new proto_hdr field is introduced in the
>>> reserved field of the rte_eth_rxseg_split structure to specify the protocol
>>> header.
>>> The proto_hdr field defines the split position of a packet:
>>> splitting will always happen after the protocol header defined in the
>>> Rx packet segment. When the Rx queue offload
>>> RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and a corresponding protocol
>>> header is configured, the PMD will split the ingress packets into
>>> multiple segments.
>>>
>>> struct rte_eth_rxseg_split {
>>>     struct rte_mempool *mp; /* memory pool to allocate segment from */
>>>     uint16_t length;        /* segment maximal data length,
>>>                                configures "split point" */
>>>     uint16_t offset;        /* data offset from beginning
>>>                                of mbuf data buffer */
>>>     uint32_t proto_hdr;     /* inner/outer L2/L3/L4 protocol header,
>>>                                configures "split point" */
>>> };
>>>
>>> Both inner and outer L2/L3/L4 level protocol header split can be supported.
>>> The corresponding protocol header capabilities are RTE_PTYPE_L2_ETHER,
>>> RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV6, RTE_PTYPE_L4_TCP,
>>> RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_INNER_L2_ETHER,
>>> RTE_PTYPE_INNER_L3_IPV4, RTE_PTYPE_INNER_L3_IPV6,
>>> RTE_PTYPE_INNER_L4_TCP, RTE_PTYPE_INNER_L4_UDP, RTE_PTYPE_INNER_L4_SCTP.
>>>
>>> For example, let's suppose we configured the Rx queue with the
>>> following segments:
>>>     seg0 - pool0, proto_hdr0=RTE_PTYPE_L3_IPV4, off0=2B
>>>     seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
>>>     seg2 - pool2, off2=0B
>>>
>>> A packet consisting of MAC_IPV4_UDP_PAYLOAD will be split as follows:
>>>     seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
>>>     seg1 - udp header @ 128 in mbuf from pool1
>>>     seg2 - payload @ 0 in mbuf from pool2
>>
>> It must be defined how ICMPv4 packets will be split in such case.
>> And how UDP over IPv6 will be split.
>
> The ICMP header type is missing. I will define the expected split behavior
> and add it in the next version, thanks for your catch.
>
> In fact, the buffer split based on protocol header depends on the driver parsing result.
> As long as the driver can recognize this packet type, I think there is no
> difference between UDP over IPv4 and UDP over IPv6?

We can bind it to ptypes recognized by the HW+driver, but I can easily
imagine the case when HW has no means to report the recognized packet
type (i.e. ptype get returns an empty list), but could still split on it.

Also, nobody guarantees that there is no difference in UDP over IPv4 vs
IPv6 recognition and split. IPv6 could have a number of extension headers
which could be not that trivial to hop over in HW. So, HW could recognize
IPv6, but not the protocols after it. Also, it is a very interesting
question how to define protocol split for IPv6 plus extension headers.
Where to stop?

>
>>>
>>> Now buffer split can be configured in two modes. For length based
>>> buffer split, the mp, length and offset fields in the Rx packet segment
>>> should be configured, while the proto_hdr field should not be configured.
>>> For protocol header based buffer split, the mp, offset and proto_hdr
>>> fields in the Rx packet segment should be configured, while the length
>>> field should not be configured.
>>>
>>> The split limitations imposed by the underlying PMD are reported in the
>>> rte_eth_dev_info->rx_seg_capa field. The memory attributes of the
>>> split parts may differ as well: DPDK memory and external memory,
>>> respectively.
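[Editor's note: the two configuration modes described in the commit
message above can be sketched in standalone C. This is an illustrative
model only — the struct and the PTYPE_UNKNOWN sentinel are simplified
stand-ins for the real DPDK definitions, and rxseg_mode_check() mirrors
the stated rule ("either length or proto_hdr, never both"), not any
actual ethdev code.]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for the DPDK types under discussion. */
struct mempool;           /* placeholder for struct rte_mempool */
#define PTYPE_UNKNOWN 0u  /* placeholder for RTE_PTYPE_UNKNOWN */

struct rxseg_split {
	struct mempool *mp; /* memory pool to allocate segment from */
	uint16_t length;    /* length based mode: fixed "split point" */
	uint16_t offset;    /* data offset from beginning of mbuf buffer */
	uint32_t proto_hdr; /* proto based mode: header to split after */
};

/* Enforce the rule from the commit message: a segment is either
 * length based (proto_hdr unset) or protocol header based (length
 * unset), never both. Returns 0 on success, -1 on a mixed config. */
static int
rxseg_mode_check(const struct rxseg_split *seg)
{
	if (seg->proto_hdr == PTYPE_UNKNOWN)
		return 0;                  /* length based split */
	return seg->length == 0 ? 0 : -1; /* proto header based split */
}
```

A segment with both length and proto_hdr set is rejected, which is the
ambiguity the two-mode description is trying to rule out.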
>>>
>>> Signed-off-by: Xuan Ding
>>> Signed-off-by: Yuan Wang
>>> Signed-off-by: Wenxuan Wu
>>> Reviewed-by: Qi Zhang
>>> Acked-by: Ray Kinsella
>>> ---
>>>  lib/ethdev/rte_ethdev.c | 40 +++++++++++++++++++++++++++++++++-------
>>>  lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
>>>  2 files changed, 60 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 29a3d80466..fbd55cdd9d 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -1661,6 +1661,7 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
>>>  		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
>>>  		uint32_t length = rx_seg[seg_idx].length;
>>>  		uint32_t offset = rx_seg[seg_idx].offset;
>>> +		uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr;
>>>
>>>  		if (mpl == NULL) {
>>>  			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
>>> @@ -1694,13 +1695,38 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
>>>  		}
>>>  		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
>>>  		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
>>> -		length = length != 0 ? length : *mbp_buf_size;
>>> -		if (*mbp_buf_size < length + offset) {
>>> -			RTE_ETHDEV_LOG(ERR,
>>> -				"%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
>>> -				mpl->name, *mbp_buf_size,
>>> -				length + offset, length, offset);
>>> -			return -EINVAL;
>>> +		if (proto_hdr == RTE_PTYPE_UNKNOWN) {
>>> +			/* Split at fixed length. */
>>> +			length = length != 0 ? length : *mbp_buf_size;
>>> +			if (*mbp_buf_size < length + offset) {
>>> +				RTE_ETHDEV_LOG(ERR,
>>> +					"%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
>>> +					mpl->name, *mbp_buf_size,
>>> +					length + offset, length, offset);
>>> +				return -EINVAL;
>>> +			}
>>> +		} else {
>>> +			/* Split after specified protocol header. */
>>> +			if (!(proto_hdr & RTE_BUFFER_SPLIT_PROTO_HDR_MASK)) {
>>
>> The condition looks suspicious.
>> It will be true if proto_hdr has no single bit
>> from the mask. I guess it is not the intent.
>
> Actually it is the intent... Here the mask is used to check if proto_hdr
> belongs to the inner/outer L2/L3/L4 capability we defined. And which
> proto_hdr is supported by the NIC will be checked in the PMD later.

Frankly speaking, I see no value in such an incomplete check if we still
rely on the driver. I simply see no reason to oblige the driver to
support one of these protocols.

>
>> I guess the condition should be
>>     proto_hdr & ~RTE_BUFFER_SPLIT_PROTO_HDR_MASK
>> i.e. there are unsupported bits in proto_hdr.
>>
>> IMHO we need an extra field in dev_info to report supported protocols
>> to split on. Or a new API to get an array, similar to ptype get.
>> Maybe a new API is a better choice, to not overload dev_info and to be
>> more flexible in reporting.
>
> Thanks for your suggestion.
> Here I hope to confirm the intent of the dev_info field or the API to
> expose the supported proto_hdr of the driver.
> Is it for the proto_hdr check in rte_eth_rx_queue_check_split()?
> If so, could we just check whether the proto_hdrs configured belong to
> L2/L3/L4 in the lib, and check the capability in the PMD? This is what
> the current design does.

Look. The application needs to know what to expect from the eth device.
It should know which protocols it can split on. Of course we can force
the application to use a try-fail approach, which would make sense if we
had a dedicated API to request Rx buffer split, but since it is done via
Rx queue configuration, it could be tricky for the application to realize
which part of the configuration is wrong. It could simply result in too
many retries with different configurations.

I.e. the information should be used by ethdev to validate the request,
and the information should be used by the application to understand what
is supported.

>
> Actually I have another question: do we need an API or a dev_info field
> to expose which buffer split the driver supports,
> i.e. length based or proto_hdr based.
> Because it requires different fields to be configured
> in the Rx packet segment.

See above. If a dedicated API returns -ENOTSUP or an empty set of
supported protocols to split on, the answer is clear.

>
> Hope to get your insights. :)
>
>>
>>> +				RTE_ETHDEV_LOG(ERR,
>>> +					"Protocol header %u not supported)\n",
>>> +					proto_hdr);
>>
>> I think it would be useful to log unsupported bits only, if we say so.
>
> The same as above.
> Thanks again for your time.
>
> Regards,
> Xuan
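[Editor's note: the difference between the patch's condition and the
reviewer's suggested condition can be made concrete with a standalone
sketch. The mask and bit values here are invented for illustration only;
they are not the real RTE_BUFFER_SPLIT_PROTO_HDR_MASK or RTE_PTYPE_*
values from the patch under review.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical values for illustration only. */
#define PROTO_HDR_MASK  0x000000ffu /* bits the API claims to support */
#define SUPPORTED_BIT   0x00000001u /* some bit inside the mask */
#define UNSUPPORTED_BIT 0x00000100u /* a bit outside the mask */

/* Check from the patch: accepts as long as *some* supported bit is set,
 * so unsupported bits can slip through alongside a supported one. */
static int patch_check_ok(uint32_t proto_hdr)
{
	return (proto_hdr & PROTO_HDR_MASK) != 0;
}

/* Check suggested in the review: rejects whenever *any* unsupported bit
 * is set, which also catches mixed valid+invalid requests. */
static int review_check_ok(uint32_t proto_hdr)
{
	return (proto_hdr & ~PROTO_HDR_MASK) == 0;
}
```

A request combining a supported and an unsupported bit passes the first
check but fails the second, which is exactly the gap the review points
out.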