* [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info @ 2020-06-23 6:48 Chengchang Tang 2020-06-23 9:30 ` Andrew Rybchenko ` (2 more replies) 0 siblings, 3 replies; 16+ messages in thread From: Chengchang Tang @ 2020-06-23 6:48 UTC (permalink / raw) To: dev; +Cc: linuxarm, thomas, ferruh.yigit, arybchenko In common practice, the PMD configures the rx_buf_size according to the data room size of the objects in the mempool. But in fact the final value is related to the hardware specifications, and its value will affect the number of fragments in received packets. At present, we seem to have no way to expose the relevant information to upper layer users. Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer size the hardware uses when receiving packets. Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> --- lib/librte_ethdev/rte_ethdev.h | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h index 0f6d053..82b7e98 100644 --- a/lib/librte_ethdev/rte_ethdev.h +++ b/lib/librte_ethdev/rte_ethdev.h @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { struct rte_eth_rxconf conf; /**< queue config parameters. */ uint8_t scattered_rx; /**< scattered packets RX supported. */ uint16_t nb_desc; /**< configured number of RXDs. */ + uint16_t rx_bufsize; /**< size of RX buffer. */ } __rte_cache_min_aligned; /** -- 2.7.4 ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-23 6:48 [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info Chengchang Tang @ 2020-06-23 9:30 ` Andrew Rybchenko 2020-06-24 3:48 ` Chengchang Tang 2020-06-23 14:48 ` Stephen Hemminger 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang 2 siblings, 1 reply; 16+ messages in thread From: Andrew Rybchenko @ 2020-06-23 9:30 UTC (permalink / raw) To: Chengchang Tang, dev; +Cc: linuxarm, thomas, ferruh.yigit On 6/23/20 9:48 AM, Chengchang Tang wrote: > In common practice, PMD configure the rx_buf_size according to the data > room size of the object in mempool. But in fact the final value is related > to the specifications of hw, and its values will affect the number of > fragments in recieving pkts. > > At present, we seem to have no way to espose relevant information to upper > layer users. > > Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer > size used in recieving pkts for hw. > I'm OK with the change in general. I'm unsure which name to use: 'rx_buf_size' or 'rx_bufsize', since I found both 'min_rx_buf_size' and 'min_rx_bufsize' in ethdev. I think it is important to update PMDs which provide the information to fill the field in. > Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> > --- > lib/librte_ethdev/rte_ethdev.h | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h > index 0f6d053..82b7e98 100644 > --- a/lib/librte_ethdev/rte_ethdev.h > +++ b/lib/librte_ethdev/rte_ethdev.h > @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { > struct rte_eth_rxconf conf; /**< queue config parameters. */ > uint8_t scattered_rx; /**< scattered packets RX supported. */ > uint16_t nb_desc; /**< configured number of RXDs. */ > + uint16_t rx_bufsize; /**< size of RX buffer. 
*/ > } __rte_cache_min_aligned; > > /** > -- > 2.7.4 > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-23 9:30 ` Andrew Rybchenko @ 2020-06-24 3:48 ` Chengchang Tang 2020-06-24 8:52 ` Ferruh Yigit 0 siblings, 1 reply; 16+ messages in thread From: Chengchang Tang @ 2020-06-24 3:48 UTC (permalink / raw) To: Andrew Rybchenko, dev; +Cc: linuxarm, thomas, ferruh.yigit On 2020/6/23 17:30, Andrew Rybchenko wrote: > On 6/23/20 9:48 AM, Chengchang Tang wrote: >> In common practice, PMD configure the rx_buf_size according to the data >> room size of the object in mempool. But in fact the final value is related >> to the specifications of hw, and its values will affect the number of >> fragments in recieving pkts. >> >> At present, we seem to have no way to espose relevant information to upper >> layer users. >> >> Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer >> size used in recieving pkts for hw. >> > > I'm OK with the change in general. > I'm unsure which name to use: 'rx_buf_size' or 'rx_bursize', > since I found both 'min_rx_buf_size' and 'min_rx_bufsize' in > ethdev. > > I think it is important to update PMDs which provides the > information to fill the field in. My plan is to divide the subsequent series into two patches, one to modify rte_eth_rxq_info, and one to add our hns3 PMD implementation of rxq_info_get. Should i update all the PMDs that provide this information and test programs such as testpmd at the same time? > >> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> > > Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> > >> --- >> lib/librte_ethdev/rte_ethdev.h | 1 + >> 1 file changed, 1 insertion(+) >> >> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >> index 0f6d053..82b7e98 100644 >> --- a/lib/librte_ethdev/rte_ethdev.h >> +++ b/lib/librte_ethdev/rte_ethdev.h >> @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { >> struct rte_eth_rxconf conf; /**< queue config parameters. 
*/ >> uint8_t scattered_rx; /**< scattered packets RX supported. */ >> uint16_t nb_desc; /**< configured number of RXDs. */ >> + uint16_t rx_bufsize; /**< size of RX buffer. */ >> } __rte_cache_min_aligned; >> >> /** >> -- >> 2.7.4 >> > > > . > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-24 3:48 ` Chengchang Tang @ 2020-06-24 8:52 ` Ferruh Yigit 2020-06-24 18:32 ` Ferruh Yigit 0 siblings, 1 reply; 16+ messages in thread From: Ferruh Yigit @ 2020-06-24 8:52 UTC (permalink / raw) To: Chengchang Tang, Andrew Rybchenko, dev; +Cc: linuxarm, thomas On 6/24/2020 4:48 AM, Chengchang Tang wrote: > > On 2020/6/23 17:30, Andrew Rybchenko wrote: >> On 6/23/20 9:48 AM, Chengchang Tang wrote: >>> In common practice, PMD configure the rx_buf_size according to the data >>> room size of the object in mempool. But in fact the final value is related >>> to the specifications of hw, and its values will affect the number of >>> fragments in recieving pkts. >>> >>> At present, we seem to have no way to espose relevant information to upper >>> layer users. >>> >>> Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer >>> size used in recieving pkts for hw. >>> >> >> I'm OK with the change in general. >> I'm unsure which name to use: 'rx_buf_size' or 'rx_bursize', >> since I found both 'min_rx_buf_size' and 'min_rx_bufsize' in >> ethdev. >> >> I think it is important to update PMDs which provides the >> information to fill the field in. > > My plan is to divide the subsequent series into two patches, > one to modify rte_eth_rxq_info, and one to add our hns3 PMD > implementation of rxq_info_get. Should i update all the PMDs > that provide this information and test programs such as > testpmd at the same time? Hi Chengchang, Andrew, No objection to the change, but it should be crystal clear what is added. These are for PMD developers to implement and when it is not clear we end up having different implementations and inconsistencies. There is already some confusion for the Rx packet size etc.. my concern is adding more to it, here all we have is "size of RX buffer." comment, I think we need more. 
Adding a PMD implementation and testpmd updates helps to clarify the intention/usage, so I suggest sending them as a single patch with this one. Updating all PMDs is a bigger ask and sometimes too hard because of lack of knowledge on the internals of other PMDs; although this is causing feature gaps from time to time, we are not mandating this to developers. So please update as many PMDs as you can, the ones you are confident with; the rest should be done by their maintainers. >> >>> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> >> >> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> >> >>> --- >>> lib/librte_ethdev/rte_ethdev.h | 1 + >>> 1 file changed, 1 insertion(+) >>> >>> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >>> index 0f6d053..82b7e98 100644 >>> --- a/lib/librte_ethdev/rte_ethdev.h >>> +++ b/lib/librte_ethdev/rte_ethdev.h >>> @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { >>> struct rte_eth_rxconf conf; /**< queue config parameters. */ >>> uint8_t scattered_rx; /**< scattered packets RX supported. */ >>> uint16_t nb_desc; /**< configured number of RXDs. */ >>> + uint16_t rx_bufsize; /**< size of RX buffer. */ >>> } __rte_cache_min_aligned; >>> >>> /** >>> -- >>> 2.7.4 >>> >> >> >> . >> > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-24 8:52 ` Ferruh Yigit @ 2020-06-24 18:32 ` Ferruh Yigit 2020-06-25 9:06 ` Andrew Rybchenko 0 siblings, 1 reply; 16+ messages in thread From: Ferruh Yigit @ 2020-06-24 18:32 UTC (permalink / raw) To: Chengchang Tang, Andrew Rybchenko, dev Cc: linuxarm, thomas, Olivier MATZ, Matan Azrad, Jerin Jacob, Shy Shyman, Qi Zhang, Konstantin Ananyev On 6/24/2020 9:52 AM, Ferruh Yigit wrote: > On 6/24/2020 4:48 AM, Chengchang Tang wrote: >> >> On 2020/6/23 17:30, Andrew Rybchenko wrote: >>> On 6/23/20 9:48 AM, Chengchang Tang wrote: >>>> In common practice, PMD configure the rx_buf_size according to the data >>>> room size of the object in mempool. But in fact the final value is related >>>> to the specifications of hw, and its values will affect the number of >>>> fragments in recieving pkts. >>>> >>>> At present, we seem to have no way to espose relevant information to upper >>>> layer users. >>>> >>>> Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer >>>> size used in recieving pkts for hw. >>>> >>> >>> I'm OK with the change in general. >>> I'm unsure which name to use: 'rx_buf_size' or 'rx_bursize', >>> since I found both 'min_rx_buf_size' and 'min_rx_bufsize' in >>> ethdev. >>> >>> I think it is important to update PMDs which provides the >>> information to fill the field in. >> >> My plan is to divide the subsequent series into two patches, >> one to modify rte_eth_rxq_info, and one to add our hns3 PMD >> implementation of rxq_info_get. Should i update all the PMDs >> that provide this information and test programs such as >> testpmd at the same time? > > Hi Chengchang, Andrew, > > No objection to the change, but it should be crystal clear what is added. These > are for PMD developers to implement and when it is not clear we end up having > different implementations and inconsistencies. > > There is already some confusion for the Rx packet size etc.. 
my concern is > adding more to it, here all we have is "size of RX buffer." comment, I think we > need more. cc'ed a few more people. Back to this favorite topic, how to configure/limit the packet size. Can you please help to reach a common/correct understanding? I tried to clarify as much as I could; any comment welcome. (I know it is long, please bear with me) The related config options I can see: 1) rte_eth_conf->rxmode->max_rx_pkt_len 2) rte_eth_dev_info->max_rx_pktlen 3) DEV_RX_OFFLOAD_JUMBO_FRAME 4) rte_eth_dev->data->mtu 5) DEV_RX_OFFLOAD_SCATTER 6) dev->data->scattered_rx 7) rte_eth_dev_info->min_mtu, rte_eth_dev_info->max_mtu 'mtu' (4): Both for Tx and Rx. The network layer payload length. Default value 'RTE_ETHER_MTU'. 'max_rx_pkt_len' (1): Only for Rx, the maximum Rx frame length configured by the application. 'max_rx_pktlen' (2): Device-reported value of the maximum Rx frame length it can receive. The application shouldn't set the Rx frame length above this value. 'DEV_RX_OFFLOAD_JUMBO_FRAME' (3): Device Jumbo Frame capability. When not enabled, the Rx frame length is 'MTU' + overhead. When enabled, the Rx frame length is 'max_rx_pkt_len'. 'DEV_RX_OFFLOAD_SCATTER' (5): Capability of the device to scatter a packet across multiple descriptors, with the driver converting this to a chained mbuf. 'dev->data->scattered_rx' (6): The current status of driver scattered Rx, in device data, mostly for PMD internal usage. 'rte_eth_dev_info->min_mtu' & 'rte_eth_dev_info->max_mtu' (7): the minimum and maximum MTU values the device supports. 'max_mtu' == 'max_rx_pkt_len' - L2_OVERHEAD. I can see two different limits: a) The Rx frame length limit that the device can receive from the wire. Any packet larger than this size will be dropped by the device at an early stage. b) The Rx buffer length limit that received packets are written to. 
Otherwise the configuration has to be (b) >= (a). For example, suppose the mbuf size is 2Kb and the device can receive up to 9000 bytes. The options are: - If the device supports it, a large packet will be scattered across multiple mbufs - or configure the device Rx frame length to 2K (mbuf size) - or allocate mbufs big enough to hold the largest possible packet (9000) Issues I see: ------------- i) Although the code clearly says 'max_rx_pkt_len' is only valid when jumbo frames are enabled, some drivers always take it into account. ii) Some drivers enable 'scattered_rx' & 'jumbo frame' implicitly, without 'DEV_RX_OFFLOAD_JUMBO_FRAME' or 'DEV_RX_OFFLOAD_SCATTER' being requested by the application. iii) Having both 'mtu' & 'max_rx_pkt_len' is confusing; although they are not exactly the same thing, they are related. The difference is that MTU applies to Tx too, and the L2 network layer overhead is not included. 'MTU' is of more interest to upper layers; 'max_rx_pkt_len' is more driver-level information. And the driver should know how to convert one to the other. iv) 'max_rx_pkt_len' is provided as part of 'rte_eth_dev_configure()' and there is no API to update it later. 'mtu' is not part of 'rte_eth_dev_configure()'; it can only be updated later with a specific API. But the driver has to keep these two values consistent. v) The 'max_rx_pktlen' & 'max_mtu' reports from the driver are redundant information. Again they are not the same thing, but correlated. Suggested changes: ----------------- Overall, unify 'max_rx_pkt_len' & 'mtu' as much as possible; as a first step: i) Make 'max_rx_pkt_len' independent from 'DEV_RX_OFFLOAD_JUMBO_FRAME', so the 'max_rx_pkt_len' value will always be valid, jumbo frame enabled or not. ii) In '.dev_configure', convert the 'max_rx_pkt_len' value to an 'mtu' value; this will be the only point where 'max_rx_pkt_len' is used, and after that point the PMD will always use the 'mtu' value. Don't reflect 'rte_eth_dev_set_mtu()' changes back to 'max_rx_pkt_len' anymore. 
iii) Don't make 'max_rx_pkt_len' a mandatory config option; let the application leave it as '0', in which case 'rte_eth_dev_configure()' will set "'max_rx_pkt_len' = RTE_ETHER_MAX_LEN" if 'DEV_RX_OFFLOAD_JUMBO_FRAME' is disabled, "'max_rx_pkt_len' = 9000" if 'DEV_RX_OFFLOAD_JUMBO_FRAME' is enabled. iv) Allow implicit update of 'DEV_RX_OFFLOAD_JUMBO_FRAME' on MTU set, since setting a large MTU implies the jumbo frame request. And there is no harm to the application. v) Do NOT allow implicit update of 'DEV_RX_OFFLOAD_SCATTER' on MTU set (when Rx frame length > Rx buffer length), since the application may not be capable of parsing chained mbufs. Instead, fail the MTU set in that case. [This can break some applications relying on this implicit set.] Any comments? Additional details: ------------------- Behavior of some drivers: What igb & ixgbe do: - Set the Rx frame limit (a) using 'max_rx_pkt_len' (1) - Set the Rx buffer limit (b) using the mbuf data size - Enable Scattered Rx (5 & 6) if the Rx frame limit (a) is bigger than the Rx buffer limit (b) (even if the user did not request it) i40e does the same as above; the only difference: - Return an error if jumbo frame is enabled and 'max_rx_pkt_len' < RTE_ETHER_MAX_LEN sfc: - Set the Rx frame limit (a) - using 'max_rx_pkt_len' (1) when jumbo frame is enabled - using 'mtu' when jumbo frame is not enabled. - Set the Rx buffer limit (b) using the mbuf data size - If the Rx frame limit (a) is bigger than the Rx buffer limit (b), and the user did not request 'DEV_RX_OFFLOAD_SCATTER', return an error. octeontx2: - Set the Rx frame limit (a) using 'max_rx_pkt_len' (1). Implicitly enable jumbo frame based on 'max_rx_pkt_len'. - I couldn't find how the Rx buffer limit (b) is set - Enable Scattered Rx (5) if the Rx frame limit (a) is bigger than the Rx buffer limit (b) (even if the user did not request it). 'dev->data->scattered_rx' is not set at all. > Adding a PMD implementation and testpmd updates helps to clarify the > intention/usage, so I suggest sending them as a single patch with this one. 
> > Updating all PMDs is a bigger ask and sometimes too hard because of lack of > knowledge on the internals of other PMDs, although this is causing feature gaps > time to time, we are not mandating this to developers, so please update as many > PMD as you can, that you are confident, rest should be done by their maintainers. > >>> >>>> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> >>> >>> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> >>> >>>> --- >>>> lib/librte_ethdev/rte_ethdev.h | 1 + >>>> 1 file changed, 1 insertion(+) >>>> >>>> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >>>> index 0f6d053..82b7e98 100644 >>>> --- a/lib/librte_ethdev/rte_ethdev.h >>>> +++ b/lib/librte_ethdev/rte_ethdev.h >>>> @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { >>>> struct rte_eth_rxconf conf; /**< queue config parameters. */ >>>> uint8_t scattered_rx; /**< scattered packets RX supported. */ >>>> uint16_t nb_desc; /**< configured number of RXDs. */ >>>> + uint16_t rx_bufsize; /**< size of RX buffer. */ >>>> } __rte_cache_min_aligned; >>>> >>>> /** >>>> -- >>>> 2.7.4 >>>> >>> >>> >>> . >>> >> > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-24 18:32 ` Ferruh Yigit @ 2020-06-25 9:06 ` Andrew Rybchenko 0 siblings, 0 replies; 16+ messages in thread From: Andrew Rybchenko @ 2020-06-25 9:06 UTC (permalink / raw) To: Ferruh Yigit, Chengchang Tang, dev Cc: linuxarm, thomas, Olivier MATZ, Matan Azrad, Jerin Jacob, Shy Shyman, Qi Zhang, Konstantin Ananyev On 6/24/20 9:32 PM, Ferruh Yigit wrote: > On 6/24/2020 9:52 AM, Ferruh Yigit wrote: >> On 6/24/2020 4:48 AM, Chengchang Tang wrote: >>> >>> On 2020/6/23 17:30, Andrew Rybchenko wrote: >>>> On 6/23/20 9:48 AM, Chengchang Tang wrote: >>>>> In common practice, PMD configure the rx_buf_size according to the data >>>>> room size of the object in mempool. But in fact the final value is related >>>>> to the specifications of hw, and its values will affect the number of >>>>> fragments in recieving pkts. >>>>> >>>>> At present, we seem to have no way to espose relevant information to upper >>>>> layer users. >>>>> >>>>> Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer >>>>> size used in recieving pkts for hw. >>>>> >>>> >>>> I'm OK with the change in general. >>>> I'm unsure which name to use: 'rx_buf_size' or 'rx_bursize', >>>> since I found both 'min_rx_buf_size' and 'min_rx_bufsize' in >>>> ethdev. >>>> >>>> I think it is important to update PMDs which provides the >>>> information to fill the field in. >>> >>> My plan is to divide the subsequent series into two patches, >>> one to modify rte_eth_rxq_info, and one to add our hns3 PMD >>> implementation of rxq_info_get. Should i update all the PMDs >>> that provide this information and test programs such as >>> testpmd at the same time? >> >> Hi Chengchang, Andrew, >> >> No objection to the change, but it should be crystal clear what is added. These >> are for PMD developers to implement and when it is not clear we end up having >> different implementations and inconsistencies. 
>> >> There is already some confusion for the Rx packet size etc.. my concern is >> adding more to it, here all we have is "size of RX buffer." comment, I think we >> need more. > > cc'ed a few more people. > > Back to this favorite topic, how to configure/limit the packet size. > > Can you please help to have a common/correct understanding? I tried to clarify > as much as I got it, any comment welcome. (I know it is long, please bare with me) > > > The related config options I can see, > 1) rte_eth_conf->rxmode->max_rx_pkt_len > 2) rte_eth_dev_info->max_rx_pktlen > 3) DEV_RX_OFFLOAD_JUMBO_FRAME > 4) rte_eth_dev->data->mtu > 5) DEV_RX_OFFLOAD_SCATTER > 6) dev->data->scattered_rx > 7) rte_eth_dev_info->min_mtu, rte_eth_dev_info->max_mtu > > 'mtu' (4): Both for Tx and Rx. The network layer payload length. Default value > 'RTE_ETHER_MTU'. > > 'max_rx_pkt_len' (1): Only for Rx, maximum Rx frame length configured by > application. > > 'max_rx_pktlen' (2): Device reported value on what maximum Rx frame length it > can receive. Application shouldn't set Rx frame length more than this value. > > 'DEV_RX_OFFLOAD_JUMBO_FRAME' (3): Device Jumbo Frame capability. > When not enabled the Rx frame length is 'MTU' + overhead > When enabled Rx frame length is 'max_rx_pkt_len' > > 'DEV_RX_OFFLOAD_SCATTER' (5): Capability to scatter packet to multiple > descriptor by device and driver converting this to chained mbuf. > > 'dev->data->scattered_rx' (6): The current status of driver scattered Rx, in > device data mostly for PMD internal usage. > > 'rte_eth_dev_info->min_mtu' & 'rte_eth_dev_info->max_mtu' (7): minimum and > maximum MTU values device supported. > 'max_mtu' == 'max_rx_pkt_len' - L2_OVERHEAD. > > > I can see two different limits, > a) The Rx frame length limit that device can receive from wire. Any packet > larger than this size will be dropped by device in an early stage. > b) The Rx buffer length limit that received packets are written to. 
Device > shouldn't DMA larger than reserved buffer size. > > If device supports scattered Rx to multiple descriptors, it can be possible to > configure (a) > (b). > Otherwise configuration have to be (b) >= (a). > > For example if the mbuf size is 2Kb and the device can receive up to 9000 bytes. > Options are: > - If device supports it, large packet will be scattered on multiple mbufs > - or need to configure device Rx frame length to 2K (mbuf size) > - or need to allocate mbuf big enough to get largest possible packet (9000) > > > > Issues I see: > ------------- > > i) Although the code clearly says 'max_rx_pkt_len' is only valid when jumbo > frames enabled, some drivers are taking it account always. Ack, that's not good. > ii) Some drivers enable 'scattered_rx' & 'jumbo frame' implicitly, without > having 'DEV_RX_OFFLOAD_JUMBO_FRAME' or 'DEV_RX_OFFLOAD_SCATTER' requested by > application. Ack that it is a problem especially for scatter. > iii) Having both 'mtu' & 'max_rx_pkt_len' are confusing, although they are not > exactly same thing they are related. Difference is MTU applies for Tx too, and > L2 network layer overhead is not included. > 'MTU' can be more interested by upper layers, 'max_rx_pkt_len' is more driver > level information. And driver should know how to convert one to another. Agree > iv) 'max_rx_pkt_len' provided as part of 'rte_eth_dev_configure()' and there is > no API to update it later. > 'mtu' is not part of 'rte_eth_dev_configure()', it can only be updated later > with specific API. > But driver have to keep these two values consistent. Agree > v) 'max_rx_pktlen' & 'max_mtu' reports from driver are redundant information. > Again they are not same thing, but correlated. 
Agree > > Suggested changes: > ----------------- > > Overall unify 'max_rx_pkt_len' & 'mtu' as much as possible, at first step: > > i) Make 'max_rx_pkt_len' independent from 'DEV_RX_OFFLOAD_JUMBO_FRAME', so > 'max_rx_pkt_len' value will be always valid, jumbo frame enabled or not. It will make handling of the max_rx_pkt_len mandatory in all network PMDs. > > ii) in '.dev_configure' convert 'max_rx_pkt_len' value to 'mtu' value, this will > be only point 'max_rx_pkt_len' is used, after that point PMD will always use > 'mtu' value. I'm not sure that it is a right direction. Above you say that 'max_rx_pkt_len' is more driver level and I agree with it. I guess most drivers operate it finally (i.e. configure underlying HW in terms of max_rx_pkt_len, not MTU). So, converted from max_rx_pkt_len to MTU on ethdev level and covered back from MTU to max_rx_pkt_len in drivers. > Even don't reflect 'rte_eth_dev_set_mtu()' changes to 'max_rx_pkt_len' anymore. Not sure that I get it. max_rx_pkt_len is used on dev_configure only. Is it reported on get somewhere? > > iii) Don't make 'max_rx_pkt_len' a mandatory config option, let it be '0' by > application, in that case 'rte_eth_dev_configure()' will set > "'max_rx_pkt_len' = RTE_ETHER_MAX_LEN" if 'DEV_RX_OFFLOAD_JUMBO_FRAME' disabled > "'max_rx_pkt_len' = 9000 if 'DEV_RX_OFFLOAD_JUMBO_FRAME' enabled Why 9000? IMHO, if max_rx_pkt_len is 0, just use value derived from MTU. > > iv) Allow implicit update of 'DEV_RX_OFFLOAD_JUMBO_FRAME' on MTU set, since > setting a large MTU implies the jumbo frame request. And there is no harm to > application. Yes and I'd deprecate DEV_RX_OFFLOAD_JUMBO_FRAME. > > v) Do NOT allow implicit update of 'DEV_RX_OFFLOAD_SCATTER' on MTU set (when Rx > frame length > Rx buffer length), since application may not be capable of > parsing chained mbufs. Instead fails the MTU set in that case. > [This can break some applications, relying on this implicit set.] Yes, definitely. > > > Any comments? 
> > > > Additional details: > ------------------- > > Behavior of some drivers: > > What igb & ixgbe does > - Set Rx frame limit (a) using 'max_rx_pkt_len' (1) > - Set Rx buffer limit (b) using mbuf data size > - Enable Scattered Rx (5 & 6) if the Rx frame limit (a) bigger than Rx buffer > limit (b) (even user not requested for it) > > What i40e does same as above, only differences > - Return error if jumbo frame enabled and 'max_rx_pkt_len' < RTE_ETHER_MAX_LEN > > sfc: > - Set Rx frame limit (a) > - using 'max_rx_pkt_len' (1) when jumbo frame enabled > - using 'mtu' when jumbo frame not enabled. > - Set Rx buffer limit (b) using mbuf data size > - If Rx frame limit (a) bigger than Rx buffer limit (b), and user not requested > 'DEV_RX_OFFLOAD_SCATTER', return error. Ack > > octeontx2: > - Set Rx frame limit (a) using 'max_rx_pkt_len' (1). Implicitly enable jumbo > frame based on 'max_rx_pkt_len'. > - I can't able find how Rx buffer limit (b) set > - Enable Scattered Rx (5) if the Rx frame limit (a) bigger than Rx buffer limit > (b) (even user not requested for it). 'dev->data->scattered_rx' not set at all. > > >> Adding a PMD implementation and testpmd updates helps to clarify the >> intention/usage, so I suggest sending them as a single patch with this one. >> >> Updating all PMDs is a bigger ask and sometimes too hard because of lack of >> knowledge on the internals of other PMDs, although this is causing feature gaps >> time to time, we are not mandating this to developers, so please update as many >> PMD as you can, that you are confident, rest should be done by their maintainers. 
>> >>>> >>>>> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> >>>> >>>> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> >>>> >>>>> --- >>>>> lib/librte_ethdev/rte_ethdev.h | 1 + >>>>> 1 file changed, 1 insertion(+) >>>>> >>>>> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >>>>> index 0f6d053..82b7e98 100644 >>>>> --- a/lib/librte_ethdev/rte_ethdev.h >>>>> +++ b/lib/librte_ethdev/rte_ethdev.h >>>>> @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { >>>>> struct rte_eth_rxconf conf; /**< queue config parameters. */ >>>>> uint8_t scattered_rx; /**< scattered packets RX supported. */ >>>>> uint16_t nb_desc; /**< configured number of RXDs. */ >>>>> + uint16_t rx_bufsize; /**< size of RX buffer. */ >>>>> } __rte_cache_min_aligned; >>>>> >>>>> /** ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-23 6:48 [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info Chengchang Tang 2020-06-23 9:30 ` Andrew Rybchenko @ 2020-06-23 14:48 ` Stephen Hemminger 2020-06-23 15:22 ` Andrew Rybchenko 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang 2 siblings, 1 reply; 16+ messages in thread From: Stephen Hemminger @ 2020-06-23 14:48 UTC (permalink / raw) To: Chengchang Tang; +Cc: dev, linuxarm, thomas, ferruh.yigit, arybchenko On Tue, 23 Jun 2020 14:48:54 +0800 Chengchang Tang <tangchengchang@huawei.com> wrote: > In common practice, PMD configure the rx_buf_size according to the data > room size of the object in mempool. But in fact the final value is related > to the specifications of hw, and its values will affect the number of > fragments in recieving pkts. > > At present, we seem to have no way to espose relevant information to upper > layer users. > > Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer > size used in recieving pkts for hw. > > Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> > --- > lib/librte_ethdev/rte_ethdev.h | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h > index 0f6d053..82b7e98 100644 > --- a/lib/librte_ethdev/rte_ethdev.h > +++ b/lib/librte_ethdev/rte_ethdev.h > @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { > struct rte_eth_rxconf conf; /**< queue config parameters. */ > uint8_t scattered_rx; /**< scattered packets RX supported. */ > uint16_t nb_desc; /**< configured number of RXDs. */ > + uint16_t rx_bufsize; /**< size of RX buffer. */ > } __rte_cache_min_aligned; > > /** > -- > 2.7.4 > Will have to wait until 20.11 as it is an ABI change. ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info 2020-06-23 14:48 ` Stephen Hemminger @ 2020-06-23 15:22 ` Andrew Rybchenko 0 siblings, 0 replies; 16+ messages in thread From: Andrew Rybchenko @ 2020-06-23 15:22 UTC (permalink / raw) To: Stephen Hemminger, Chengchang Tang; +Cc: dev, linuxarm, thomas, ferruh.yigit On 6/23/20 5:48 PM, Stephen Hemminger wrote: > On Tue, 23 Jun 2020 14:48:54 +0800 > Chengchang Tang <tangchengchang@huawei.com> wrote: > >> In common practice, PMD configure the rx_buf_size according to the data >> room size of the object in mempool. But in fact the final value is related >> to the specifications of hw, and its values will affect the number of >> fragments in recieving pkts. >> >> At present, we seem to have no way to espose relevant information to upper >> layer users. >> >> Add a field named rx_bufsize in rte_eth_rxq_info to indicate the buffer >> size used in recieving pkts for hw. >> >> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> >> --- >> lib/librte_ethdev/rte_ethdev.h | 1 + >> 1 file changed, 1 insertion(+) >> >> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >> index 0f6d053..82b7e98 100644 >> --- a/lib/librte_ethdev/rte_ethdev.h >> +++ b/lib/librte_ethdev/rte_ethdev.h >> @@ -1306,6 +1306,7 @@ struct rte_eth_rxq_info { >> struct rte_eth_rxconf conf; /**< queue config parameters. */ >> uint8_t scattered_rx; /**< scattered packets RX supported. */ >> uint16_t nb_desc; /**< configured number of RXDs. */ >> + uint16_t rx_bufsize; /**< size of RX buffer. */ >> } __rte_cache_min_aligned; >> >> /** >> -- >> 2.7.4 >> > > Will have to wait until 20.11 as it is an ABI change. > I thought about it. If I'm not mistaken it does not change size of the structure. ^ permalink raw reply [flat|nested] 16+ messages in thread
* [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info 2020-06-23 6:48 [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info Chengchang Tang 2020-06-23 9:30 ` Andrew Rybchenko 2020-06-23 14:48 ` Stephen Hemminger @ 2020-07-22 6:38 ` Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 1/3] ethdev: add a field " Chengchang Tang ` (3 more replies) 2 siblings, 4 replies; 16+ messages in thread From: Chengchang Tang @ 2020-07-22 6:38 UTC (permalink / raw) To: dev; +Cc: linuxarm, thomas, arybchenko, ferruh.yigit In common practice, the PMD configures the rx buffer size, which indicates the buffer length the hardware can use when receiving packets, according to the data room size of the objects in the mempool. But in fact the final value is related to the hardware specifications, and its value will affect the number of fragments in received packets when scatter is enabled. Besides, some PMDs may forcibly enable scatter when the MTU is bigger than the hw rx buffer size. At present, we seem to have no way to expose the relevant information to upper layer users. So, add a field named rx_buf_size in rte_eth_rxq_info to indicate the buffer size the hardware uses when receiving packets. This patchset also adds the hns3 PMD implementation and updates testpmd to clarify the intention. v2: Add hns3 implementation and update testpmd. Chengchang Tang (2): ethdev: add a field for rte_eth_rxq_info app/testpmd: Add RX buffer size display in queue info query Huisong Li (1): net/hns3: add support for query of rx/tx queue info app/test-pmd/config.c | 1 + drivers/net/hns3/hns3_ethdev.c | 2 ++ drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ drivers/net/hns3/hns3_rxtx.h | 4 ++++ lib/librte_ethdev/rte_ethdev.h | 2 ++ 6 files changed, 59 insertions(+) -- 2.7.4 ^ permalink raw reply [flat|nested] 16+ messages in thread
* [dpdk-dev] [RFC v2 1/3] ethdev: add a field for rte_eth_rxq_info 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang @ 2020-07-22 6:38 ` Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 2/3] net/hns3: add support for query of rx/tx queue info Chengchang Tang ` (2 subsequent siblings) 3 siblings, 0 replies; 16+ messages in thread From: Chengchang Tang @ 2020-07-22 6:38 UTC (permalink / raw) To: dev; +Cc: linuxarm, thomas, arybchenko, ferruh.yigit Add a field named rx_buf_size in rte_eth_rxq_info to indicate the buffer size the hardware uses when receiving packets. Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> --- v1->v2: modify the name and add more comments. --- lib/librte_ethdev/rte_ethdev.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h index 57e4a6c..845c6ad 100644 --- a/lib/librte_ethdev/rte_ethdev.h +++ b/lib/librte_ethdev/rte_ethdev.h @@ -1421,6 +1421,8 @@ struct rte_eth_rxq_info { struct rte_eth_rxconf conf; /**< queue config parameters. */ uint8_t scattered_rx; /**< scattered packets RX supported. */ uint16_t nb_desc; /**< configured number of RXDs. */ + /** Buffer size used by hardware when receiving packets. */ + uint16_t rx_buf_size; } __rte_cache_min_aligned; /** -- 2.7.4 ^ permalink raw reply [flat|nested] 16+ messages in thread
* [dpdk-dev] [RFC v2 2/3] net/hns3: add support for query of rx/tx queue info 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 1/3] ethdev: add a field " Chengchang Tang @ 2020-07-22 6:38 ` Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 3/3] app/testpmd: Add RX buffer size dispaly in queue info querry Chengchang Tang 2020-07-28 6:29 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info Chengchang Tang 3 siblings, 0 replies; 16+ messages in thread From: Chengchang Tang @ 2020-07-22 6:38 UTC (permalink / raw) To: dev; +Cc: linuxarm, thomas, arybchenko, ferruh.yigit From: Huisong Li <lihuisong@huawei.com> This patch adds support for query of rx/tx queue info for hns3. And the new field rx_buf_size in rte_eth_rxq_info is also filled. Signed-off-by: Huisong Li <lihuisong@huawei.com> Signed-off-by: Chengchang Tang <xavier.huwei@huawei.com> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> --- drivers/net/hns3/hns3_ethdev.c | 2 ++ drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ drivers/net/hns3/hns3_rxtx.h | 4 ++++ 4 files changed, 56 insertions(+) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 81e7730..c6a16f3 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5413,6 +5413,8 @@ static const struct eth_dev_ops hns3_eth_dev_ops = { .tx_queue_release = hns3_dev_tx_queue_release, .rx_queue_intr_enable = hns3_dev_rx_queue_intr_enable, .rx_queue_intr_disable = hns3_dev_rx_queue_intr_disable, + .rxq_info_get = hns3_rxq_info_get, + .txq_info_get = hns3_txq_info_get, .dev_configure = hns3_dev_configure, .flow_ctrl_get = hns3_flow_ctrl_get, .flow_ctrl_set = hns3_flow_ctrl_set, diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 1d2941f..73f89e3 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ 
b/drivers/net/hns3/hns3_ethdev_vf.c @@ -2473,6 +2473,8 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = { .tx_queue_release = hns3_dev_tx_queue_release, .rx_queue_intr_enable = hns3_dev_rx_queue_intr_enable, .rx_queue_intr_disable = hns3_dev_rx_queue_intr_disable, + .rxq_info_get = hns3_rxq_info_get, + .txq_info_get = hns3_txq_info_get, .dev_configure = hns3vf_dev_configure, .mac_addr_add = hns3vf_add_mac_addr, .mac_addr_remove = hns3vf_remove_mac_addr, diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index c0f7981..5a7fb7d 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -2814,3 +2814,51 @@ void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev) eth_dev->tx_pkt_prepare = hns3_dummy_rxtx_burst; } } + +void +hns3_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo) +{ + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct hns3_rx_queue *rxq = dev->data->rx_queues[queue_id]; + + if (rxq == NULL) { + hns3_err(hw, "queue pointer of rx queue_id (%u) is NULL.", + queue_id); + return; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->nb_rx_desc; + qinfo->scattered_rx = dev->data->scattered_rx; + /* + * Report the hw rx buffer length, which may be smaller than the mbuf + * data room size, to the user. This value affects the number of + * fragments when receiving a jumbo frame in scatter mode. + */ + qinfo->rx_buf_size = rxq->rx_buf_len; + + /* If no descriptors available, packets are always dropped.
*/ + qinfo->conf.rx_drop_en = 1; + qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; +} + +void +hns3_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct hns3_tx_queue *txq = dev->data->tx_queues[queue_id]; + + if (txq == NULL) { + hns3_err(hw, "queue pointer of tx queue_id (%u) is NULL.", + queue_id); + return; + } + + qinfo->nb_desc = txq->nb_tx_desc; + qinfo->conf.offloads = dev->data->dev_conf.txmode.offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; +} diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index 0d20a27..7a97a37 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -403,4 +403,8 @@ int hns3_config_gro(struct hns3_hw *hw, bool en); int hns3_restore_gro_conf(struct hns3_hw *hw); void hns3_update_all_queues_pvid_state(struct hns3_hw *hw); +void hns3_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo); +void hns3_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo); #endif /* _HNS3_RXTX_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 16+ messages in thread
* [dpdk-dev] [RFC v2 3/3] app/testpmd: Add RX buffer size display in queue info query 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 1/3] ethdev: add a field " Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 2/3] net/hns3: add support for query of rx/tx queue info Chengchang Tang @ 2020-07-22 6:38 ` Chengchang Tang 2020-07-28 6:29 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info Chengchang Tang 3 siblings, 0 replies; 16+ messages in thread From: Chengchang Tang @ 2020-07-22 6:38 UTC (permalink / raw) To: dev; +Cc: linuxarm, thomas, arybchenko, ferruh.yigit Add the Rx buffer size to the queue info query command so that the user can see the buffer length used by the hardware queue for receiving packets. Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> --- app/test-pmd/config.c | 1 + 1 file changed, 1 insertion(+) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 30bee33..da2f837 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -452,6 +452,7 @@ rx_queue_infos_display(portid_t port_id, uint16_t queue_id) (qinfo.conf.rx_deferred_start != 0) ? "on" : "off"); printf("\nRX scattered packets: %s", (qinfo.scattered_rx != 0) ? "on" : "off"); + printf("\nRX Buffer size: %hu", qinfo.rx_buf_size); printf("\nNumber of RXDs: %hu", qinfo.nb_desc); if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0) -- 2.7.4 ^ permalink raw reply [flat|nested] 16+ messages in thread
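Once the field is exposed, testpmd (or any application) can reason about scatter behaviour up front instead of discovering it per packet. A minimal sketch using a simplified stand-in for `rte_eth_rxq_info` (in a real application the struct comes from `rte_ethdev.h` and is filled by `rte_eth_rx_queue_info_get()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the rte_eth_rxq_info fields used here. */
struct rxq_info {
	uint8_t scattered_rx;
	uint16_t nb_desc;
	uint16_t rx_buf_size;
};

/* A received frame larger than the per-descriptor HW buffer must be
 * scattered across several mbufs; if scatter is off, the driver would
 * have to drop or truncate it. */
static bool frame_fits_one_mbuf(const struct rxq_info *qi, uint32_t frame_len)
{
	return frame_len <= qi->rx_buf_size;
}
```

With a 2048-byte hardware buffer, a standard 1518-byte frame fits in one mbuf while a 9000-byte jumbo frame does not, which is exactly the distinction the new testpmd output lets users check.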
* Re: [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang ` (2 preceding siblings ...) 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 3/3] app/testpmd: Add RX buffer size dispaly in queue info querry Chengchang Tang @ 2020-07-28 6:29 ` Chengchang Tang 2020-07-28 9:30 ` Ferruh Yigit 3 siblings, 1 reply; 16+ messages in thread From: Chengchang Tang @ 2020-07-28 6:29 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, linuxarm, arybchenko Friendly ping On 2020/7/22 14:38, Chengchang Tang wrote: > In common practice, PMD configure the rx buffer size which indicate the > buffer length could be used for hw in receiving packts according to the > data room size of the object in mempool. But in fact the final value is > related to the specifications of hw, and its values will affect the number > of fragments in recieving pkts when scatter is enabled. By the way, some > PMDs may force to enable scatter when the MTU is bigger than the hw rx > buffer size. > > At present, we seem to have no way to expose relevant information to upper > layer users. > > So, add a field named rx_buf_size in rte_eth_rxq_info to indicate the > buffer size used in recieving pkts for hw. And this patchset also add hns3 > PMD implementation and update the testpmd to clarify intention. > > v2: > Add hns3 implementation and update testpmd. 
> > Chengchang Tang (2): > ethdev: add a field for rte_eth_rxq_info > app/testpmd: Add RX buffer size dispaly in queue info querry > > Huisong Li (1): > net/hns3: add support for query of rx/tx queue info > > app/test-pmd/config.c | 1 + > drivers/net/hns3/hns3_ethdev.c | 2 ++ > drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ > drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ > drivers/net/hns3/hns3_rxtx.h | 4 ++++ > lib/librte_ethdev/rte_ethdev.h | 2 ++ > 6 files changed, 59 insertions(+) > > -- > 2.7.4 > > _______________________________________________ > Linuxarm mailing list > Linuxarm@huawei.com > http://hulk.huawei.com/mailman/listinfo/linuxarm > > . > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info 2020-07-28 6:29 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info Chengchang Tang @ 2020-07-28 9:30 ` Ferruh Yigit 2020-07-28 11:39 ` Chengchang Tang 0 siblings, 1 reply; 16+ messages in thread From: Ferruh Yigit @ 2020-07-28 9:30 UTC (permalink / raw) To: Chengchang Tang, dev; +Cc: thomas, linuxarm, arybchenko On 7/28/2020 7:29 AM, Chengchang Tang wrote: > Friendly ping Hi Tang, Sorry for not making it clear: since this is a library change, it is already too late for this release (20.08) and will be considered for the next release. Once the current release is out, we can continue the discussion. > > On 2020/7/22 14:38, Chengchang Tang wrote: >> In common practice, PMD configure the rx buffer size which indicate the >> buffer length could be used for hw in receiving packts according to the >> data room size of the object in mempool. But in fact the final value is >> related to the specifications of hw, and its values will affect the number >> of fragments in recieving pkts when scatter is enabled. By the way, some >> PMDs may force to enable scatter when the MTU is bigger than the hw rx >> buffer size. >> >> At present, we seem to have no way to expose relevant information to upper >> layer users. >> >> So, add a field named rx_buf_size in rte_eth_rxq_info to indicate the >> buffer size used in recieving pkts for hw. And this patchset also add hns3 >> PMD implementation and update the testpmd to clarify intention. >> >> v2: >> Add hns3 implementation and update testpmd. 
>> >> Chengchang Tang (2): >> ethdev: add a field for rte_eth_rxq_info >> app/testpmd: Add RX buffer size dispaly in queue info querry >> >> Huisong Li (1): >> net/hns3: add support for query of rx/tx queue info >> >> app/test-pmd/config.c | 1 + >> drivers/net/hns3/hns3_ethdev.c | 2 ++ >> drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ >> drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ >> drivers/net/hns3/hns3_rxtx.h | 4 ++++ >> lib/librte_ethdev/rte_ethdev.h | 2 ++ >> 6 files changed, 59 insertions(+) >> >> -- >> 2.7.4 >> >> _______________________________________________ >> Linuxarm mailing list >> Linuxarm@huawei.com >> http://hulk.huawei.com/mailman/listinfo/linuxarm >> >> . >> > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info 2020-07-28 9:30 ` Ferruh Yigit @ 2020-07-28 11:39 ` Chengchang Tang 2020-07-28 15:27 ` Thomas Monjalon 0 siblings, 1 reply; 16+ messages in thread From: Chengchang Tang @ 2020-07-28 11:39 UTC (permalink / raw) To: Ferruh Yigit, dev; +Cc: thomas, linuxarm, arybchenko Hi Ferruh, Thank you for your reply. I'm sorry to bother you while you are busy with the release. I know this change may only be accepted in the next release. I'd like to know what the community thinks about it. If there is no objection, I will send the patch after current release out. On 2020/7/28 17:30, Ferruh Yigit wrote: > On 7/28/2020 7:29 AM, Chengchang Tang wrote: >> Friendly ping > > Hi Tang, > > Sorry for not making it clear, since it is a library change, the change is > already late for this release (20.08), and it will be considered for next > release. As current release out, we can continue the discussions. > >> >> On 2020/7/22 14:38, Chengchang Tang wrote: >>> In common practice, PMD configure the rx buffer size which indicate the >>> buffer length could be used for hw in receiving packts according to the >>> data room size of the object in mempool. But in fact the final value is >>> related to the specifications of hw, and its values will affect the number >>> of fragments in recieving pkts when scatter is enabled. By the way, some >>> PMDs may force to enable scatter when the MTU is bigger than the hw rx >>> buffer size. >>> >>> At present, we seem to have no way to expose relevant information to upper >>> layer users. >>> >>> So, add a field named rx_buf_size in rte_eth_rxq_info to indicate the >>> buffer size used in recieving pkts for hw. And this patchset also add hns3 >>> PMD implementation and update the testpmd to clarify intention. >>> >>> v2: >>> Add hns3 implementation and update testpmd. 
>>> >>> Chengchang Tang (2): >>> ethdev: add a field for rte_eth_rxq_info >>> app/testpmd: Add RX buffer size dispaly in queue info querry >>> >>> Huisong Li (1): >>> net/hns3: add support for query of rx/tx queue info >>> >>> app/test-pmd/config.c | 1 + >>> drivers/net/hns3/hns3_ethdev.c | 2 ++ >>> drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ >>> drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ >>> drivers/net/hns3/hns3_rxtx.h | 4 ++++ >>> lib/librte_ethdev/rte_ethdev.h | 2 ++ >>> 6 files changed, 59 insertions(+) >>> >>> -- >>> 2.7.4 >>> >>> _______________________________________________ >>> Linuxarm mailing list >>> Linuxarm@huawei.com >>> http://hulk.huawei.com/mailman/listinfo/linuxarm >>> >>> . >>> >> > > > . > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info 2020-07-28 11:39 ` Chengchang Tang @ 2020-07-28 15:27 ` Thomas Monjalon 0 siblings, 0 replies; 16+ messages in thread From: Thomas Monjalon @ 2020-07-28 15:27 UTC (permalink / raw) To: Chengchang Tang; +Cc: Ferruh Yigit, dev, linuxarm, arybchenko You're right pinging for reviews. That's the right time to look at next features for those who have completed their 20.08 tasks. Feel free to review other pending patches in the meantime. Some of the next features are classified as "Deferred" in patchwork: https://patches.dpdk.org/project/dpdk/list/?state=10 28/07/2020 13:39, Chengchang Tang: > Hi Ferruh, > > Thank you for your reply. I'm sorry to bother you while you are busy with > the release. I know this change may only be accepted in the next release. > I'd like to know what the community thinks about it. If there is no > objection, I will send the patch after current release out. > > On 2020/7/28 17:30, Ferruh Yigit wrote: > > On 7/28/2020 7:29 AM, Chengchang Tang wrote: > >> Friendly ping > > > > Hi Tang, > > > > Sorry for not making it clear, since it is a library change, the change is > > already late for this release (20.08), and it will be considered for next > > release. As current release out, we can continue the discussions. > > > >> > >> On 2020/7/22 14:38, Chengchang Tang wrote: > >>> In common practice, PMD configure the rx buffer size which indicate the > >>> buffer length could be used for hw in receiving packts according to the > >>> data room size of the object in mempool. But in fact the final value is > >>> related to the specifications of hw, and its values will affect the number > >>> of fragments in recieving pkts when scatter is enabled. By the way, some > >>> PMDs may force to enable scatter when the MTU is bigger than the hw rx > >>> buffer size. > >>> > >>> At present, we seem to have no way to expose relevant information to upper > >>> layer users. 
> >>> > >>> So, add a field named rx_buf_size in rte_eth_rxq_info to indicate the > >>> buffer size used in recieving pkts for hw. And this patchset also add hns3 > >>> PMD implementation and update the testpmd to clarify intention. > >>> > >>> v2: > >>> Add hns3 implementation and update testpmd. > >>> > >>> Chengchang Tang (2): > >>> ethdev: add a field for rte_eth_rxq_info > >>> app/testpmd: Add RX buffer size dispaly in queue info querry > >>> > >>> Huisong Li (1): > >>> net/hns3: add support for query of rx/tx queue info > >>> > >>> app/test-pmd/config.c | 1 + > >>> drivers/net/hns3/hns3_ethdev.c | 2 ++ > >>> drivers/net/hns3/hns3_ethdev_vf.c | 2 ++ > >>> drivers/net/hns3/hns3_rxtx.c | 48 +++++++++++++++++++++++++++++++++++++++ > >>> drivers/net/hns3/hns3_rxtx.h | 4 ++++ > >>> lib/librte_ethdev/rte_ethdev.h | 2 ++ > >>> 6 files changed, 59 insertions(+) > >>> > >>> -- > >>> 2.7.4 > >>> > >>> _______________________________________________ > >>> Linuxarm mailing list > >>> Linuxarm@huawei.com > >>> http://hulk.huawei.com/mailman/listinfo/linuxarm > >>> > >>> . > >>> > >> > > > > > > . > > > > ^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2020-07-28 15:28 UTC | newest] Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2020-06-23 6:48 [dpdk-dev] [RFC] ethdev: add a field for rte_eth_rxq_info Chengchang Tang 2020-06-23 9:30 ` Andrew Rybchenko 2020-06-24 3:48 ` Chengchang Tang 2020-06-24 8:52 ` Ferruh Yigit 2020-06-24 18:32 ` Ferruh Yigit 2020-06-25 9:06 ` Andrew Rybchenko 2020-06-23 14:48 ` Stephen Hemminger 2020-06-23 15:22 ` Andrew Rybchenko 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size " Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 1/3] ethdev: add a field " Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 2/3] net/hns3: add support for query of rx/tx queue info Chengchang Tang 2020-07-22 6:38 ` [dpdk-dev] [RFC v2 3/3] app/testpmd: Add RX buffer size dispaly in queue info querry Chengchang Tang 2020-07-28 6:29 ` [dpdk-dev] [RFC v2 0/3] add rx buffer size for rte_eth_rxq_info Chengchang Tang 2020-07-28 9:30 ` Ferruh Yigit 2020-07-28 11:39 ` Chengchang Tang 2020-07-28 15:27 ` Thomas Monjalon