From: Andrew Rybchenko
To: Shahaf Shuler
Date: Wed, 13 Sep 2017 11:49:34 +0300
Message-ID: <223be6bf-510e-e34d-2359-c0f1becd5bad@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API

On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports a capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable a per-port offload, the offload should be set on both the device
> configuration and the queue configuration. To enable a per-queue offload,
> the offload can be set only on the queue configuration.
>
> Applications should set the ignore_offload_bitfield bit in the rxmode
> structure in order to move to the new API.
>
> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and applications to the new API.
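Just to check that I read the intended usage correctly, here is a minimal
sketch of the application side. It assumes the offloads field added to
struct rte_eth_rxconf by the rte_ethdev.h part of this patch (not quoted
here); the helper name, the variable names and the particular offload
flags are mine:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: configure one Rx queue using the new offloads API. */
static int
app_configure_rx(uint8_t port_id, uint16_t nb_rxd, unsigned int socket_id,
		 struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	memset(&port_conf, 0, sizeof(port_conf));
	/* Opt in to the new API and request per-port offloads. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
				    DEV_RX_OFFLOAD_VLAN_STRIP;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	/* Per-port offloads are repeated on the queue; a per-queue offload
	 * such as SCATTER may be added on top if the PMD reports it. */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads | DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				      &rxq_conf, mp);
}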
>
> Signed-off-by: Shahaf Shuler
> ---
>  doc/guides/nics/features.rst  |  33 ++++----
>  lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>  lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>  3 files changed, 210 insertions(+), 30 deletions(-)

[snip]

> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0597641ee..b3c10701e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
>  	}
>  }
>
> +/**
> + * A conversion function from rxmode bitfield API.
> + */
> +static void
> +rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
> +				    uint64_t *rx_offloads)
> +{
> +	uint64_t offloads = 0;
> +
> +	if (rxmode->header_split == 1)
> +		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> +	if (rxmode->hw_ip_checksum == 1)
> +		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> +	if (rxmode->hw_vlan_filter == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> +	if (rxmode->hw_vlan_strip == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	if (rxmode->hw_vlan_extend == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> +	if (rxmode->jumbo_frame == 1)
> +		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (rxmode->hw_strip_crc == 1)
> +		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> +	if (rxmode->enable_scatter == 1)
> +		offloads |= DEV_RX_OFFLOAD_SCATTER;
> +	if (rxmode->enable_lro == 1)
> +		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
> +
> +	*rx_offloads = offloads;
> +}
> +
> +/**
> + * A conversion function from rxmode offloads API.
> + */
> +static void
> +rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
> +			    struct rte_eth_rxmode *rxmode)
> +{
> +
> +	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
> +		rxmode->header_split = 1;
> +	else
> +		rxmode->header_split = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
> +		rxmode->hw_ip_checksum = 1;
> +	else
> +		rxmode->hw_ip_checksum = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
> +		rxmode->hw_vlan_filter = 1;
> +	else
> +		rxmode->hw_vlan_filter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> +		rxmode->hw_vlan_strip = 1;
> +	else
> +		rxmode->hw_vlan_strip = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
> +		rxmode->hw_vlan_extend = 1;
> +	else
> +		rxmode->hw_vlan_extend = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> +		rxmode->jumbo_frame = 1;
> +	else
> +		rxmode->jumbo_frame = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
> +		rxmode->hw_strip_crc = 1;
> +	else
> +		rxmode->hw_strip_crc = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
> +		rxmode->enable_scatter = 1;
> +	else
> +		rxmode->enable_scatter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
> +		rxmode->enable_lro = 1;
> +	else
> +		rxmode->enable_lro = 0;
> +}
> +
>  int
>  rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  		      const struct rte_eth_conf *dev_conf)
>  {
>  	struct rte_eth_dev *dev;
>  	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_conf local_conf = *dev_conf;
>  	int diag;
>
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  		return -EBUSY;
>  	}
>
> +	/*
> +	 * Convert between the offloads API to enable PMDs to support
> +	 * only one of them.
> +	 */
> +	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
> +		rte_eth_convert_rx_offload_bitfield(
> +			&dev_conf->rxmode, &local_conf.rxmode.offloads);
> +	} else {
> +		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
> +					    &local_conf.rxmode);

The ignore flag is lost here, and it will result in treating txq_flags as
the primary information about offloads. This is important in the case of
the failsafe PMD.

> +	}
> +
>  	/* Copy the dev_conf parameter into the dev structure */
> -	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
> +	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
>
>  	/*
>  	 * Check that the numbers of RX and TX queues are not greater

[snip]
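For completeness, a rough sketch of what the conversion in
rte_eth_dev_configure() is meant to buy a PMD that has fully switched to
the new API: its configure callback can look at rxmode.offloads only,
regardless of which API the application actually used. The callback name
and the set of checks below are made up for illustration:

#include <errno.h>
#include <rte_ethdev.h>

/* Illustrative PMD configure hook: it inspects rxmode.offloads only,
 * since rte_eth_dev_configure() has already filled it from the legacy
 * bitfield when the application still uses the old API. */
static int
example_pmd_dev_configure(struct rte_eth_dev *dev)
{
	const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

	/* Pretend this hypothetical PMD does not support LRO. */
	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
		return -ENOTSUP;

	/* ... program checksum / VLAN stripping etc. from rxmode->offloads ... */
	return 0;
}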