From: Thomas Monjalon
To: "Ananyev, Konstantin"
Cc: Shahaf Shuler, dev@dpdk.org
Date: Tue, 05 Sep 2017 09:48:08 +0200
Message-ID: <2334939.YzL2ADl2XU@xps>
In-Reply-To: <2601191342CEEE43887BDE71AB9772584F2460F1@irsmsx105.ger.corp.intel.com>
References: <2327783.H4uO08xLcu@xps>
 <2601191342CEEE43887BDE71AB9772584F2460F1@irsmsx105.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API

04/09/2017 16:18, Ananyev, Konstantin:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 04/09/2017 15:25, Ananyev, Konstantin:
> > > Hi Shahaf,
> > >
> > > > +/**
> > > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > > + * offloads API.
> > > > + */
> > > > +static void
> > > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > > +				struct rte_eth_rxq_conf *rxq_conf)
> > > > +{
> > > > +	if (rxmode->header_split == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > > +	if (rxmode->hw_ip_checksum == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > > +	if (rxmode->hw_vlan_filter == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> > >
> > > Thinking on it a bit more:
> > > VLAN_FILTER is definitely one per device, as it would affect VFs also.
> > > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> > > For Intel devices VLAN_STRIP is also per device and
> > > will also be applied to all corresponding VFs.
> > > In fact, right now it is possible to query/change these 3 VLAN offload flags
> > > on the fly (after dev_start) on a port basis with the
> > > rte_eth_dev_(get|set)_vlan_offload API.
> > > So, I think at least these 3 flags need to remain on a port basis.
> >
> > I don't understand how it helps to be able to configure the same thing
> > in 2 places.
>
> Because some offloads are per device, others are per queue.
> Configuring on a device basis would allow most users to configure all
> queues in the same manner by default.
> Those users who need a more fine-grained (per queue) setup
> will be able to overwrite it with rx_queue_setup().

Those users can set the same config for all queues.
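It is just a small loop, roughly like this sketch (assuming rx_queue_setup()
takes the rte_eth_rxq_conf introduced in this series; the helper itself is
only illustrative):

#include <rte_ethdev.h>

/* Sketch only: a "per-device" behaviour is nothing more than the same
 * per-queue setting applied to every RX queue of the port.
 */
static int
setup_all_rx_queues(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_desc,
		    struct rte_mempool *pool, uint64_t rx_offloads)
{
	struct rte_eth_rxq_conf rxq_conf = { .offloads = rx_offloads };
	uint16_t q;
	int ret;

	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
				rte_eth_dev_socket_id(port_id),
				&rxq_conf, pool);
		if (ret < 0)
			return ret;
	}
	return 0;
}
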
> > I think you are just describing a limitation of these HW: some offloads
> > must be the same for all queues.
>
> As I said above - on some devices some offloads might also affect queues
> that belong to VFs (to other ports in DPDK words).
> You might never invoke rx_queue_setup() for these queues in your app.
> But you still want to enable this offload on that device.

You are advocating for a per-port configuration API because some settings
must be the same on all the ports of your hardware?
So there is a bigger problem: you don't need per-port settings,
but per-hw-device settings.
Or would you accept more fine-grained per-port settings?
If yes, you can accept even finer-grained per-queue settings.

> > It does not prevent from configuring them in the per-queue setup.
> >
> > > In fact, why can't we have both per-port and per-queue RX offloads:
> > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> > > - if a particular RX_OFFLOAD flag for that device can't be set up on a queue basis,
> > >   rx_queue_setup() will return an error.
> >
> > The queue setup can work while the value is the same for every queue.
>
> Ok, and how would people know that?
> That for device N offload X has to be the same for all queues,
> and for device M offload X can differ between queues.

We can know the hardware limitations by filling in this information at PMD init.

> Again, if we don't allow enabling/disabling offloads for a particular queue,
> why bother with updating the rx_queue_setup() API at all?

I do not understand this question.

> > > - rte_eth_rxq_info can be extended to provide information about which RX_OFFLOADs
> > >   can be configured on a per-queue basis.
> >
> > Yes, the PMD should advertise its limitations, like being forced to
> > apply the same configuration to all its queues.
>
> Didn't get your last sentence.

I agree that the hardware limitations must be written in an ethdev structure.
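To make it concrete, here is a rough sketch of the idea. The per-queue
capability mask is purely illustrative: its real name and location
(rte_eth_dev_info, rte_eth_rxq_info, ...) is exactly what has to be decided.

#include <errno.h>
#include <stdint.h>

#include <rte_ethdev.h>

/* Illustration only: rx_queue_offload_capa is a made-up mask that a PMD
 * would fill at init time, next to the existing rx_offload_capa, listing
 * the RX offloads which are allowed to differ from queue to queue.
 */
static uint64_t rx_queue_offload_capa;

static int
check_rx_offload(uint8_t port_id, uint64_t requested)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	if ((dev_info.rx_offload_capa & requested) != requested)
		return -ENOTSUP;	/* not supported by the port at all */

	if ((rx_queue_offload_capa & requested) != requested)
		return 1;	/* supported, but must be the same on every queue */

	return 0;		/* can be enabled/disabled per queue */
}

A PMD which cannot toggle VLAN_FILTER per queue would simply leave
DEV_RX_OFFLOAD_VLAN_FILTER out of that mask, and the application then knows
it has to apply it to the whole port.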