From: Thomas Monjalon <thomas@monjalon.net>
To: Ferruh Yigit <ferruh.yigit@xilinx.com>, "dev@dpdk.org" <dev@dpdk.org>,
 Ian Stokes <ian.stokes@intel.com>, David Marchand <david.marchand@redhat.com>,
 Chaoyong He <chaoyong.he@corigine.com>
Cc: oss-drivers <oss-drivers@corigine.com>,
 Niklas Soderlund <niklas.soderlund@corigine.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: Re: [PATCH v9 05/12] net/nfp: add flower PF setup logic
Date: Wed, 21 Sep 2022 09:35:43 +0200
Message-ID: <831874198.0ifERbkFSE@thomas>
In-Reply-To: <SJ0PR13MB55452B2239840826758AC73A9E4F9@SJ0PR13MB5545.namprd13.prod.outlook.com>
References: <1663238669-12244-1-git-send-email-chaoyong.he@corigine.com>
 <cea1de8b-37ef-f098-16ec-fbd604094c18@xilinx.com>
 <SJ0PR13MB55452B2239840826758AC73A9E4F9@SJ0PR13MB5545.namprd13.prod.outlook.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

I don't understand your logic fully,
but I understand you need special code to make your hardware work with OvS,
meaning:
	- OvS must have a special handling for your HW
	- other applications won't work
Tell me if I misunderstand,
but I feel we should not accept this patch;
there is probably a better way to manage the specifics of your HW.

You said "NFP PMD can work with up to 8 ports on the same PF device."
Let's imagine you have 8 ports for 1 PF device.
Do you allocate 8 ethdev ports?
If yes, then each ethdev should do the internal work,
and nothing is needed at application level.


21/09/2022 04:50, Chaoyong He:
> > On 9/15/2022 11:44 AM, Chaoyong He wrote:
> > Hi Chaoyong,
> > 
> > Again, a similar comment to previous versions: what I understand is that
> > this new flower FW supports HW flow filtering, and the intended use case
> > is OvS HW acceleration.
> > But does the DPDK driver need to know OvS data structures like "struct
> > dp_packet"? Can it be transparent to the application? I am sure there are
> > other devices offloading some OvS tasks to HW.
> > 
> > @Ian, @David,
> > 
> > Can you please comment on above usage, do you guys see any way to
> > escape from OvS specific code in the driver?
> 
> Firstly, I'll explain why we must include some OvS-specific code in the driver.
> If we don't set `pkt->source = 3`, OvS will coredump like this:
> ```
> (gdb) bt
> #0  0x00007fe1d48fd387 in raise () from /lib64/libc.so.6
> #1  0x00007fe1d48fea78 in abort () from /lib64/libc.so.6
> #2  0x00007fe1d493ff67 in __libc_message () from /lib64/libc.so.6
> #3  0x00007fe1d4948329 in _int_free () from /lib64/libc.so.6
> #4  0x000000000049c006 in dp_packet_uninit (b=0x1f262db80) at lib/dp-packet.c:135
> #5  0x000000000061440a in dp_packet_delete (b=0x1f262db80) at lib/dp-packet.h:261
> #6  0x0000000000619aa0 in dpdk_copy_batch_to_mbuf (netdev=0x1f0a04a80, batch=0x7fe1b40050c0) at lib/netdev-dpdk.c:274
> #7  0x0000000000619b46 in netdev_dpdk_common_send (netdev=0x1f0a04a80, batch=0x7fe1b40050c0, stats=0x7fe1be7321f0) at
> #8  0x000000000061a0ba in netdev_dpdk_eth_send (netdev=0x1f0a04a80, qid=0, batch=0x7fe1b40050c0, concurrent_txq=true)
> #9  0x00000000004fbd10 in netdev_send (netdev=0x1f0a04a80, qid=0, batch=0x7fe1b40050c0, concurrent_txq=true) at lib/n
> #10 0x00000000004aa663 in dp_netdev_pmd_flush_output_on_port (pmd=0x7fe1be735010, p=0x7fe1b4005090) at lib/dpif-netde
> #11 0x00000000004aa85d in dp_netdev_pmd_flush_output_packets (pmd=0x7fe1be735010, force=false) at lib/dpif-netdev.c:5
> #12 0x00000000004aaaef in dp_netdev_process_rxq_port (pmd=0x7fe1be735010, rxq=0x16f3f80, port_no=3) at lib/dpif-netde
> #13 0x00000000004af17a in pmd_thread_main (f_=0x7fe1be735010) at lib/dpif-netdev.c:6958
> #14 0x000000000057da80 in ovsthread_wrapper (aux_=0x1608b30) at lib/ovs-thread.c:422
> #15 0x00007fe1d51a6ea5 in start_thread () from /lib64/libpthread.so.0
> #16 0x00007fe1d49c5b0d in clone () from /lib64/libc.so.6
> ```
> The logic in function `dp_packet_delete()` runs into the wrong branch.
> 
> Then, why does only our PMD need to do this while other PMDs don't?
> Generally, it depends heavily on the hardware.
> 
> The Netronome Network Flow Processor 4xxx (NFP-4xxx) card is the target of this patch series.
> It has only one PF but 2 physical ports, and the NFP PMD can work with up to 8 ports on the same PF device.
> Other PMDs' hardware seems to be all 'one PF <--> one physical port'.
> 
> For the OvS use case, we should add the representor port of the 'physical port' to the bridge, not the representor port of the PF as other PMDs do.
> 
> We use a two-layer poll mode architecture (other PMDs use a simple poll mode architecture).
> In the RX direction:
> 1. When the physical port or VF receives pkts, the firmware prepends meta-data (indicating the input port) to the pkt.
> 2. We use the PF vNIC as a multiplexer, which keeps polling pkts from the firmware.
> 3. The PF vNIC parses the meta-data and enqueues the pkt into the corresponding rte_ring of the representor port of the physical port or VF.
> 4. OvS polls pkts from the RX function of the representor port, which dequeues pkts from the rte_ring.
> In the TX direction:
> 1. OvS sends pkts through the TX function of the representor port.
> 2. The representor port prepends meta-data (indicating the output port) to the pkt and sends it to the firmware through queue 0 of the PF vNIC.
> 3. The firmware parses the meta-data and forwards the pkt to the corresponding physical port or VF.
> 
> So OvS won't create the mempool for us, and we must create it ourselves for the PF vNIC to use.
> 
> Hopefully I have explained things clearly. Thanks.