DPDK patches and discussions
From: Chaoyong He <chaoyong.he@corigine.com>
To: Ferruh Yigit <ferruh.yigit@xilinx.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: oss-drivers <oss-drivers@corigine.com>,
	Niklas Soderlund <niklas.soderlund@corigine.com>
Subject: RE: [PATCH v8 05/12] net/nfp: add flower PF setup logic
Date: Fri, 9 Sep 2022 02:36:24 +0000	[thread overview]
Message-ID: <SJ0PR13MB5545F4961279FACA5D8985A59E439@SJ0PR13MB5545.namprd13.prod.outlook.com> (raw)
In-Reply-To: <49888fb1-e16a-9d55-9855-7e7807922dea@xilinx.com>

> On 9/8/2022 9:44 AM, Chaoyong He wrote:
> > Adds the vNIC initialization logic for the flower PF vNIC. The flower
> > firmware exposes this vNIC for the purposes of fallback traffic in the
> > switchdev use-case.
> >
> > Adds minimal dev_ops for this PF device. Because the device is being
> > exposed externally to DPDK, it should also be configured using DPDK
> > helpers like rte_eth_dev_configure(). For these helpers to work, the
> > flower logic needs to implement a minimal set of dev_ops.
> >
> > Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
> > Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> > ---
> >   drivers/net/nfp/flower/nfp_flower.c            | 398 ++++++++++++++++++++++++-
> >   drivers/net/nfp/flower/nfp_flower.h            |   6 +
> >   drivers/net/nfp/flower/nfp_flower_ovs_compat.h |  37 +++
> 
> Can you please detail why an OVS-specific header is required? Having
> application-specific code in a PMD can be a sign of a design issue, so can
> you please explain in more detail what it does?
> 

Basically, there are two layers of polling involved in moving a packet from the firmware to OVS.

When our card, running the flower application firmware, receives a packet and finds that it
cannot be offloaded, it records the input port in a field of the packet that we call metadata.

There is an rte_ring for each representor port.

We use the PF device as a multiplexer, which keeps polling packets from the firmware.
Based on the metadata, it enqueues each packet into the rte_ring of the corresponding representor port.
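
To make that concrete, here is a minimal sketch of the demultiplexing loop; the context struct, the ring array, and the metadata helper are hypothetical names for illustration, not the actual patch code:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* Hypothetical per-application context for the sketch below. */
struct flower_demux_ctx {
	uint16_t pf_port_id;              /* PF vNIC acting as the multiplexer */
	struct rte_ring *repr_ring[64];   /* one rte_ring per representor port */
};

/* Placeholder for reading the input port the firmware stored in the packet
 * metadata; the real layout is firmware specific. */
static uint32_t
flower_pkt_in_port(struct rte_mbuf *pkt)
{
	return pkt->port;   /* stand-in only */
}

/* First polling layer: drain fallback packets from the firmware through the
 * PF vNIC and fan them out to the representor rings. */
static void
flower_pf_demux_poll(struct flower_demux_ctx *ctx)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(ctx->pf_port_id, 0, pkts, 32);
	for (i = 0; i < nb; i++) {
		uint32_t in_port = flower_pkt_in_port(pkts[i]);

		/* Enqueue to the matching representor; drop if the ring is full. */
		if (rte_ring_enqueue(ctx->repr_ring[in_port], pkts[i]) != 0)
			rte_pktmbuf_free(pkts[i]);
	}
}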

On the OVS side, OVS keeps trying to dequeue packets from the rte_ring of the representor port.
Once it gets a packet, OVS runs its own logic and treats the packet as a `struct dp_packet`.
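
That second polling layer is essentially just a ring dequeue inside the representor's Rx burst function; a rough sketch, with illustrative struct and function names:

#include <rte_mbuf.h>
#include <rte_ring.h>

/* Illustrative per-representor Rx queue holding the ring filled by the
 * PF multiplexer. */
struct repr_rxq {
	struct rte_ring *ring;
};

/* OVS calls rte_eth_rx_burst() on the representor port, which ends up here
 * and only drains the per-port ring. */
static uint16_t
repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	struct repr_rxq *rxq = rx_queue;

	return (uint16_t)rte_ring_dequeue_burst(rxq->ring, (void **)rx_pkts,
			nb_pkts, NULL);
}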

So we copy the definition of `struct dp_packet` from OVS to prevent coredumps caused by out-of-range memory reads/writes.

Another possible way is to define a large enough mbuf private length through a macro, so that we can avoid copying this structure definition from OVS.
Is that the right way?
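
If that alternative is preferred, the idea would be to reserve the extra space when creating the PF mbuf pool, roughly like the sketch below; the macro name and the size are assumptions, not values from the patch:

#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Assumed value: would have to be at least sizeof(struct dp_packet) minus
 * sizeof(struct rte_mbuf), so the PMD never writes past the memory actually
 * reserved behind each mbuf. */
#define NFP_FLOWER_MBUF_PRIV_SIZE 128

static struct rte_mempool *
flower_pf_mp_create_with_priv(void)
{
	return rte_pktmbuf_pool_create("flower_pf_mbuf_pool", 4096, 64,
			NFP_FLOWER_MBUF_PRIV_SIZE,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}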

> <...>
> 
> > +static int
> > +nfp_flower_init_pf_vnic(struct nfp_net_hw *hw)
> > +{
> > +	int ret;
> > +	uint16_t i;
> > +	uint16_t n_txq;
> > +	uint16_t n_rxq;
> > +	uint16_t port_id;
> > +	unsigned int numa_node;
> > +	struct rte_mempool *mp;
> > +	struct nfp_pf_dev *pf_dev;
> > +	struct rte_eth_dev *eth_dev;
> > +	struct nfp_app_fw_flower *app_fw_flower;
> > +
> > +	static const struct rte_eth_conf port_conf = {
> > +		.rxmode = {
> > +			.mq_mode  = RTE_ETH_MQ_RX_RSS,
> > +			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
> > +		},
> > +		.txmode = {
> > +			.mq_mode = RTE_ETH_MQ_TX_NONE,
> > +		},
> > +	};
> > +
> > +	/* Set up some pointers here for ease of use */
> > +	pf_dev = hw->pf_dev;
> > +	app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(pf_dev->app_fw_priv);
> > +
> > +	/*
> > +	 * Perform the "common" part of setting up a flower vNIC.
> > +	 * Mostly reading configuration from hardware.
> > +	 */
> > +	ret = nfp_flower_init_vnic_common(hw, "pf_vnic");
> > +	if (ret != 0)
> > +		goto done;
> > +
> > +	hw->eth_dev = rte_eth_dev_allocate("nfp_pf_vnic");
> > +	if (hw->eth_dev == NULL) {
> > +		ret = -ENOMEM;
> > +		goto done;
> > +	}
> > +
> > +	/* Grab the pointer to the newly created rte_eth_dev here */
> > +	eth_dev = hw->eth_dev;
> > +
> > +	numa_node = rte_socket_id();
> > +
> > +	/* Fill in some of the eth_dev fields */
> > +	eth_dev->device = &pf_dev->pci_dev->device;
> > +	eth_dev->data->dev_private = hw;
> > +
> > +	/* Create a mbuf pool for the PF */
> > +	app_fw_flower->pf_pktmbuf_pool = nfp_flower_pf_mp_create();
> > +	if (app_fw_flower->pf_pktmbuf_pool == NULL) {
> > +		ret = -ENOMEM;
> > +		goto port_release;
> > +	}
> > +
> > +	mp = app_fw_flower->pf_pktmbuf_pool;
> > +
> > +	/* Add Rx/Tx functions */
> > +	eth_dev->dev_ops = &nfp_flower_pf_vnic_ops;
> > +
> > +	/* PF vNIC gets a random MAC */
> > +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
> > +	if (eth_dev->data->mac_addrs == NULL) {
> > +		ret = -ENOMEM;
> > +		goto mempool_cleanup;
> > +	}
> > +
> > +	rte_eth_random_addr(eth_dev->data->mac_addrs->addr_bytes);
> > +	rte_eth_dev_probing_finish(eth_dev);
> > +
> > +	/* Configure the PF device now */
> > +	n_rxq = hw->max_rx_queues;
> > +	n_txq = hw->max_tx_queues;
> > +	port_id = hw->eth_dev->data->port_id;
> > +
> > +	ret = rte_eth_dev_configure(port_id, n_rxq, n_txq, &port_conf);
> 
> I am still not sure about the PMD calling 'rte_eth_dev_configure()'; can you
> please give more details on what specific configuration is expected from that call?

The main configuration we need is the number of Rx/Tx queues.
Should we use the internal API `eth_dev_rx/tx_queue_config` instead?
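
For reference, the only part of that configuration the flower PF vNIC really needs is the allocation of the Rx/Tx queue pointer arrays; a rough sketch of doing just that inside the PMD (illustrative only, it simply mirrors what the ethdev layer does internally):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>

static int
flower_pf_alloc_queue_arrays(struct rte_eth_dev *dev,
		uint16_t n_rxq, uint16_t n_txq)
{
	/* Allocate the per-queue pointer arrays that queue_setup() later fills. */
	dev->data->rx_queues = rte_calloc("flower_rxq_ptrs", n_rxq,
			sizeof(void *), RTE_CACHE_LINE_SIZE);
	if (dev->data->rx_queues == NULL)
		return -ENOMEM;
	dev->data->nb_rx_queues = n_rxq;

	dev->data->tx_queues = rte_calloc("flower_txq_ptrs", n_txq,
			sizeof(void *), RTE_CACHE_LINE_SIZE);
	if (dev->data->tx_queues == NULL) {
		rte_free(dev->data->rx_queues);
		dev->data->rx_queues = NULL;
		return -ENOMEM;
	}
	dev->data->nb_tx_queues = n_txq;

	return 0;
}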


Thread overview:
2022-09-08  8:44 [PATCH v8 00/12] preparation for the rte_flow offload of nfp PMD Chaoyong He
2022-09-08  8:44 ` [PATCH v8 01/12] net/nfp: move app specific attributes to own struct Chaoyong He
2022-09-08 16:24   ` Ferruh Yigit
2022-09-09  1:49     ` Chaoyong He
2022-09-09  5:43       ` Chaoyong He
2022-09-09 12:13         ` Ferruh Yigit
2022-09-08  8:44 ` [PATCH v8 02/12] net/nfp: simplify initialization and remove dead code Chaoyong He
2022-09-08  8:44 ` [PATCH v8 03/12] net/nfp: move app specific init logic to own function Chaoyong He
2022-09-08  8:44 ` [PATCH v8 04/12] net/nfp: add initial flower firmware support Chaoyong He
2022-09-08 16:24   ` Ferruh Yigit
2022-09-08  8:44 ` [PATCH v8 05/12] net/nfp: add flower PF setup logic Chaoyong He
2022-09-08 16:24   ` Ferruh Yigit
2022-09-09  2:36     ` Chaoyong He [this message]
2022-09-09 12:13       ` Ferruh Yigit
2022-09-14  9:20         ` Thomas Monjalon
2022-09-09 12:19       ` Ferruh Yigit
2022-09-13  6:51         ` Chaoyong He
2022-09-13  9:08           ` Ferruh Yigit
2022-09-14  1:23             ` Chaoyong He
2022-09-14  9:25               ` Thomas Monjalon
2022-09-08  8:44 ` [PATCH v8 06/12] net/nfp: add flower PF related routines Chaoyong He
2022-09-08  8:44 ` [PATCH v8 07/12] net/nfp: add flower ctrl VNIC related logics Chaoyong He
2022-09-08  8:44 ` [PATCH v8 08/12] net/nfp: move common rxtx function for flower use Chaoyong He
2022-09-08  8:44 ` [PATCH v8 09/12] net/nfp: add flower ctrl VNIC rxtx logic Chaoyong He
2022-09-08  8:45 ` [PATCH v8 10/12] net/nfp: add flower representor framework Chaoyong He
2022-09-08  8:45 ` [PATCH v8 11/12] net/nfp: move rxtx function to header file Chaoyong He
2022-09-08  8:45 ` [PATCH v8 12/12] net/nfp: add flower PF rxtx logic Chaoyong He
