DPDK patches and discussions
* [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor
@ 2020-09-09 13:56 Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 1/5] net/enic: extend vnic dev API for VF representors Hyong Youb Kim
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim

This series adds VF representors to the driver. It enables
single-queue representors and implements enough flow features to run
OVS-DPDK offload for the default VLAN+MAC based switching.

The flow API handlers and devcmd functions (firmware commands) are now
aware of representors. Representors reserve PF Tx/Rx queues for their
implicit paths to/from VFs. Packet forwarding rules for these implicit
paths are set up using the firmware's Flow Manager (flowman), which is
also used for the rte_flow API.

Thanks.
-Hyong

Hyong Youb Kim (5):
  net/enic: extend vnic dev API for VF representors
  net/enic: add minimal VF representor
  net/enic: add single-queue Tx and Rx to VF representor
  net/enic: extend flow handler to support VF representors
  net/enic: enable flow API for VF representor

 doc/guides/rel_notes/release_20_11.rst |   4 +
 drivers/net/enic/base/vnic_dev.c       | 112 +++-
 drivers/net/enic/base/vnic_dev.h       |   4 +
 drivers/net/enic/enic.h                | 116 ++++
 drivers/net/enic/enic_ethdev.c         | 107 +++-
 drivers/net/enic/enic_fm_flow.c        | 487 +++++++++++++++--
 drivers/net/enic/enic_main.c           | 114 +++-
 drivers/net/enic/enic_vf_representor.c | 729 +++++++++++++++++++++++++
 drivers/net/enic/meson.build           |   1 +
 9 files changed, 1602 insertions(+), 72 deletions(-)
 create mode 100644 drivers/net/enic/enic_vf_representor.c

-- 
2.26.2



* [dpdk-dev] [PATCH 1/5] net/enic: extend vnic dev API for VF representors
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
@ 2020-09-09 13:56 ` Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 2/5] net/enic: add minimal VF representor Hyong Youb Kim
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim, John Daley

VF representors need to proxy devcmd through the PF vnic_dev
instance. Extend vnic_dev to accommodate them as follows.

1. Add vnic_vf_rep_register()
A VF representor creates its own vnic_dev instance via this function
and saves the VF ID. When performing devcmd, vnic_dev uses the saved
VF ID to proxy the devcmd through the PF vnic_dev instance.

2. Add vnic_register_lock()
As the PF and VF representors appear as independent ports to the
application, its threads may invoke APIs on them simultaneously,
leading to race conditions on the PF vnic_dev. For example, thread A
may query stats on the PF port while thread B queries stats on a VF
representor.

The PF port invokes this function to provide a lock to vnic_dev. This
lock serializes devcmd calls from the PF and VF representors (see the
usage sketch after this list).

3. Add utility functions to assist VF representor settings
vnic_dev_mtu() and vnic_dev_uif() retrieve vnic MTU and UIF number
(uplink index), respectively.
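
For reference, the pieces fit together roughly as follows (a sketch;
the enic-side names come from patch 2 of this series):

	/* PF, at probe time: provide the devcmd serialization lock */
	rte_spinlock_init(&enic->devcmd_lock);
	vnic_register_lock(enic->vdev, lock_devcmd, unlock_devcmd);

	/* VF representor, at init time: per-VF proxy vnic_dev */
	vf->enic.vdev = vnic_vf_rep_register(&vf->enic, pf->vdev, vf->vf_id);

Any devcmd issued on the representor's vnic_dev then takes the PF
lock, proxies the command by the saved VF index (CMD_PROXY_BY_INDEX),
and releases the lock on exit.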

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
 drivers/net/enic/base/vnic_dev.c | 112 ++++++++++++++++++++++++++++++-
 drivers/net/enic/base/vnic_dev.h |   4 ++
 2 files changed, 113 insertions(+), 3 deletions(-)

diff --git a/drivers/net/enic/base/vnic_dev.c b/drivers/net/enic/base/vnic_dev.c
index ac03817f4..aaca07ca6 100644
--- a/drivers/net/enic/base/vnic_dev.c
+++ b/drivers/net/enic/base/vnic_dev.c
@@ -61,6 +61,16 @@ struct vnic_dev {
 	void (*free_consistent)(void *priv,
 		size_t size, void *vaddr,
 		dma_addr_t dma_handle);
+	/*
+	 * Used to serialize devcmd access, currently from PF and its
+	 * VF representors. When there are no representors, the lock
+	 * is not used.
+	 */
+	int locked;
+	void (*lock)(void *priv);
+	void (*unlock)(void *priv);
+	struct vnic_dev *pf_vdev;
+	int vf_id;
 };
 
 #define VNIC_MAX_RES_HDR_SIZE \
@@ -84,6 +94,14 @@ void vnic_register_cbacks(struct vnic_dev *vdev,
 	vdev->free_consistent = free_consistent;
 }
 
+void vnic_register_lock(struct vnic_dev *vdev, void (*lock)(void *priv),
+	void (*unlock)(void *priv))
+{
+	vdev->lock = lock;
+	vdev->unlock = unlock;
+	vdev->locked = 0;
+}
+
 static int vnic_dev_discover_res(struct vnic_dev *vdev,
 	struct vnic_dev_bar *bar, unsigned int num_bars)
 {
@@ -410,12 +428,39 @@ static int vnic_dev_cmd_no_proxy(struct vnic_dev *vdev,
 	return err;
 }
 
+void vnic_dev_cmd_proxy_by_index_start(struct vnic_dev *vdev, uint16_t index)
+{
+	vdev->proxy = PROXY_BY_INDEX;
+	vdev->proxy_index = index;
+}
+
+void vnic_dev_cmd_proxy_end(struct vnic_dev *vdev)
+{
+	vdev->proxy = PROXY_NONE;
+	vdev->proxy_index = 0;
+}
+
 int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
 	uint64_t *a0, uint64_t *a1, int wait)
 {
 	uint64_t args[2];
+	bool vf_rep;
+	int vf_idx;
 	int err;
 
+	vf_rep = false;
+	if (vdev->pf_vdev) {
+		vf_rep = true;
+		vf_idx = vdev->vf_id;
+		/* Everything below assumes PF vdev */
+		vdev = vdev->pf_vdev;
+	}
+	if (vdev->lock)
+		vdev->lock(vdev->priv);
+	/* For VF representor, proxy devcmd to VF index */
+	if (vf_rep)
+		vnic_dev_cmd_proxy_by_index_start(vdev, vf_idx);
+
 	args[0] = *a0;
 	args[1] = *a1;
 	memset(vdev->args, 0, sizeof(vdev->args));
@@ -435,6 +480,10 @@ int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
 		break;
 	}
 
+	if (vf_rep)
+		vnic_dev_cmd_proxy_end(vdev);
+	if (vdev->unlock)
+		vdev->unlock(vdev->priv);
 	if (err == 0) {
 		*a0 = args[0];
 		*a1 = args[1];
@@ -446,17 +495,41 @@ int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
 int vnic_dev_cmd_args(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
 		      uint64_t *args, int nargs, int wait)
 {
+	bool vf_rep;
+	int vf_idx;
+	int err;
+
+	vf_rep = false;
+	if (vdev->pf_vdev) {
+		vf_rep = true;
+		vf_idx = vdev->vf_id;
+		vdev = vdev->pf_vdev;
+	}
+	if (vdev->lock)
+		vdev->lock(vdev->priv);
+	if (vf_rep)
+		vnic_dev_cmd_proxy_by_index_start(vdev, vf_idx);
+
 	switch (vdev->proxy) {
 	case PROXY_BY_INDEX:
-		return vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_INDEX, cmd,
+		err = vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_INDEX, cmd,
 				args, nargs, wait);
+		break;
 	case PROXY_BY_BDF:
-		return vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_BDF, cmd,
+		err = vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_BDF, cmd,
 				args, nargs, wait);
+		break;
 	case PROXY_NONE:
 	default:
-		return vnic_dev_cmd_no_proxy(vdev, cmd, args, nargs, wait);
+		err = vnic_dev_cmd_no_proxy(vdev, cmd, args, nargs, wait);
+		break;
 	}
+
+	if (vf_rep)
+		vnic_dev_cmd_proxy_end(vdev);
+	if (vdev->unlock)
+		vdev->unlock(vdev->priv);
+	return err;
 }
 
 int vnic_dev_fw_info(struct vnic_dev *vdev,
@@ -1012,6 +1085,22 @@ uint32_t vnic_dev_port_speed(struct vnic_dev *vdev)
 	return vdev->notify_copy.port_speed;
 }
 
+uint32_t vnic_dev_mtu(struct vnic_dev *vdev)
+{
+	if (!vnic_dev_notify_ready(vdev))
+		return 0;
+
+	return vdev->notify_copy.mtu;
+}
+
+uint32_t vnic_dev_uif(struct vnic_dev *vdev)
+{
+	if (!vnic_dev_notify_ready(vdev))
+		return 0;
+
+	return vdev->notify_copy.uif;
+}
+
 uint32_t vnic_dev_intr_coal_timer_usec_to_hw(struct vnic_dev *vdev,
 					     uint32_t usec)
 {
@@ -1100,6 +1189,23 @@ struct vnic_dev *vnic_dev_register(struct vnic_dev *vdev,
 	return NULL;
 }
 
+struct vnic_dev *vnic_vf_rep_register(void *priv, struct vnic_dev *pf_vdev,
+	int vf_id)
+{
+	struct vnic_dev *vdev;
+
+	vdev = (struct vnic_dev *)rte_zmalloc("enic-vf-rep-vdev",
+				sizeof(struct vnic_dev), RTE_CACHE_LINE_SIZE);
+	if (!vdev)
+		return NULL;
+	vdev->priv = priv;
+	vdev->pf_vdev = pf_vdev;
+	vdev->vf_id = vf_id;
+	vdev->alloc_consistent = pf_vdev->alloc_consistent;
+	vdev->free_consistent = pf_vdev->free_consistent;
+	return vdev;
+}
+
 /*
  *  vnic_dev_classifier: Add/Delete classifier entries
  *  @vdev: vdev of the device
diff --git a/drivers/net/enic/base/vnic_dev.h b/drivers/net/enic/base/vnic_dev.h
index 02e19c0b8..30ba57bfc 100644
--- a/drivers/net/enic/base/vnic_dev.h
+++ b/drivers/net/enic/base/vnic_dev.h
@@ -80,6 +80,8 @@ void vnic_register_cbacks(struct vnic_dev *vdev,
 	void (*free_consistent)(void *priv,
 		size_t size, void *vaddr,
 		dma_addr_t dma_handle));
+void vnic_register_lock(struct vnic_dev *vdev, void (*lock)(void *priv),
+	void (*unlock)(void *priv));
 void __iomem *vnic_dev_get_res(struct vnic_dev *vdev, enum vnic_res_type type,
 	unsigned int index);
 dma_addr_t vnic_dev_get_res_bus_addr(struct vnic_dev *vdev,
@@ -172,6 +174,8 @@ struct vnic_dev *vnic_dev_register(struct vnic_dev *vdev,
 	void *priv, struct rte_pci_device *pdev, struct vnic_dev_bar *bar,
 	unsigned int num_bars);
 struct rte_pci_device *vnic_dev_get_pdev(struct vnic_dev *vdev);
+struct vnic_dev *vnic_vf_rep_register(void *priv, struct vnic_dev *pf_vdev,
+	int vf_id);
 int vnic_dev_alloc_stats_mem(struct vnic_dev *vdev);
 int vnic_dev_cmd_init(struct vnic_dev *vdev, int fallback);
 int vnic_dev_get_size(void);
-- 
2.26.2



* [dpdk-dev] [PATCH 2/5] net/enic: add minimal VF representor
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 1/5] net/enic: extend vnic dev API for VF representors Hyong Youb Kim
@ 2020-09-09 13:56 ` Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 3/5] net/enic: add single-queue Tx and Rx to " Hyong Youb Kim
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim, John Daley

Add a minimal VF representor, without Tx/Rx and flow API support.

1. Enable the standard devarg 'representor'
When the devarg is specified, create VF representor ports (see the
example after this list).

2. Initialize flowman early during PF probe
Representors require the flowman API from the firmware. Initialize it
before creating VF representors, so probe can detect flowman support
and fail if it is not available.

3. Add enic_fm_allocate_switch_domain() to allocate switch domain ID
PFs and VFs on the same VIC adapter can forward packets to each other,
so the switch domain is the physical adapter.

4. Create a vnic_dev lock to serialize concurrent devcmd calls
PF and VF representor ports may invoke devcmd (e.g. dump stats)
simultaneously. As they all share a single PF devcmd instance in the
firmware, use a lock to serialize devcmd calls.
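
For example, the PF port and three representor ports can be created by
passing the devarg as part of the PF's PCI devargs (the PCI address
here is hypothetical):

	testpmd -w 0000:0b:00.0,representor=[0-2] -- -i

This creates the PF port plus representors for VFs 0, 1, and 2.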

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
 drivers/net/enic/enic.h                |  34 ++
 drivers/net/enic/enic_ethdev.c         | 107 ++++++-
 drivers/net/enic/enic_fm_flow.c        |  55 +++-
 drivers/net/enic/enic_main.c           |  41 ++-
 drivers/net/enic/enic_vf_representor.c | 425 +++++++++++++++++++++++++
 drivers/net/enic/meson.build           |   1 +
 6 files changed, 652 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/enic/enic_vf_representor.c

diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index a9545c015..929ea90a9 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -210,8 +210,38 @@ struct enic {
 
 	/* Flow manager API */
 	struct enic_flowman *fm;
+	/* switchdev */
+	uint8_t switchdev_mode;
+	uint16_t switch_domain_id;
+	uint16_t max_vf_id;
+	/*
+	 * Lock to serialize devcmds from PF, VF representors as they all share
+	 * the same PF devcmd instance in firmware.
+	 */
+	rte_spinlock_t devcmd_lock;
+};
+
+struct enic_vf_representor {
+	struct enic enic;
+	struct vnic_enet_config config;
+	struct rte_eth_dev *eth_dev;
+	struct rte_ether_addr mac_addr;
+	struct rte_pci_addr bdf;
+	struct enic *pf;
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	int allmulti;
+	int promisc;
 };
 
+#define VF_ENIC_TO_VF_REP(vf_enic) \
+	container_of(vf_enic, struct enic_vf_representor, enic)
+
+static inline int enic_is_vf_rep(struct enic *enic)
+{
+	return !!(enic->rte_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR);
+}
+
 /* Compute ethdev's max packet size from MTU */
 static inline uint32_t enic_mtu_to_max_rx_pktlen(uint32_t mtu)
 {
@@ -364,6 +394,10 @@ void enic_pick_rx_handler(struct rte_eth_dev *eth_dev);
 void enic_pick_tx_handler(struct rte_eth_dev *eth_dev);
 void enic_fdir_info(struct enic *enic);
 void enic_fdir_info_get(struct enic *enic, struct rte_eth_fdir_info *stats);
+int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params);
+int enic_vf_representor_uninit(struct rte_eth_dev *ethdev);
+int enic_fm_allocate_switch_domain(struct enic *pf);
 extern const struct rte_flow_ops enic_flow_ops;
 extern const struct rte_flow_ops enic_fm_flow_ops;
+
 #endif /* _ENIC_H_ */
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index ca75919ee..780f746a2 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -68,6 +68,7 @@ static const struct vic_speed_capa {
 #define ENIC_DEVARG_ENABLE_AVX2_RX "enable-avx2-rx"
 #define ENIC_DEVARG_GENEVE_OPT "geneve-opt"
 #define ENIC_DEVARG_IG_VLAN_REWRITE "ig-vlan-rewrite"
+#define ENIC_DEVARG_REPRESENTOR "representor"
 
 RTE_LOG_REGISTER(enic_pmd_logtype, pmd.net.enic, INFO);
 
@@ -1236,6 +1237,7 @@ static int enic_check_devargs(struct rte_eth_dev *dev)
 		ENIC_DEVARG_ENABLE_AVX2_RX,
 		ENIC_DEVARG_GENEVE_OPT,
 		ENIC_DEVARG_IG_VLAN_REWRITE,
+		ENIC_DEVARG_REPRESENTOR,
 		NULL};
 	struct enic *enic = pmd_priv(dev);
 	struct rte_kvargs *kvlist;
@@ -1266,10 +1268,9 @@ static int enic_check_devargs(struct rte_eth_dev *dev)
 	return 0;
 }
 
-/* Initialize the driver
- * It returns 0 on success.
- */
-static int eth_enicpmd_dev_init(struct rte_eth_dev *eth_dev)
+/* Initialize the driver for PF */
+static int eth_enic_dev_init(struct rte_eth_dev *eth_dev,
+			     void *init_params __rte_unused)
 {
 	struct rte_pci_device *pdev;
 	struct rte_pci_addr *addr;
@@ -1277,7 +1278,6 @@ static int eth_enicpmd_dev_init(struct rte_eth_dev *eth_dev)
 	int err;
 
 	ENICPMD_FUNC_TRACE();
-
 	eth_dev->dev_ops = &enicpmd_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &enic_recv_pkts;
 	eth_dev->tx_pkt_burst = &enic_xmit_pkts;
@@ -1305,19 +1305,108 @@ static int eth_enicpmd_dev_init(struct rte_eth_dev *eth_dev)
 	err = enic_check_devargs(eth_dev);
 	if (err)
 		return err;
-	return enic_probe(enic);
+	err = enic_probe(enic);
+	if (!err && enic->fm) {
+		err = enic_fm_allocate_switch_domain(enic);
+		if (err)
+			ENICPMD_LOG(ERR, "failed to allocate switch domain id");
+	}
+	return err;
+}
+
+static int eth_enic_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct enic *enic = pmd_priv(eth_dev);
+	int err;
+
+	ENICPMD_FUNC_TRACE();
+	eth_dev->device = NULL;
+	eth_dev->intr_handle = NULL;
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+	err = rte_eth_switch_domain_free(enic->switch_domain_id);
+	if (err)
+		ENICPMD_LOG(WARNING, "failed to free switch domain: %d", err);
+	return 0;
 }
 
 static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct enic),
-		eth_enicpmd_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *pf_ethdev;
+	struct enic *pf_enic;
+	int i, retval;
+
+	ENICPMD_FUNC_TRACE();
+	if (pci_dev->device.devargs) {
+		retval = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+				&eth_da);
+		if (retval)
+			return retval;
+	}
+	retval = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+		sizeof(struct enic),
+		eth_dev_pci_specific_init, pci_dev,
+		eth_enic_dev_init, NULL);
+	if (retval || eth_da.nb_representor_ports < 1)
+		return retval;
+
+	/* Probe VF representor */
+	pf_ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+	/* Representors require flowman */
+	pf_enic = pmd_priv(pf_ethdev);
+	if (pf_enic->fm == NULL) {
+		ENICPMD_LOG(ERR, "VF representors require flowman");
+		return -ENOTSUP;
+	}
+	/*
+	 * For now representors imply switchdev, as firmware does not support
+	 * legacy mode SR-IOV
+	 */
+	pf_enic->switchdev_mode = 1;
+	/* Calculate max VF ID before initializing representor */
+	pf_enic->max_vf_id = 0;
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		pf_enic->max_vf_id = RTE_MAX(pf_enic->max_vf_id,
+					     eth_da.representor_ports[i]);
+	}
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct enic_vf_representor representor;
+
+		representor.vf_id = eth_da.representor_ports[i];
+		representor.switch_domain_id =
+			pmd_priv(pf_ethdev)->switch_domain_id;
+		representor.pf = pmd_priv(pf_ethdev);
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			pci_dev->device.name, eth_da.representor_ports[i]);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+			sizeof(struct enic_vf_representor), NULL, NULL,
+			enic_vf_representor_init, &representor);
+		if (retval) {
+			ENICPMD_LOG(ERR, "failed to create enic vf representor %s",
+				    name);
+			return retval;
+		}
+	}
+	return 0;
 }
 
 static int eth_enic_pci_remove(struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	struct rte_eth_dev *ethdev;
+
+	ENICPMD_FUNC_TRACE();
+	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!ethdev)
+		return -ENODEV;
+	if (ethdev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+		return rte_eth_dev_destroy(ethdev, enic_vf_representor_uninit);
+	else
+		return rte_eth_dev_destroy(ethdev, eth_enic_dev_uninit);
 }
 
 static struct rte_pci_driver rte_enic_pmd = {
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index cb08a9317..49eaefdec 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -1073,7 +1073,8 @@ enic_fm_find_vnic(struct enic *enic, const struct rte_pci_addr *addr,
 	args[1] = bdf;
 	rc = vnic_dev_flowman_cmd(enic->vdev, args, 2);
 	if (rc != 0) {
-		ENICPMD_LOG(ERR, "allocating counters rc=%d", rc);
+		/* Expected to fail if BDF is not on the adapter */
+		ENICPMD_LOG(DEBUG, "cannot find vnic handle: rc=%d", rc);
 		return rc;
 	}
 	*handle = args[0];
@@ -2522,6 +2523,58 @@ enic_fm_destroy(struct enic *enic)
 	enic->fm = NULL;
 }
 
+int
+enic_fm_allocate_switch_domain(struct enic *pf)
+{
+	const struct rte_pci_addr *cur_a, *prev_a;
+	struct rte_eth_dev *dev;
+	struct enic *cur, *prev;
+	uint16_t domain_id;
+	uint64_t vnic_h;
+	uint16_t pid;
+	int ret;
+
+	ENICPMD_FUNC_TRACE();
+	if (enic_is_vf_rep(pf))
+		return -EINVAL;
+	cur = pf;
+	cur_a = &RTE_ETH_DEV_TO_PCI(cur->rte_dev)->addr;
+	/* Go through ports and find another PF that is on the same adapter */
+	RTE_ETH_FOREACH_DEV(pid) {
+		dev = &rte_eth_devices[pid];
+		if (!dev_is_enic(dev))
+			continue;
+		if (dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			continue;
+		if (dev == cur->rte_dev)
+			continue;
+		/* dev is another PF. Is it on the same adapter? */
+		prev = pmd_priv(dev);
+		prev_a = &RTE_ETH_DEV_TO_PCI(dev)->addr;
+		if (!enic_fm_find_vnic(cur, prev_a, &vnic_h)) {
+			ENICPMD_LOG(DEBUG, "Port %u (PF BDF %x:%x:%x) and port %u (PF BDF %x:%x:%x domain %u) are on the same VIC",
+				cur->rte_dev->data->port_id,
+				cur_a->bus, cur_a->devid, cur_a->function,
+				dev->data->port_id,
+				prev_a->bus, prev_a->devid, prev_a->function,
+				prev->switch_domain_id);
+			cur->switch_domain_id = prev->switch_domain_id;
+			return 0;
+		}
+	}
+	ret = rte_eth_switch_domain_alloc(&domain_id);
+	if (ret) {
+		ENICPMD_LOG(WARNING, "failed to allocate switch domain: %d",
+			    ret);
+	}
+	cur->switch_domain_id = domain_id;
+	ENICPMD_LOG(DEBUG, "Port %u (PF BDF %x:%x:%x) is the 1st PF on the VIC. Allocated switch domain id %u",
+		    cur->rte_dev->data->port_id,
+		    cur_a->bus, cur_a->devid, cur_a->function,
+		    domain_id);
+	return ret;
+}
+
 const struct rte_flow_ops enic_fm_flow_ops = {
 	.validate = enic_fm_flow_validate,
 	.create = enic_fm_flow_create,
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 7942b0df6..9865642b2 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -608,7 +608,8 @@ int enic_enable(struct enic *enic)
 		dev_warning(enic, "Init of hash table for clsf failed."\
 			"Flow director feature will not work\n");
 
-	if (enic_fm_init(enic))
+	/* Initialize flowman if not already initialized during probe */
+	if (enic->fm == NULL && enic_fm_init(enic))
 		dev_warning(enic, "Init of flowman failed.\n");
 
 	for (index = 0; index < enic->rq_count; index++) {
@@ -1268,6 +1269,18 @@ int enic_setup_finish(struct enic *enic)
 {
 	enic_init_soft_stats(enic);
 
+	/* switchdev: enable promisc mode on PF */
+	if (enic->switchdev_mode) {
+		vnic_dev_packet_filter(enic->vdev,
+				       0 /* directed  */,
+				       0 /* multicast */,
+				       0 /* broadcast */,
+				       1 /* promisc   */,
+				       0 /* allmulti  */);
+		enic->promisc = 1;
+		enic->allmulti = 0;
+		return 0;
+	}
 	/* Default conf */
 	vnic_dev_packet_filter(enic->vdev,
 		1 /* directed  */,
@@ -1393,6 +1406,11 @@ int enic_set_vlan_strip(struct enic *enic)
 
 int enic_add_packet_filter(struct enic *enic)
 {
+	/* switchdev ignores packet filters */
+	if (enic->switchdev_mode) {
+		ENICPMD_LOG(DEBUG, " switchdev: ignore packet filter");
+		return 0;
+	}
 	/* Args -> directed, multicast, broadcast, promisc, allmulti */
 	return vnic_dev_packet_filter(enic->vdev, 1, 1, 1,
 		enic->promisc, enic->allmulti);
@@ -1785,10 +1803,26 @@ static int enic_dev_init(struct enic *enic)
 		}
 	}
 
+	if (enic_fm_init(enic))
+		dev_warning(enic, "Init of flowman failed.\n");
 	return 0;
 
 }
 
+static void lock_devcmd(void *priv)
+{
+	struct enic *enic = priv;
+
+	rte_spinlock_lock(&enic->devcmd_lock);
+}
+
+static void unlock_devcmd(void *priv)
+{
+	struct enic *enic = priv;
+
+	rte_spinlock_unlock(&enic->devcmd_lock);
+}
+
 int enic_probe(struct enic *enic)
 {
 	struct rte_pci_device *pdev = enic->pdev;
@@ -1864,6 +1898,11 @@ int enic_probe(struct enic *enic)
 		goto err_out_dev_close;
 	}
 
+	/* Use a PF spinlock to serialize devcmd from PF and VF representors */
+	if (enic->switchdev_mode) {
+		rte_spinlock_init(&enic->devcmd_lock);
+		vnic_register_lock(enic->vdev, lock_devcmd, unlock_devcmd);
+	}
 	return 0;
 
 err_out_dev_close:
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
new file mode 100644
index 000000000..bc2d8868e
--- /dev/null
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -0,0 +1,425 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2008-2019 Cisco Systems, Inc.  All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_flow_driver.h>
+#include <rte_kvargs.h>
+#include <rte_pci.h>
+#include <rte_string_fns.h>
+
+#include "enic_compat.h"
+#include "enic.h"
+#include "vnic_dev.h"
+#include "vnic_enet.h"
+#include "vnic_intr.h"
+#include "vnic_cq.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+
+static uint16_t enic_vf_recv_pkts(void *rx_queue __rte_unused,
+				  struct rte_mbuf **rx_pkts __rte_unused,
+				  uint16_t nb_pkts __rte_unused)
+{
+	return 0;
+}
+
+static uint16_t enic_vf_xmit_pkts(void *tx_queue __rte_unused,
+				  struct rte_mbuf **tx_pkts __rte_unused,
+				  uint16_t nb_pkts __rte_unused)
+{
+	return 0;
+}
+
+static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev __rte_unused,
+	uint16_t queue_idx __rte_unused,
+	uint16_t nb_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	return 0;
+}
+
+static void enic_vf_dev_tx_queue_release(void *txq __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+}
+
+static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev __rte_unused,
+	uint16_t queue_idx __rte_unused,
+	uint16_t nb_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf __rte_unused,
+	struct rte_mempool *mp __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	return 0;
+}
+
+static void enic_vf_dev_rx_queue_release(void *rxq __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+}
+
+static int enic_vf_dev_configure(struct rte_eth_dev *eth_dev __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	return 0;
+}
+
+static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+	int ret;
+
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+
+	vf = eth_dev->data->dev_private;
+	/* Remove all packet filters so no ingress packets go to VF.
+	 * When PF enables switchdev, it will ensure packet filters
+	 * are removed.  So, this is not technically needed.
+	 */
+	ENICPMD_LOG(DEBUG, "Clear packet filters");
+	ret = vnic_dev_packet_filter(vf->enic.vdev, 0, 0, 0, 0, 0);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot clear packet filters");
+		return ret;
+	}
+	return 0;
+}
+
+static void enic_vf_dev_stop(struct rte_eth_dev *eth_dev __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+}
+
+/*
+ * "close" is no-op for now and solely exists so that rte_eth_dev_close()
+ * can finish its own cleanup without errors.
+ */
+static void enic_vf_dev_close(struct rte_eth_dev *eth_dev __rte_unused)
+{
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+}
+
+static int enic_vf_link_update(struct rte_eth_dev *eth_dev,
+	int wait_to_complete __rte_unused)
+{
+	struct enic_vf_representor *vf;
+	struct rte_eth_link link;
+	struct enic *pf;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
+	/*
+	 * Link status and speed are same as PF. Update PF status and then
+	 * copy it to VF.
+	 */
+	enic_link_update(pf->rte_dev);
+	rte_eth_linkstatus_get(pf->rte_dev, &link);
+	rte_eth_linkstatus_set(eth_dev, &link);
+	return 0;
+}
+
+static int enic_vf_stats_get(struct rte_eth_dev *eth_dev,
+	struct rte_eth_stats *stats)
+{
+	struct enic_vf_representor *vf;
+	struct vnic_stats *vs;
+	int err;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	/* Get VF stats via PF */
+	err = vnic_dev_stats_dump(vf->enic.vdev, &vs);
+	if (err) {
+		ENICPMD_LOG(ERR, "error in getting stats\n");
+		return err;
+	}
+	stats->ipackets = vs->rx.rx_frames_ok;
+	stats->opackets = vs->tx.tx_frames_ok;
+	stats->ibytes = vs->rx.rx_bytes_ok;
+	stats->obytes = vs->tx.tx_bytes_ok;
+	stats->ierrors = vs->rx.rx_errors + vs->rx.rx_drop;
+	stats->oerrors = vs->tx.tx_errors;
+	stats->imissed = vs->rx.rx_no_bufs;
+	return 0;
+}
+
+static int enic_vf_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+	int err;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	/* Ask PF to clear VF stats */
+	err = vnic_dev_stats_clear(vf->enic.vdev);
+	if (err)
+		ENICPMD_LOG(ERR, "error in clearing stats\n");
+	return err;
+}
+
+static int enic_vf_dev_infos_get(struct rte_eth_dev *eth_dev,
+	struct rte_eth_dev_info *device_info)
+{
+	struct enic_vf_representor *vf;
+	struct enic *pf;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
+	device_info->max_rx_queues = eth_dev->data->nb_rx_queues;
+	device_info->max_tx_queues = eth_dev->data->nb_tx_queues;
+	device_info->min_rx_bufsize = ENIC_MIN_MTU;
+	/* Max packet size is same as PF */
+	device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(pf->max_mtu);
+	device_info->max_mac_addrs = ENIC_UNICAST_PERFECT_FILTERS;
+	/* No offload capa, RSS, etc. until Tx/Rx handlers are added */
+	device_info->rx_offload_capa = 0;
+	device_info->tx_offload_capa = 0;
+	device_info->switch_info.name = pf->rte_dev->device->name;
+	device_info->switch_info.domain_id = vf->switch_domain_id;
+	device_info->switch_info.port_id = vf->vf_id;
+	return 0;
+}
+
+static void set_vf_packet_filter(struct enic_vf_representor *vf)
+{
+	/* switchdev: packet filters are ignored */
+	if (vf->enic.switchdev_mode)
+		return;
+	/* Ask PF to apply filters on VF */
+	vnic_dev_packet_filter(vf->enic.vdev, 1 /* unicast */, 1 /* mcast */,
+		1 /* bcast */, vf->promisc, vf->allmulti);
+}
+
+static int enic_vf_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	vf = eth_dev->data->dev_private;
+	vf->promisc = 1;
+	set_vf_packet_filter(vf);
+	return 0;
+}
+
+static int enic_vf_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	vf = eth_dev->data->dev_private;
+	vf->promisc = 0;
+	set_vf_packet_filter(vf);
+	return 0;
+}
+
+static int enic_vf_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	vf = eth_dev->data->dev_private;
+	vf->allmulti = 1;
+	set_vf_packet_filter(vf);
+	return 0;
+}
+
+static int enic_vf_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+
+	ENICPMD_FUNC_TRACE();
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -E_RTE_SECONDARY;
+	vf = eth_dev->data->dev_private;
+	vf->allmulti = 0;
+	set_vf_packet_filter(vf);
+	return 0;
+}
+
+/*
+ * A minimal set of handlers.
+ * The representor can get/set a small set of VF settings via "proxy" devcmd.
+ * With proxy devcmd, the PF driver basically tells the VIC firmware to
+ * "perform this devcmd on that VF".
+ */
+static const struct eth_dev_ops enic_vf_representor_dev_ops = {
+	.allmulticast_enable  = enic_vf_allmulticast_enable,
+	.allmulticast_disable = enic_vf_allmulticast_disable,
+	.dev_configure        = enic_vf_dev_configure,
+	.dev_infos_get        = enic_vf_dev_infos_get,
+	.dev_start            = enic_vf_dev_start,
+	.dev_stop             = enic_vf_dev_stop,
+	.dev_close            = enic_vf_dev_close,
+	.link_update          = enic_vf_link_update,
+	.promiscuous_enable   = enic_vf_promiscuous_enable,
+	.promiscuous_disable  = enic_vf_promiscuous_disable,
+	.stats_get            = enic_vf_stats_get,
+	.stats_reset          = enic_vf_stats_reset,
+	.rx_queue_setup	      = enic_vf_dev_rx_queue_setup,
+	.rx_queue_release     = enic_vf_dev_rx_queue_release,
+	.tx_queue_setup	      = enic_vf_dev_tx_queue_setup,
+	.tx_queue_release     = enic_vf_dev_tx_queue_release,
+};
+
+static int get_vf_config(struct enic_vf_representor *vf)
+{
+	struct vnic_enet_config *c;
+	struct enic *pf;
+	int switch_mtu;
+	int err;
+
+	c = &vf->config;
+	pf = vf->pf;
+	/* VF MAC */
+	err = vnic_dev_get_mac_addr(vf->enic.vdev, vf->mac_addr.addr_bytes);
+	if (err) {
+		ENICPMD_LOG(ERR, "error in getting MAC address\n");
+		return err;
+	}
+	rte_ether_addr_copy(&vf->mac_addr, vf->eth_dev->data->mac_addrs);
+
+	/* VF MTU per its vNIC setting */
+	err = vnic_dev_spec(vf->enic.vdev,
+			    offsetof(struct vnic_enet_config, mtu),
+			    sizeof(c->mtu), &c->mtu);
+	if (err) {
+		ENICPMD_LOG(ERR, "error in getting MTU\n");
+		return err;
+	}
+	/*
+	 * Blade switch (fabric interconnect) port's MTU. Assume the kernel
+	 * enic driver runs on VF. That driver automatically adjusts its MTU
+	 * according to the switch MTU.
+	 */
+	switch_mtu = vnic_dev_mtu(pf->vdev);
+	vf->eth_dev->data->mtu = c->mtu;
+	if (switch_mtu > c->mtu)
+		vf->eth_dev->data->mtu = RTE_MIN(ENIC_MAX_MTU, switch_mtu);
+	return 0;
+}
+
+int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
+{
+	struct enic_vf_representor *vf, *params;
+	struct rte_pci_device *pdev;
+	struct enic *pf, *vf_enic;
+	struct rte_pci_addr *addr;
+	int ret;
+
+	ENICPMD_FUNC_TRACE();
+	params = init_params;
+	vf = eth_dev->data->dev_private;
+	vf->switch_domain_id = params->switch_domain_id;
+	vf->vf_id = params->vf_id;
+	vf->eth_dev = eth_dev;
+	vf->pf = params->pf;
+	vf->allmulti = 1;
+	vf->promisc = 0;
+	pf = vf->pf;
+	vf->enic.switchdev_mode = pf->switchdev_mode;
+	/* Only switchdev is supported now */
+	RTE_ASSERT(vf->enic.switchdev_mode);
+
+	/* Check for non-existent VFs */
+	pdev = RTE_ETH_DEV_TO_PCI(pf->rte_dev);
+	if (vf->vf_id >= pdev->max_vfs) {
+		ENICPMD_LOG(ERR, "VF ID is invalid. vf_id %u max_vfs %u",
+			    vf->vf_id, pdev->max_vfs);
+		return -ENODEV;
+	}
+
+	eth_dev->device->driver = pf->rte_dev->device->driver;
+	eth_dev->dev_ops = &enic_vf_representor_dev_ops;
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR
+		| RTE_ETH_DEV_CLOSE_REMOVE;
+	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
+		sizeof(struct rte_ether_addr) *
+		ENIC_UNICAST_PERFECT_FILTERS, 0);
+	if (eth_dev->data->mac_addrs == NULL)
+		return -ENOMEM;
+	/* Use 1 RX queue and 1 TX queue for representor path */
+	eth_dev->data->nb_rx_queues = 1;
+	eth_dev->data->nb_tx_queues = 1;
+	eth_dev->rx_pkt_burst = &enic_vf_recv_pkts;
+	eth_dev->tx_pkt_burst = &enic_vf_xmit_pkts;
+	/* Initial link state copied from PF */
+	eth_dev->data->dev_link = pf->rte_dev->data->dev_link;
+	/* Representor vdev to perform devcmd */
+	vf->enic.vdev = vnic_vf_rep_register(&vf->enic, pf->vdev, vf->vf_id);
+	if (vf->enic.vdev == NULL)
+		return -ENOMEM;
+	ret = vnic_dev_alloc_stats_mem(vf->enic.vdev);
+	if (ret)
+		return ret;
+	/* Get/copy VF vNIC MAC, MTU, etc. into eth_dev */
+	ret = get_vf_config(vf);
+	if (ret)
+		return ret;
+
+	/*
+	 * Calculate VF BDF. The firmware ensures that PF BDF is always
+	 * bus:dev.0, and VF BDFs are dev.1, dev.2, and so on.
+	 */
+	vf->bdf = pdev->addr;
+	vf->bdf.function += vf->vf_id + 1;
+
+	/* Copy a few fields used by enic_fm_flow */
+	vf_enic = &vf->enic;
+	vf_enic->switch_domain_id = vf->switch_domain_id;
+	vf_enic->flow_filter_mode = pf->flow_filter_mode;
+	vf_enic->rte_dev = eth_dev;
+	vf_enic->dev_data = eth_dev->data;
+	LIST_INIT(&vf_enic->flows);
+	LIST_INIT(&vf_enic->memzone_list);
+	rte_spinlock_init(&vf_enic->memzone_list_lock);
+	addr = &vf->bdf;
+	snprintf(vf_enic->bdf_name, ENICPMD_BDF_LENGTH, "%04x:%02x:%02x.%x",
+		 addr->domain, addr->bus, addr->devid, addr->function);
+	return 0;
+}
+
+int enic_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct enic_vf_representor *vf;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	vnic_dev_unregister(vf->enic.vdev);
+	return 0;
+}
diff --git a/drivers/net/enic/meson.build b/drivers/net/enic/meson.build
index 1bd7cc7e1..7f4836d0f 100644
--- a/drivers/net/enic/meson.build
+++ b/drivers/net/enic/meson.build
@@ -14,6 +14,7 @@ sources = files(
 	'enic_main.c',
 	'enic_res.c',
 	'enic_rxtx.c',
+	'enic_vf_representor.c',
 	)
 deps += ['hash']
 includes += include_directories('base')
-- 
2.26.2



* [dpdk-dev] [PATCH 3/5] net/enic: add single-queue Tx and Rx to VF representor
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 1/5] net/enic: extend vnic dev API for VF representors Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 2/5] net/enic: add minimal VF representor Hyong Youb Kim
@ 2020-09-09 13:56 ` Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 4/5] net/enic: extend flow handler to support VF representors Hyong Youb Kim
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim, John Daley

A VF representor allocates queues from the PF's pool of queues and
uses them for its Tx and Rx. It supports one Tx queue and one Rx
queue.

Implicit packet forwarding between the representor queues and the VF
does not yet exist. It is enabled in subsequent commits using the
flowman API.
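
To illustrate the queue index scheme added to enic.h (the counts here
are made up for the example): with conf_wq_count = 8, conf_rq_count =
16, and two representors (max_vf_id = 1), the helpers below yield

	vf_id 0: wq = 8 - 0 - 1 = 7    rq_sop = 16 - 0 - 1 = 15
	         rq_data = wq_cq = 16 - (1 + 0 + 2) = 13
	vf_id 1: wq = 8 - 1 - 1 = 6    rq_sop = 16 - 1 - 1 = 14
	         rq_data = wq_cq = 16 - (1 + 1 + 2) = 12

That is, representor queues are carved from the tails of the PF's
WQ/RQ/CQ arrays, leaving the heads for the PF's own queues. Note that
the WQ-CQ index is intentionally computed from conf_rq_count and
coincides with the data RQ index, as the code comments explain.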

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
 drivers/net/enic/enic.h                |  74 ++++++++++
 drivers/net/enic/enic_main.c           |  73 ++++++++--
 drivers/net/enic/enic_vf_representor.c | 188 ++++++++++++++++++++++---
 3 files changed, 299 insertions(+), 36 deletions(-)

diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 929ea90a9..d51781d8c 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -214,6 +214,10 @@ struct enic {
 	uint8_t switchdev_mode;
 	uint16_t switch_domain_id;
 	uint16_t max_vf_id;
+	/* Number of queues needed for VF representor paths */
+	uint32_t vf_required_wq;
+	uint32_t vf_required_cq;
+	uint32_t vf_required_rq;
 	/*
 	 * Lock to serialize devcmds from PF, VF representors as they all share
 	 * the same PF devcmd instance in firmware.
@@ -232,6 +236,11 @@ struct enic_vf_representor {
 	uint16_t vf_id;
 	int allmulti;
 	int promisc;
+	/* Representor path uses PF queues. These are reserved during init */
+	uint16_t pf_wq_idx;      /* WQ dedicated to VF rep */
+	uint16_t pf_wq_cq_idx;   /* CQ for WQ */
+	uint16_t pf_rq_sop_idx;  /* SOP RQ dedicated to VF rep */
+	uint16_t pf_rq_data_idx; /* Data RQ */
 };
 
 #define VF_ENIC_TO_VF_REP(vf_enic) \
@@ -293,6 +302,67 @@ static inline unsigned int enic_cq_wq(struct enic *enic, unsigned int wq)
 	return enic->rq_count + wq;
 }
 
+/*
+ * WQ, RQ, CQ allocation scheme. Firmware gives the driver an array of
+ * WQs, an array of RQs, and an array of CQs. For now, these are
+ * statically allocated between PF app send/receive queues and VF
+ * representor app send/receive queues. VF representor supports only 1
+ * send and 1 receive queue. The number of PF app queues is not known
+ * until the queue setup time.
+ *
+ * R = number of receive queues for PF app
+ * S = number of send queues for PF app
+ * V = number of VF representors
+ *
+ * wI = WQ for PF app send queue I
+ * rI = SOP RQ for PF app receive queue I
+ * dI = Data RQ for rI
+ * cwI = CQ for wI
+ * crI = CQ for rI
+ * vwI = WQ for VF representor send queue I
+ * vrI = SOP RQ for VF representor receive queue I
+ * vdI = Data RQ for vrI
+ * vcwI = CQ for vwI
+ * vcrI = CQ for vrI
+ *
+ * WQ array: | w0 |..| wS-1 |..| vwV-1 |..| vw0 |
+ *             ^         ^         ^         ^
+ *    index    0        S-1       W-V       W-1    W=len(WQ array)
+ *
+ * RQ array: | r0  |..| rR-1  |d0 |..|dR-1|  ..|vdV-1 |..| vd0 |vrV-1 |..|vr0 |
+ *             ^         ^     ^       ^         ^          ^     ^        ^
+ *    index    0        R-1    R      2R-1      X-2V    X-(V+1)  X-V      X-1
+ * X=len(RQ array)
+ *
+ * CQ array: | cr0 |..| crR-1 |cw0|..|cwS-1|..|vcwV-1|..| vcw0|vcrV-1|..|vcr0|..
+ *              ^         ^     ^       ^        ^         ^      ^        ^
+ *    index     0        R-1    R     R+S-1     X-2V    X-(V+1)  X-V      X-1
+ * X is not a typo. It really is len(RQ array) to accommodate enic_cq_rq() used
+ * throughout RX handlers. The current scheme requires
+ * len(CQ array) >= len(RQ array).
+ */
+
+static inline unsigned int vf_wq_cq_idx(struct enic_vf_representor *vf)
+{
+	/* rq is not a typo. index(vcwI) coincides with index(vdI) */
+	return vf->pf->conf_rq_count - (vf->pf->max_vf_id + vf->vf_id + 2);
+}
+
+static inline unsigned int vf_wq_idx(struct enic_vf_representor *vf)
+{
+	return vf->pf->conf_wq_count - vf->vf_id - 1;
+}
+
+static inline unsigned int vf_rq_sop_idx(struct enic_vf_representor *vf)
+{
+	return vf->pf->conf_rq_count - vf->vf_id - 1;
+}
+
+static inline unsigned int vf_rq_data_idx(struct enic_vf_representor *vf)
+{
+	return vf->pf->conf_rq_count - (vf->pf->max_vf_id + vf->vf_id + 2);
+}
+
 static inline struct enic *pmd_priv(struct rte_eth_dev *eth_dev)
 {
 	return eth_dev->data->dev_private;
@@ -397,6 +467,10 @@ void enic_fdir_info_get(struct enic *enic, struct rte_eth_fdir_info *stats);
 int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params);
 int enic_vf_representor_uninit(struct rte_eth_dev *ethdev);
 int enic_fm_allocate_switch_domain(struct enic *pf);
+int enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq);
+void enic_rxmbuf_queue_release(struct enic *enic, struct vnic_rq *rq);
+void enic_free_wq_buf(struct rte_mbuf **buf);
+void enic_free_rq_buf(struct rte_mbuf **mbuf);
 extern const struct rte_flow_ops enic_flow_ops;
 extern const struct rte_flow_ops enic_fm_flow_ops;
 
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 9865642b2..19e920e41 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -50,7 +50,7 @@ static int is_eth_addr_valid(uint8_t *addr)
 	return !is_mcast_addr(addr) && !is_zero_addr(addr);
 }
 
-static void
+void
 enic_rxmbuf_queue_release(__rte_unused struct enic *enic, struct vnic_rq *rq)
 {
 	uint16_t i;
@@ -68,7 +68,7 @@ enic_rxmbuf_queue_release(__rte_unused struct enic *enic, struct vnic_rq *rq)
 	}
 }
 
-static void enic_free_wq_buf(struct rte_mbuf **buf)
+void enic_free_wq_buf(struct rte_mbuf **buf)
 {
 	struct rte_mbuf *mbuf = *buf;
 
@@ -191,8 +191,7 @@ int enic_set_mac_address(struct enic *enic, uint8_t *mac_addr)
 	return err;
 }
 
-static void
-enic_free_rq_buf(struct rte_mbuf **mbuf)
+void enic_free_rq_buf(struct rte_mbuf **mbuf)
 {
 	if (*mbuf == NULL)
 		return;
@@ -275,7 +274,7 @@ void enic_init_vnic_resources(struct enic *enic)
 }
 
 
-static int
+int
 enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
 {
 	struct rte_mbuf *mb;
@@ -806,16 +805,36 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	unsigned int socket_id, struct rte_mempool *mp,
 	uint16_t nb_desc, uint16_t free_thresh)
 {
+	struct enic_vf_representor *vf;
 	int rc;
-	uint16_t sop_queue_idx = enic_rte_rq_idx_to_sop_idx(queue_idx);
-	uint16_t data_queue_idx = enic_rte_rq_idx_to_data_idx(queue_idx, enic);
-	struct vnic_rq *rq_sop = &enic->rq[sop_queue_idx];
-	struct vnic_rq *rq_data = &enic->rq[data_queue_idx];
+	uint16_t sop_queue_idx;
+	uint16_t data_queue_idx;
+	uint16_t cq_idx;
+	struct vnic_rq *rq_sop;
+	struct vnic_rq *rq_data;
 	unsigned int mbuf_size, mbufs_per_pkt;
 	unsigned int nb_sop_desc, nb_data_desc;
 	uint16_t min_sop, max_sop, min_data, max_data;
 	uint32_t max_rx_pkt_len;
 
+	/*
+	 * Representor uses a reserved PF queue. Translate representor
+	 * queue number to PF queue number.
+	 */
+	if (enic_is_vf_rep(enic)) {
+		RTE_ASSERT(queue_idx == 0);
+		vf = VF_ENIC_TO_VF_REP(enic);
+		sop_queue_idx = vf->pf_rq_sop_idx;
+		data_queue_idx = vf->pf_rq_data_idx;
+		enic = vf->pf;
+		queue_idx = sop_queue_idx;
+	} else {
+		sop_queue_idx = enic_rte_rq_idx_to_sop_idx(queue_idx);
+		data_queue_idx = enic_rte_rq_idx_to_data_idx(queue_idx, enic);
+	}
+	cq_idx = enic_cq_rq(enic, sop_queue_idx);
+	rq_sop = &enic->rq[sop_queue_idx];
+	rq_data = &enic->rq[data_queue_idx];
 	rq_sop->is_sop = 1;
 	rq_sop->data_queue_idx = data_queue_idx;
 	rq_data->is_sop = 0;
@@ -935,7 +954,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 		}
 		nb_data_desc = rq_data->ring.desc_count;
 	}
-	rc = vnic_cq_alloc(enic->vdev, &enic->cq[queue_idx], queue_idx,
+	rc = vnic_cq_alloc(enic->vdev, &enic->cq[cq_idx], cq_idx,
 			   socket_id, nb_sop_desc + nb_data_desc,
 			   sizeof(struct cq_enet_rq_desc));
 	if (rc) {
@@ -979,7 +998,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	rte_free(rq_sop->mbuf_ring);
 err_free_cq:
 	/* cleanup on error */
-	vnic_cq_free(&enic->cq[queue_idx]);
+	vnic_cq_free(&enic->cq[cq_idx]);
 err_free_rq_data:
 	if (rq_data->in_use)
 		vnic_rq_free(rq_data);
@@ -1007,12 +1026,27 @@ void enic_free_wq(void *txq)
 int enic_alloc_wq(struct enic *enic, uint16_t queue_idx,
 	unsigned int socket_id, uint16_t nb_desc)
 {
+	struct enic_vf_representor *vf;
 	int err;
-	struct vnic_wq *wq = &enic->wq[queue_idx];
-	unsigned int cq_index = enic_cq_wq(enic, queue_idx);
+	struct vnic_wq *wq;
+	unsigned int cq_index;
 	char name[RTE_MEMZONE_NAMESIZE];
 	static int instance;
 
+	/*
+	 * Representor uses a reserved PF queue. Translate representor
+	 * queue number to PF queue number.
+	 */
+	if (enic_is_vf_rep(enic)) {
+		RTE_ASSERT(queue_idx == 0);
+		vf = VF_ENIC_TO_VF_REP(enic);
+		queue_idx = vf->pf_wq_idx;
+		cq_index = vf->pf_wq_cq_idx;
+		enic = vf->pf;
+	} else {
+		cq_index = enic_cq_wq(enic, queue_idx);
+	}
+	wq = &enic->wq[queue_idx];
 	wq->socket_id = socket_id;
 	/*
 	 * rte_eth_tx_queue_setup() checks min, max, and alignment. So just
@@ -1448,6 +1482,17 @@ int enic_set_vnic_res(struct enic *enic)
 	if (eth_dev->data->dev_conf.intr_conf.rxq) {
 		required_intr += eth_dev->data->nb_rx_queues;
 	}
+	ENICPMD_LOG(DEBUG, "Required queues for PF: rq %u wq %u cq %u",
+		    required_rq, required_wq, required_cq);
+	if (enic->vf_required_rq) {
+		/* Queues needed for VF representors */
+		required_rq += enic->vf_required_rq;
+		required_wq += enic->vf_required_wq;
+		required_cq += enic->vf_required_cq;
+		ENICPMD_LOG(DEBUG, "Required queues for VF representors: rq %u wq %u cq %u",
+			    enic->vf_required_rq, enic->vf_required_wq,
+			    enic->vf_required_cq);
+	}
 
 	if (enic->conf_rq_count < required_rq) {
 		dev_err(dev, "Not enough Receive queues. Requested:%u which uses %d RQs on VIC, Configured:%u\n",
@@ -1493,7 +1538,7 @@ enic_reinit_rq(struct enic *enic, unsigned int rq_idx)
 
 	sop_rq = &enic->rq[enic_rte_rq_idx_to_sop_idx(rq_idx)];
 	data_rq = &enic->rq[enic_rte_rq_idx_to_data_idx(rq_idx, enic)];
-	cq_idx = rq_idx;
+	cq_idx = enic_cq_rq(enic, rq_idx);
 
 	vnic_cq_clean(&enic->cq[cq_idx]);
 	vnic_cq_init(&enic->cq[cq_idx],
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index bc2d8868e..cb41bb140 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -24,57 +24,96 @@
 #include "vnic_wq.h"
 #include "vnic_rq.h"
 
-static uint16_t enic_vf_recv_pkts(void *rx_queue __rte_unused,
-				  struct rte_mbuf **rx_pkts __rte_unused,
-				  uint16_t nb_pkts __rte_unused)
+static uint16_t enic_vf_recv_pkts(void *rx_queue,
+				  struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts)
 {
-	return 0;
+	return enic_recv_pkts(rx_queue, rx_pkts, nb_pkts);
 }
 
-static uint16_t enic_vf_xmit_pkts(void *tx_queue __rte_unused,
-				  struct rte_mbuf **tx_pkts __rte_unused,
-				  uint16_t nb_pkts __rte_unused)
+static uint16_t enic_vf_xmit_pkts(void *tx_queue,
+				  struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts)
 {
-	return 0;
+	return enic_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
 }
 
-static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev __rte_unused,
-	uint16_t queue_idx __rte_unused,
-	uint16_t nb_desc __rte_unused,
-	unsigned int socket_id __rte_unused,
-	const struct rte_eth_txconf *tx_conf __rte_unused)
+static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t queue_idx,
+	uint16_t nb_desc,
+	unsigned int socket_id,
+	const struct rte_eth_txconf *tx_conf)
 {
+	struct enic_vf_representor *vf;
+	struct vnic_wq *wq;
+	struct enic *pf;
+	int err;
+
 	ENICPMD_FUNC_TRACE();
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -E_RTE_SECONDARY;
+	/* Only one queue now */
+	if (queue_idx != 0)
+		return -EINVAL;
+	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
+	wq = &pf->wq[vf->pf_wq_idx];
+	wq->offloads = tx_conf->offloads |
+		eth_dev->data->dev_conf.txmode.offloads;
+	eth_dev->data->tx_queues[0] = (void *)wq;
+	/* Pass vf not pf because of cq index calculation. See enic_alloc_wq */
+	err = enic_alloc_wq(&vf->enic, queue_idx, socket_id, nb_desc);
+	if (err) {
+		ENICPMD_LOG(ERR, "error in allocating wq\n");
+		return err;
+	}
 	return 0;
 }
 
-static void enic_vf_dev_tx_queue_release(void *txq __rte_unused)
+static void enic_vf_dev_tx_queue_release(void *txq)
 {
 	ENICPMD_FUNC_TRACE();
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return;
+	enic_free_wq(txq);
 }
 
-static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev __rte_unused,
-	uint16_t queue_idx __rte_unused,
-	uint16_t nb_desc __rte_unused,
-	unsigned int socket_id __rte_unused,
-	const struct rte_eth_rxconf *rx_conf __rte_unused,
-	struct rte_mempool *mp __rte_unused)
+static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t queue_idx,
+	uint16_t nb_desc,
+	unsigned int socket_id,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mp)
 {
+	struct enic_vf_representor *vf;
+	struct enic *pf;
+	int ret;
+
 	ENICPMD_FUNC_TRACE();
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -E_RTE_SECONDARY;
+	/* Only 1 queue now */
+	if (queue_idx != 0)
+		return -EINVAL;
+	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
+	eth_dev->data->rx_queues[queue_idx] =
+		(void *)&pf->rq[vf->pf_rq_sop_idx];
+	ret = enic_alloc_rq(&vf->enic, queue_idx, socket_id, mp, nb_desc,
+			    rx_conf->rx_free_thresh);
+	if (ret) {
+		ENICPMD_LOG(ERR, "error in allocating rq\n");
+		return ret;
+	}
 	return 0;
 }
 
-static void enic_vf_dev_rx_queue_release(void *rxq __rte_unused)
+static void enic_vf_dev_rx_queue_release(void *rxq)
 {
 	ENICPMD_FUNC_TRACE();
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return;
+	enic_free_rq(rxq);
 }
 
 static int enic_vf_dev_configure(struct rte_eth_dev *eth_dev __rte_unused)
@@ -88,6 +127,9 @@ static int enic_vf_dev_configure(struct rte_eth_dev *eth_dev __rte_unused)
 static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 {
 	struct enic_vf_representor *vf;
+	struct vnic_rq *data_rq;
+	int index, cq_idx;
+	struct enic *pf;
 	int ret;
 
 	ENICPMD_FUNC_TRACE();
@@ -95,6 +137,7 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 		return -E_RTE_SECONDARY;
 
 	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
 	/* Remove all packet filters so no ingress packets go to VF.
 	 * When PF enables switchdev, it will ensure packet filters
 	 * are removed.  So, this is not technically needed.
@@ -105,14 +148,90 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 		ENICPMD_LOG(ERR, "Cannot clear packet filters");
 		return ret;
 	}
+
+	/* Start WQ: see enic_init_vnic_resources */
+	index = vf->pf_wq_idx;
+	cq_idx = vf->pf_wq_cq_idx;
+	vnic_wq_init(&pf->wq[index], cq_idx, 1, 0);
+	vnic_cq_init(&pf->cq[cq_idx],
+		     0 /* flow_control_enable */,
+		     1 /* color_enable */,
+		     0 /* cq_head */,
+		     0 /* cq_tail */,
+		     1 /* cq_tail_color */,
+		     0 /* interrupt_enable */,
+		     0 /* cq_entry_enable */,
+		     1 /* cq_message_enable */,
+		     0 /* interrupt offset */,
+		     (uint64_t)pf->wq[index].cqmsg_rz->iova);
+	/* enic_start_wq */
+	vnic_wq_enable(&pf->wq[index]);
+	eth_dev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	/* Start RQ: see enic_init_vnic_resources */
+	index = vf->pf_rq_sop_idx;
+	cq_idx = enic_cq_rq(vf->pf, index);
+	vnic_rq_init(&pf->rq[index], cq_idx, 1, 0);
+	data_rq = &pf->rq[vf->pf_rq_data_idx];
+	if (data_rq->in_use)
+		vnic_rq_init(data_rq, cq_idx, 1, 0);
+	vnic_cq_init(&pf->cq[cq_idx],
+		     0 /* flow_control_enable */,
+		     1 /* color_enable */,
+		     0 /* cq_head */,
+		     0 /* cq_tail */,
+		     1 /* cq_tail_color */,
+		     0,
+		     1 /* cq_entry_enable */,
+		     0 /* cq_message_enable */,
+		     0,
+		     0 /* cq_message_addr */);
+	/* enic_enable */
+	ret = enic_alloc_rx_queue_mbufs(pf, &pf->rq[index]);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs\n");
+		return ret;
+	}
+	ret = enic_alloc_rx_queue_mbufs(pf, data_rq);
+	if (ret) {
+		/* Release the allocated mbufs for the sop rq */
+		enic_rxmbuf_queue_release(pf, &pf->rq[index]);
+		ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs\n");
+		return ret;
+	}
+	enic_start_rq(pf, vf->pf_rq_sop_idx);
+	eth_dev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STARTED;
+	eth_dev->data->rx_queue_state[0] = RTE_ETH_QUEUE_STATE_STARTED;
 	return 0;
 }
 
-static void enic_vf_dev_stop(struct rte_eth_dev *eth_dev __rte_unused)
+static void enic_vf_dev_stop(struct rte_eth_dev *eth_dev)
 {
+	struct enic_vf_representor *vf;
+	struct vnic_rq *rq;
+	struct enic *pf;
+
 	ENICPMD_FUNC_TRACE();
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return;
+	/* Undo dev_start. Disable/clean WQ */
+	vf = eth_dev->data->dev_private;
+	pf = vf->pf;
+	vnic_wq_disable(&pf->wq[vf->pf_wq_idx]);
+	vnic_wq_clean(&pf->wq[vf->pf_wq_idx], enic_free_wq_buf);
+	vnic_cq_clean(&pf->cq[vf->pf_wq_cq_idx]);
+	/* Disable/clean RQ */
+	rq = &pf->rq[vf->pf_rq_sop_idx];
+	vnic_rq_disable(rq);
+	vnic_rq_clean(rq, enic_free_rq_buf);
+	rq = &pf->rq[vf->pf_rq_data_idx];
+	if (rq->in_use) {
+		vnic_rq_disable(rq);
+		vnic_rq_clean(rq, enic_free_rq_buf);
+	}
+	vnic_cq_clean(&pf->cq[enic_cq_rq(vf->pf, vf->pf_rq_sop_idx)]);
+	eth_dev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
+	eth_dev->data->rx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
 }
 
 /*
@@ -354,6 +473,31 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	vf->enic.switchdev_mode = pf->switchdev_mode;
 	/* Only switchdev is supported now */
 	RTE_ASSERT(vf->enic.switchdev_mode);
+	/* Allocate WQ, RQ, CQ for the representor */
+	vf->pf_wq_idx = vf_wq_idx(vf);
+	vf->pf_wq_cq_idx = vf_wq_cq_idx(vf);
+	vf->pf_rq_sop_idx = vf_rq_sop_idx(vf);
+	vf->pf_rq_data_idx = vf_rq_data_idx(vf);
+	/* Remove these assertions once queue allocation has an easy-to-use
+	 * allocator API instead of index number calculations used throughout
+	 * the driver.
+	 */
+	RTE_ASSERT(enic_cq_rq(pf, vf->pf_rq_sop_idx) == vf->pf_rq_sop_idx);
+	RTE_ASSERT(enic_rte_rq_idx_to_sop_idx(vf->pf_rq_sop_idx) ==
+		   vf->pf_rq_sop_idx);
+	/* RX handlers use enic_cq_rq(sop) to get CQ, so do not save it */
+	pf->vf_required_wq++;
+	pf->vf_required_rq += 2; /* sop and data */
+	pf->vf_required_cq += 2; /* 1 for rq sop and 1 for wq */
+	ENICPMD_LOG(DEBUG, "vf_id %u wq %u rq_sop %u rq_data %u wq_cq %u rq_cq %u",
+		vf->vf_id, vf->pf_wq_idx, vf->pf_rq_sop_idx, vf->pf_rq_data_idx,
+		vf->pf_wq_cq_idx, enic_cq_rq(pf, vf->pf_rq_sop_idx));
+	if (enic_cq_rq(pf, vf->pf_rq_sop_idx) >= pf->conf_cq_count) {
+		ENICPMD_LOG(ERR, "Insufficient CQs. Please ensure number of CQs (%u)"
+			    " >= number of RQs (%u) in CIMC or UCSM",
+			    pf->conf_cq_count, pf->conf_rq_count);
+		return -EINVAL;
+	}
 
 	/* Check for non-existent VFs */
 	pdev = RTE_ETH_DEV_TO_PCI(pf->rte_dev);
-- 
2.26.2



* [dpdk-dev] [PATCH 4/5] net/enic: extend flow handler to support VF representors
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
                   ` (2 preceding siblings ...)
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 3/5] net/enic: add single-queue Tx and Rx to " Hyong Youb Kim
@ 2020-09-09 13:56 ` Hyong Youb Kim
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 5/5] net/enic: enable flow API for VF representor Hyong Youb Kim
  2020-09-21 15:35 ` [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV " Ferruh Yigit
  5 siblings, 0 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim, John Daley

VF representor ports can create flows on VFs through the PF flowman
(Flow Manager) instance in the firmware. These flows match packets
egressing from VFs and apply flowman actions.

1. Make flow handler aware of VF representors
When a representor port invokes flow APIs, use the PF port's flowman
instance to perform flowman devcmd. If the port ID refers to a
representor, use the VF handle instead of the PF handle.

2. Serialize flow API calls
Multiple application threads may invoke flow APIs through PF and VF
representor ports simultaneously. This leads to races, as the ports
all share the same PF flowman instance. Use a lock to serialize API
calls. The lock is used only when representors exist (see the sketch
at the end of this message).

3. Add functions to create flows for implicit representor paths
There is an implicit path between a VF and its representor. The
functions below create flow rules to implement that path:
- enic_fm_add_rep2vf_flow()
- enic_fm_add_vf2rep_flow()

The flows created for representor paths are marked as internal. They
are not visible to the application, and the flush API does not destroy
them. They are deleted automatically when the representor port stops
(enic_fm_destroy).
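
The serialization pattern is sketched below. The begin_fm()/end_fm()
names come from this patch; their bodies are abbreviated and partly
assumed here, so see the diff for the real code:

	/* Called on entry to every flow API handler */
	static struct enic_flowman *begin_fm(struct enic *enic)
	{
		struct enic_flowman *fm;

		/* A representor borrows the owner PF's flowman instance */
		fm = enic_is_vf_rep(enic) ?
			VF_ENIC_TO_VF_REP(enic)->pf->fm : enic->fm;
		if (fm == NULL)
			return NULL;
		/* Lock only when representors share the instance */
		if (fm->owner_enic->switchdev_mode)
			rte_spinlock_lock(&fm->lock);
		fm->user_enic = enic; /* selects PF vs VF vnic handle */
		return fm;
	}

	/* Called on exit from the handler */
	static void end_fm(struct enic_flowman *fm)
	{
		fm->user_enic = NULL;
		if (fm->owner_enic->switchdev_mode)
			rte_spinlock_unlock(&fm->lock);
	}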

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
 drivers/net/enic/enic.h         |   8 +
 drivers/net/enic/enic_fm_flow.c | 432 ++++++++++++++++++++++++++++----
 2 files changed, 396 insertions(+), 44 deletions(-)

diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d51781d8c..9b25e6aa4 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -101,6 +101,7 @@ struct rte_flow {
 	struct filter_v2 enic_filter;
 	/* Data for flow manager based flow (enic_fm_flow.c) */
 	struct enic_fm_flow *fm;
+	int internal;
 };
 
 /* Per-instance private data structure */
@@ -210,6 +211,8 @@ struct enic {
 
 	/* Flow manager API */
 	struct enic_flowman *fm;
+	uint64_t fm_vnic_handle;
+	uint32_t fm_vnic_uif;
 	/* switchdev */
 	uint8_t switchdev_mode;
 	uint16_t switch_domain_id;
@@ -241,6 +244,9 @@ struct enic_vf_representor {
 	uint16_t pf_wq_cq_idx;   /* CQ for WQ */
 	uint16_t pf_rq_sop_idx;  /* SOP RQ dedicated to VF rep */
 	uint16_t pf_rq_data_idx; /* Data RQ */
+	/* Representor flows managed by flowman */
+	struct rte_flow *vf2rep_flow[2];
+	struct rte_flow *rep2vf_flow[2];
 };
 
 #define VF_ENIC_TO_VF_REP(vf_enic) \
@@ -467,6 +473,8 @@ void enic_fdir_info_get(struct enic *enic, struct rte_eth_fdir_info *stats);
 int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params);
 int enic_vf_representor_uninit(struct rte_eth_dev *ethdev);
 int enic_fm_allocate_switch_domain(struct enic *pf);
+int enic_fm_add_rep2vf_flow(struct enic_vf_representor *vf);
+int enic_fm_add_vf2rep_flow(struct enic_vf_representor *vf);
 int enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq);
 void enic_rxmbuf_queue_release(struct enic *enic, struct vnic_rq *rq);
 void enic_free_wq_buf(struct rte_mbuf **buf);
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index 49eaefdec..e299b3247 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -34,6 +34,15 @@
 
 #define FM_INVALID_HANDLE 0
 
+/* Low priority used for implicit VF -> representor flow */
+#define FM_LOWEST_PRIORITY 100000
+
+/* High priority used for implicit representor -> VF flow */
+#define FM_HIGHEST_PRIORITY 0
+
+/* Tag used for implicit VF <-> representor flows */
+#define FM_VF_REP_TAG 1
+
 /*
  * Flow exact match tables (FET) in the VIC and rte_flow groups.
  * Use a simple scheme to map groups to tables.
@@ -110,8 +119,20 @@ union enic_flowman_cmd_mem {
 	struct fm_action fm_action;
 };
 
+/*
+ * PF has a flowman instance, and VF representors share it with PF.
+ * PF allocates this structure and owns it. VF representors borrow
+ * the PF's structure during API calls (e.g. create, query).
+ */
 struct enic_flowman {
-	struct enic *enic;
+	struct enic *owner_enic; /* PF */
+	struct enic *user_enic;  /* API caller (PF or representor) */
+	/*
+	 * Representors and PF share the same underlying flowman.
+	 * Lock API calls to serialize accesses from them. Only used
+	 * when VF representors are present.
+	 */
+	rte_spinlock_t lock;
 	/* Command buffer */
 	struct {
 		union enic_flowman_cmd_mem *va;
@@ -143,9 +164,20 @@ struct enic_flowman {
 	struct fm_action action;
 	struct fm_action action_tmp; /* enic_fm_reorder_action_op */
 	int action_op_count;
+	/* Tags used for representor flows */
+	uint8_t vf_rep_tag;
 };
 
 static int enic_fm_tbl_free(struct enic_flowman *fm, uint64_t handle);
+/*
+ * API functions (create, destroy, validate, flush) call begin_fm()
+ * upon entering to save the caller enic (PF or VF representor) and
+ * lock. Upon exit, they call end_fm() to unlock.
+ */
+static struct enic_flowman *begin_fm(struct enic *enic);
+static void end_fm(struct enic_flowman *fm);
+/* Delete internal flows created for representor paths */
+static void delete_rep_flows(struct enic *enic);
 
 /*
  * Common arguments passed to copy_item functions. Use this structure
@@ -627,6 +659,12 @@ enic_fm_copy_item_raw(struct copy_item_args *arg)
 	return 0;
 }
 
+static int
+flowman_cmd(struct enic_flowman *fm, uint64_t *args, int nargs)
+{
+	return vnic_dev_flowman_cmd(fm->owner_enic->vdev, args, nargs);
+}
+
 static int
 enic_fet_alloc(struct enic_flowman *fm, uint8_t ingress,
 	       struct fm_key_template *key, int entries,
@@ -665,7 +703,7 @@ enic_fet_alloc(struct enic_flowman *fm, uint8_t ingress,
 
 	args[0] = FM_EXACT_TABLE_ALLOC;
 	args[1] = fm->cmd.pa;
-	ret = vnic_dev_flowman_cmd(fm->enic->vdev, args, 2);
+	ret = flowman_cmd(fm, args, 2);
 	if (ret) {
 		ENICPMD_LOG(ERR, "cannot alloc exact match table: rc=%d", ret);
 		free(fet);
@@ -1096,6 +1134,7 @@ enic_fm_copy_action(struct enic_flowman *fm,
 		COUNT = 1 << 3,
 		ENCAP = 1 << 4,
 		PUSH_VLAN = 1 << 5,
+		PORT_ID = 1 << 6,
 	};
 	struct fm_tcam_match_entry *fmt;
 	struct fm_action_op fm_op;
@@ -1105,6 +1144,7 @@ enic_fm_copy_action(struct enic_flowman *fm,
 	uint64_t vnic_h;
 	uint16_t ovlan;
 	bool first_rq;
+	bool steer;
 	int ret;
 
 	ENICPMD_FUNC_TRACE();
@@ -1112,9 +1152,11 @@ enic_fm_copy_action(struct enic_flowman *fm,
 	need_ovlan_action = false;
 	ovlan = 0;
 	first_rq = true;
-	enic = fm->enic;
+	steer = false;
+	enic = fm->user_enic;
 	overlap = 0;
-	vnic_h = 0; /* 0 = current vNIC */
+	vnic_h = enic->fm_vnic_handle;
+
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
 		switch (actions->type) {
 		case RTE_FLOW_ACTION_TYPE_VOID:
@@ -1195,6 +1237,7 @@ enic_fm_copy_action(struct enic_flowman *fm,
 				return ret;
 			ENICPMD_LOG(DEBUG, "create QUEUE action rq: %u",
 				    fm_op.rq_steer.rq_index);
+			steer = true;
 			break;
 		}
 		case RTE_FLOW_ACTION_TYPE_DROP: {
@@ -1261,16 +1304,16 @@ enic_fm_copy_action(struct enic_flowman *fm,
 				return ret;
 			ENICPMD_LOG(DEBUG, "create QUEUE action rq: %u",
 				    fm_op.rq_steer.rq_index);
+			steer = true;
 			break;
 		}
 		case RTE_FLOW_ACTION_TYPE_PORT_ID: {
 			const struct rte_flow_action_port_id *port;
-			struct rte_pci_device *pdev;
 			struct rte_eth_dev *dev;
 
 			port = actions->conf;
 			if (port->original) {
-				vnic_h = 0; /* This port */
+				vnic_h = enic->fm_vnic_handle; /* This port */
 				break;
 			}
 			ENICPMD_LOG(DEBUG, "port id %u", port->id);
@@ -1285,12 +1328,18 @@ enic_fm_copy_action(struct enic_flowman *fm,
 					RTE_FLOW_ERROR_TYPE_ACTION,
 					NULL, "port_id is not enic");
 			}
-			pdev = RTE_ETH_DEV_TO_PCI(dev);
-			if (enic_fm_find_vnic(enic, &pdev->addr, &vnic_h)) {
+			if (enic->switch_domain_id !=
+			    pmd_priv(dev)->switch_domain_id) {
 				return rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ACTION,
-					NULL, "port_id is not vnic");
+					NULL, "destination and source ports are not in the same switch domain");
 			}
+			vnic_h = pmd_priv(dev)->fm_vnic_handle;
+			overlap |= PORT_ID;
+			/*
+			 * Ingress. Nothing more to do. We add an implicit
+			 * steer at the end if needed.
+			 */
 			break;
 		}
 		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: {
@@ -1366,8 +1415,16 @@ enic_fm_copy_action(struct enic_flowman *fm,
 		}
 	}
 
-	if (!(overlap & (FATE | PASSTHRU | COUNT)))
+	if (!(overlap & (FATE | PASSTHRU | COUNT | PORT_ID)))
 		goto unsupported;
+	/* Egress from VF: need implicit WQ match */
+	if (enic_is_vf_rep(enic) && !ingress) {
+		fmt->ftm_data.fk_wq_id = 0;
+		fmt->ftm_mask.fk_wq_id = 0xffff;
+		fmt->ftm_data.fk_wq_vnic = enic->fm_vnic_handle;
+		ENICPMD_LOG(DEBUG, "add implicit wq id match for vf %d",
+			    VF_ENIC_TO_VF_REP(enic)->vf_id);
+	}
 	if (need_ovlan_action) {
 		memset(&fm_op, 0, sizeof(fm_op));
 		fm_op.fa_op = FMOP_SET_OVLAN;
@@ -1376,6 +1433,19 @@ enic_fm_copy_action(struct enic_flowman *fm,
 		if (ret)
 			return ret;
 	}
+	/* Add steer op for PORT_ID without QUEUE */
+	if ((overlap & PORT_ID) && !steer && ingress) {
+		memset(&fm_op, 0, sizeof(fm_op));
+		/* Always to queue 0 for now as generic RSS is not available */
+		fm_op.fa_op = FMOP_RQ_STEER;
+		fm_op.rq_steer.rq_index = 0;
+		fm_op.rq_steer.vnic_handle = vnic_h;
+		ret = enic_fm_append_action_op(fm, &fm_op, error);
+		if (ret)
+			return ret;
+		ENICPMD_LOG(DEBUG, "add implicit steer op");
+	}
+	/* Add required END */
 	memset(&fm_op, 0, sizeof(fm_op));
 	fm_op.fa_op = FMOP_END;
 	ret = enic_fm_append_action_op(fm, &fm_op, error);
@@ -1618,7 +1688,7 @@ enic_fm_flow_parse(struct enic_flowman *fm,
 					   NULL,
 					   "priorities are not supported");
 			return -rte_errno;
-		} else if (attrs->transfer) {
+		} else if (!fm->owner_enic->switchdev_mode && attrs->transfer) {
 			rte_flow_error_set(error, ENOTSUP,
 					   RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER,
 					   NULL,
@@ -1675,12 +1745,10 @@ enic_fm_more_counters(struct enic_flowman *fm)
 {
 	struct enic_fm_counter *new_stack;
 	struct enic_fm_counter *ctrs;
-	struct enic *enic;
 	int i, rc;
 	uint64_t args[2];
 
 	ENICPMD_FUNC_TRACE();
-	enic = fm->enic;
 	new_stack = rte_realloc(fm->counter_stack, (fm->counters_alloced +
 				FM_COUNTERS_EXPAND) *
 				sizeof(struct enic_fm_counter), 0);
@@ -1692,7 +1760,7 @@ enic_fm_more_counters(struct enic_flowman *fm)
 
 	args[0] = FM_COUNTER_BRK;
 	args[1] = fm->counters_alloced + FM_COUNTERS_EXPAND;
-	rc = vnic_dev_flowman_cmd(enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc != 0) {
 		ENICPMD_LOG(ERR, "cannot alloc counters rc=%d", rc);
 		return rc;
@@ -1712,16 +1780,14 @@ enic_fm_more_counters(struct enic_flowman *fm)
 static int
 enic_fm_counter_zero(struct enic_flowman *fm, struct enic_fm_counter *c)
 {
-	struct enic *enic;
 	uint64_t args[3];
 	int ret;
 
 	ENICPMD_FUNC_TRACE();
-	enic = fm->enic;
 	args[0] = FM_COUNTER_QUERY;
 	args[1] = c->handle;
 	args[2] = 1; /* clear */
-	ret = vnic_dev_flowman_cmd(enic->vdev, args, 3);
+	ret = flowman_cmd(fm, args, 3);
 	if (ret) {
 		ENICPMD_LOG(ERR, "counter init: rc=%d handle=0x%x",
 			    ret, c->handle);
@@ -1761,7 +1827,7 @@ enic_fm_action_free(struct enic_flowman *fm, uint64_t handle)
 	ENICPMD_FUNC_TRACE();
 	args[0] = FM_ACTION_FREE;
 	args[1] = handle;
-	rc = vnic_dev_flowman_cmd(fm->enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc)
 		ENICPMD_LOG(ERR, "cannot free action: rc=%d handle=0x%" PRIx64,
 			    rc, handle);
@@ -1777,7 +1843,7 @@ enic_fm_entry_free(struct enic_flowman *fm, uint64_t handle)
 	ENICPMD_FUNC_TRACE();
 	args[0] = FM_MATCH_ENTRY_REMOVE;
 	args[1] = handle;
-	rc = vnic_dev_flowman_cmd(fm->enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc)
 		ENICPMD_LOG(ERR, "cannot free match entry: rc=%d"
 			    " handle=0x%" PRIx64, rc, handle);
@@ -1881,7 +1947,7 @@ enic_fm_add_tcam_entry(struct enic_flowman *fm,
 	args[0] = FM_TCAM_ENTRY_INSTALL;
 	args[1] = ingress ? fm->ig_tcam_hndl : fm->eg_tcam_hndl;
 	args[2] = fm->cmd.pa;
-	ret = vnic_dev_flowman_cmd(fm->enic->vdev, args, 3);
+	ret = flowman_cmd(fm, args, 3);
 	if (ret != 0) {
 		ENICPMD_LOG(ERR, "cannot add %s TCAM entry: rc=%d",
 			    ingress ? "ingress" : "egress", ret);
@@ -1931,7 +1997,7 @@ enic_fm_add_exact_entry(struct enic_flowman *fm,
 	args[0] = FM_EXACT_ENTRY_INSTALL;
 	args[1] = fet->handle;
 	args[2] = fm->cmd.pa;
-	ret = vnic_dev_flowman_cmd(fm->enic->vdev, args, 3);
+	ret = flowman_cmd(fm, args, 3);
 	if (ret != 0) {
 		ENICPMD_LOG(ERR, "cannot add %s exact entry: group=%u",
 			    fet->ingress ? "ingress" : "egress", fet->group);
@@ -1970,7 +2036,7 @@ __enic_fm_flow_add_entry(struct enic_flowman *fm,
 	memcpy(fma, action_in, sizeof(*fma));
 	args[0] = FM_ACTION_ALLOC;
 	args[1] = fm->cmd.pa;
-	ret = vnic_dev_flowman_cmd(fm->enic->vdev, args, 2);
+	ret = flowman_cmd(fm, args, 2);
 	if (ret != 0) {
 		ENICPMD_LOG(ERR, "allocating TCAM table action rc=%d", ret);
 		rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -2140,7 +2206,7 @@ enic_fm_flow_validate(struct rte_eth_dev *dev,
 	int ret;
 
 	ENICPMD_FUNC_TRACE();
-	fm = pmd_priv(dev)->fm;
+	fm = begin_fm(pmd_priv(dev));
 	if (fm == NULL)
 		return -ENOTSUP;
 	enic_fm_open_scratch(fm);
@@ -2152,6 +2218,7 @@ enic_fm_flow_validate(struct rte_eth_dev *dev,
 					attrs->ingress);
 	}
 	enic_fm_close_scratch(fm);
+	end_fm(fm);
 	return ret;
 }
 
@@ -2162,33 +2229,38 @@ enic_fm_flow_query_count(struct rte_eth_dev *dev,
 {
 	struct rte_flow_query_count *query;
 	struct enic_fm_flow *fm_flow;
-	struct enic *enic;
+	struct enic_flowman *fm;
 	uint64_t args[3];
 	int rc;
 
 	ENICPMD_FUNC_TRACE();
-	enic = pmd_priv(dev);
+	fm = begin_fm(pmd_priv(dev));
 	query = data;
 	fm_flow = flow->fm;
-	if (!fm_flow->counter_valid)
-		return rte_flow_error_set(error, ENOTSUP,
+	if (!fm_flow->counter_valid) {
+		rc = rte_flow_error_set(error, ENOTSUP,
 			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 			"enic: flow does not have counter");
+		goto exit;
+	}
 
 	args[0] = FM_COUNTER_QUERY;
 	args[1] = fm_flow->counter->handle;
 	args[2] = query->reset;
-	rc = vnic_dev_flowman_cmd(enic->vdev, args, 3);
+	rc = flowman_cmd(fm, args, 3);
 	if (rc) {
 		ENICPMD_LOG(ERR, "cannot query counter: rc=%d handle=0x%x",
 			    rc, fm_flow->counter->handle);
-		return rc;
+		goto exit;
 	}
 	query->hits_set = 1;
 	query->hits = args[0];
 	query->bytes_set = 1;
 	query->bytes = args[1];
-	return 0;
+	rc = 0;
+exit:
+	end_fm(fm);
+	return rc;
 }
 
 static int
@@ -2237,7 +2309,7 @@ enic_fm_flow_create(struct rte_eth_dev *dev,
 
 	ENICPMD_FUNC_TRACE();
 	enic = pmd_priv(dev);
-	fm = enic->fm;
+	fm = begin_fm(enic);
 	if (fm == NULL) {
 		rte_flow_error_set(error, ENOTSUP,
 			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -2275,6 +2347,7 @@ enic_fm_flow_create(struct rte_eth_dev *dev,
 
 error_with_scratch:
 	enic_fm_close_scratch(fm);
+	end_fm(fm);
 	return flow;
 }
 
@@ -2283,12 +2356,15 @@ enic_fm_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		     __rte_unused struct rte_flow_error *error)
 {
 	struct enic *enic = pmd_priv(dev);
+	struct enic_flowman *fm;
 
 	ENICPMD_FUNC_TRACE();
-	if (enic->fm == NULL)
+	fm = begin_fm(enic);
+	if (fm == NULL)
 		return 0;
 	LIST_REMOVE(flow, next);
-	enic_fm_flow_free(enic->fm, flow);
+	enic_fm_flow_free(fm, flow);
+	end_fm(fm);
 	return 0;
 }
 
@@ -2296,19 +2372,27 @@ static int
 enic_fm_flow_flush(struct rte_eth_dev *dev,
 		   __rte_unused struct rte_flow_error *error)
 {
+	LIST_HEAD(enic_flows, rte_flow) internal;
 	struct enic_fm_flow *fm_flow;
 	struct enic_flowman *fm;
 	struct rte_flow *flow;
 	struct enic *enic = pmd_priv(dev);
 
 	ENICPMD_FUNC_TRACE();
-	if (enic->fm == NULL)
+
+	fm = begin_fm(enic);
+	if (fm == NULL)
 		return 0;
-	fm = enic->fm;
+	/* Destroy all non-internal flows */
+	LIST_INIT(&internal);
 	while (!LIST_EMPTY(&enic->flows)) {
 		flow = LIST_FIRST(&enic->flows);
 		fm_flow = flow->fm;
 		LIST_REMOVE(flow, next);
+		if (flow->internal) {
+			LIST_INSERT_HEAD(&internal, flow, next);
+			continue;
+		}
 		/*
 		 * If tables are null, then vNIC is closing, and the firmware
 		 * has already cleaned up flowman state. So do not try to free
@@ -2321,6 +2405,12 @@ enic_fm_flow_flush(struct rte_eth_dev *dev,
 		}
 		enic_fm_flow_free(fm, flow);
 	}
+	while (!LIST_EMPTY(&internal)) {
+		flow = LIST_FIRST(&internal);
+		LIST_REMOVE(flow, next);
+		LIST_INSERT_HEAD(&enic->flows, flow, next);
+	}
+	end_fm(fm);
 	return 0;
 }
 
@@ -2332,7 +2422,7 @@ enic_fm_tbl_free(struct enic_flowman *fm, uint64_t handle)
 
 	args[0] = FM_MATCH_TABLE_FREE;
 	args[1] = handle;
-	rc = vnic_dev_flowman_cmd(fm->enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc)
 		ENICPMD_LOG(ERR, "cannot free table: rc=%d handle=0x%" PRIx64,
 			    rc, handle);
@@ -2344,19 +2434,17 @@ enic_fm_tcam_tbl_alloc(struct enic_flowman *fm, uint32_t direction,
 			uint32_t max_entries, uint64_t *handle)
 {
 	struct fm_tcam_match_table *tcam_tbl;
-	struct enic *enic;
 	uint64_t args[2];
 	int rc;
 
 	ENICPMD_FUNC_TRACE();
-	enic = fm->enic;
 	tcam_tbl = &fm->cmd.va->fm_tcam_match_table;
 	tcam_tbl->ftt_direction = direction;
 	tcam_tbl->ftt_stage = FM_STAGE_LAST;
 	tcam_tbl->ftt_max_entries = max_entries;
 	args[0] = FM_TCAM_TABLE_ALLOC;
 	args[1] = fm->cmd.pa;
-	rc = vnic_dev_flowman_cmd(enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc) {
 		ENICPMD_LOG(ERR, "cannot alloc %s TCAM table: rc=%d",
 			    (direction == FM_INGRESS) ? "IG" : "EG", rc);
@@ -2379,14 +2467,12 @@ enic_fm_init_counters(struct enic_flowman *fm)
 static void
 enic_fm_free_all_counters(struct enic_flowman *fm)
 {
-	struct enic *enic;
 	uint64_t args[2];
 	int rc;
 
-	enic = fm->enic;
 	args[0] = FM_COUNTER_BRK;
 	args[1] = 0;
-	rc = vnic_dev_flowman_cmd(enic->vdev, args, 2);
+	rc = flowman_cmd(fm, args, 2);
 	if (rc != 0)
 		ENICPMD_LOG(ERR, "cannot free counters: rc=%d", rc);
 	rte_free(fm->counter_stack);
@@ -2428,6 +2514,7 @@ enic_fm_free_tcam_tables(struct enic_flowman *fm)
 int
 enic_fm_init(struct enic *enic)
 {
+	const struct rte_pci_addr *addr;
 	struct enic_flowman *fm;
 	uint8_t name[RTE_MEMZONE_NAMESIZE];
 	int rc;
@@ -2435,12 +2522,30 @@ enic_fm_init(struct enic *enic)
 	if (enic->flow_filter_mode != FILTER_FLOWMAN)
 		return 0;
 	ENICPMD_FUNC_TRACE();
+	/* Get vnic handle and save for port-id action */
+	if (enic_is_vf_rep(enic))
+		addr = &VF_ENIC_TO_VF_REP(enic)->bdf;
+	else
+		addr = &RTE_ETH_DEV_TO_PCI(enic->rte_dev)->addr;
+	rc = enic_fm_find_vnic(enic, addr, &enic->fm_vnic_handle);
+	if (rc) {
+		ENICPMD_LOG(ERR, "cannot find vnic handle for %x:%x:%x",
+			    addr->bus, addr->devid, addr->function);
+		return rc;
+	}
+	/* Save UIF for egport action */
+	enic->fm_vnic_uif = vnic_dev_uif(enic->vdev);
+	ENICPMD_LOG(DEBUG, "uif %u", enic->fm_vnic_uif);
+	/* Nothing else to do for representor. It will share the PF flowman */
+	if (enic_is_vf_rep(enic))
+		return 0;
 	fm = calloc(1, sizeof(*fm));
 	if (fm == NULL) {
 		ENICPMD_LOG(ERR, "cannot alloc flowman struct");
 		return -ENOMEM;
 	}
-	fm->enic = enic;
+	fm->owner_enic = enic;
+	rte_spinlock_init(&fm->lock);
 	TAILQ_INIT(&fm->fet_list);
 	TAILQ_INIT(&fm->jump_list);
 	/* Allocate host memory for flowman commands */
@@ -2480,6 +2585,7 @@ enic_fm_init(struct enic *enic)
 		goto error_ig_fet;
 	}
 	fm->default_eg_fet->ref = 1;
+	fm->vf_rep_tag = FM_VF_REP_TAG;
 	enic->fm = fm;
 	return 0;
 
@@ -2503,9 +2609,13 @@ enic_fm_destroy(struct enic *enic)
 	struct enic_flowman *fm;
 	struct enic_fm_fet *fet;
 
+	ENICPMD_FUNC_TRACE();
+	if (enic_is_vf_rep(enic)) {
+		delete_rep_flows(enic);
+		return;
+	}
 	if (enic->fm == NULL)
 		return;
-	ENICPMD_FUNC_TRACE();
 	fm = enic->fm;
 	enic_fet_free(fm, fm->default_eg_fet);
 	enic_fet_free(fm, fm->default_ig_fet);
@@ -2582,3 +2692,237 @@ const struct rte_flow_ops enic_fm_flow_ops = {
 	.flush = enic_fm_flow_flush,
 	.query = enic_fm_flow_query,
 };
+
+/* Add a high priority flow that loops representor packets to VF */
+int
+enic_fm_add_rep2vf_flow(struct enic_vf_representor *vf)
+{
+	struct fm_tcam_match_entry *fm_tcam_entry;
+	struct rte_flow *flow0, *flow1;
+	struct fm_action *fm_action;
+	struct rte_flow_error error;
+	struct rte_flow_attr attrs;
+	struct fm_action_op fm_op;
+	struct enic_flowman *fm;
+	struct enic *pf;
+	uint8_t tag;
+
+	pf = vf->pf;
+	fm = pf->fm;
+	tag = fm->vf_rep_tag;
+	enic_fm_open_scratch(fm);
+	fm_tcam_entry = &fm->tcam_entry;
+	fm_action = &fm->action;
+	/* Egress rule: match WQ ID and tag+hairpin */
+	fm_tcam_entry->ftm_data.fk_wq_id = vf->pf_wq_idx;
+	fm_tcam_entry->ftm_mask.fk_wq_id = 0xffff;
+	fm_tcam_entry->ftm_flags |= FMEF_COUNTER;
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_TAG;
+	fm_op.tag.tag = tag;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_EG_HAIRPIN;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_END;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	attrs.group = 0;
+	attrs.ingress = 0;
+	attrs.egress = 1;
+	attrs.priority = FM_HIGHEST_PRIORITY;
+	flow0 = enic_fm_flow_add_entry(fm, fm_tcam_entry, fm_action,
+				       &attrs, &error);
+	enic_fm_close_scratch(fm);
+	if (flow0 == NULL) {
+		ENICPMD_LOG(ERR, "Cannot create flow 0 for representor->VF");
+		return -EINVAL;
+	}
+	LIST_INSERT_HEAD(&pf->flows, flow0, next);
+	/* Make this flow internal, so the user app cannot delete it */
+	flow0->internal = 1;
+	ENICPMD_LOG(DEBUG, "representor->VF %d flow created: wq %d -> tag %d hairpin",
+		    vf->vf_id, vf->pf_wq_idx, tag);
+
+	/* Ingress: steer hairpinned to VF RQ 0 */
+	enic_fm_open_scratch(fm);
+	fm_tcam_entry->ftm_flags |= FMEF_COUNTER;
+	fm_tcam_entry->ftm_data.fk_hdrset[0].fk_metadata |= FKM_EG_HAIRPINNED;
+	fm_tcam_entry->ftm_mask.fk_hdrset[0].fk_metadata |= FKM_EG_HAIRPINNED;
+	fm_tcam_entry->ftm_data.fk_packet_tag = tag;
+	fm_tcam_entry->ftm_mask.fk_packet_tag = 0xff;
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_RQ_STEER;
+	fm_op.rq_steer.rq_index = 0;
+	fm_op.rq_steer.vnic_handle = vf->enic.fm_vnic_handle;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_END;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	attrs.group = 0;
+	attrs.ingress = 1;
+	attrs.egress = 0;
+	attrs.priority = FM_HIGHEST_PRIORITY;
+	flow1 = enic_fm_flow_add_entry(fm, fm_tcam_entry, fm_action,
+				       &attrs, &error);
+	enic_fm_close_scratch(fm);
+	if (flow1 == NULL) {
+		ENICPMD_LOG(ERR, "Cannot create flow 1 for representor->VF");
+		enic_fm_flow_destroy(pf->rte_dev, flow0, &error);
+		return -EINVAL;
+	}
+	LIST_INSERT_HEAD(&pf->flows, flow1, next);
+	flow1->internal = 1;
+	ENICPMD_LOG(DEBUG, "representor->VF %d flow created: tag %d hairpinned -> VF RQ %d",
+		    vf->vf_id, tag, fm_op.rq_steer.rq_index);
+	vf->rep2vf_flow[0] = flow0;
+	vf->rep2vf_flow[1] = flow1;
+	/* Done with this tag, use a different one next time */
+	fm->vf_rep_tag++;
+	return 0;
+}
+
+/*
+ * Add a low priority flow that matches all packets from VF and loops them
+ * back to the representor.
+ */
+int
+enic_fm_add_vf2rep_flow(struct enic_vf_representor *vf)
+{
+	struct fm_tcam_match_entry *fm_tcam_entry;
+	struct rte_flow *flow0, *flow1;
+	struct fm_action *fm_action;
+	struct rte_flow_error error;
+	struct rte_flow_attr attrs;
+	struct fm_action_op fm_op;
+	struct enic_flowman *fm;
+	struct enic *pf;
+	uint8_t tag;
+
+	pf = vf->pf;
+	fm = pf->fm;
+	tag = fm->vf_rep_tag;
+	enic_fm_open_scratch(fm);
+	fm_tcam_entry = &fm->tcam_entry;
+	fm_action = &fm->action;
+	/* Egress rule: match-any and tag+hairpin */
+	fm_tcam_entry->ftm_data.fk_wq_id = 0;
+	fm_tcam_entry->ftm_mask.fk_wq_id = 0xffff;
+	fm_tcam_entry->ftm_data.fk_wq_vnic = vf->enic.fm_vnic_handle;
+	fm_tcam_entry->ftm_flags |= FMEF_COUNTER;
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_TAG;
+	fm_op.tag.tag = tag;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_EG_HAIRPIN;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_END;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	attrs.group = 0;
+	attrs.ingress = 0;
+	attrs.egress = 1;
+	attrs.priority = FM_LOWEST_PRIORITY;
+	flow0 = enic_fm_flow_add_entry(fm, fm_tcam_entry, fm_action,
+				       &attrs, &error);
+	enic_fm_close_scratch(fm);
+	if (flow0 == NULL) {
+		ENICPMD_LOG(ERR, "Cannot create flow 0 for VF->representor");
+		return -EINVAL;
+	}
+	LIST_INSERT_HEAD(&pf->flows, flow0, next);
+	/* Make this flow internal, so the user app cannot delete it */
+	flow0->internal = 1;
+	ENICPMD_LOG(DEBUG, "VF %d->representor flow created: wq %d (low prio) -> tag %d hairpin",
+		    vf->vf_id, fm_tcam_entry->ftm_data.fk_wq_id, tag);
+
+	/* Ingress: steer hairpinned to VF rep RQ */
+	enic_fm_open_scratch(fm);
+	fm_tcam_entry->ftm_flags |= FMEF_COUNTER;
+	fm_tcam_entry->ftm_data.fk_hdrset[0].fk_metadata |= FKM_EG_HAIRPINNED;
+	fm_tcam_entry->ftm_mask.fk_hdrset[0].fk_metadata |= FKM_EG_HAIRPINNED;
+	fm_tcam_entry->ftm_data.fk_packet_tag = tag;
+	fm_tcam_entry->ftm_mask.fk_packet_tag = 0xff;
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_RQ_STEER;
+	fm_op.rq_steer.rq_index = vf->pf_rq_sop_idx;
+	fm_op.rq_steer.vnic_handle = pf->fm_vnic_handle;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	memset(&fm_op, 0, sizeof(fm_op));
+	fm_op.fa_op = FMOP_END;
+	enic_fm_append_action_op(fm, &fm_op, &error);
+	attrs.group = 0;
+	attrs.ingress = 1;
+	attrs.egress = 0;
+	attrs.priority = FM_HIGHEST_PRIORITY;
+	flow1 = enic_fm_flow_add_entry(fm, fm_tcam_entry, fm_action,
+				       &attrs, &error);
+	enic_fm_close_scratch(fm);
+	if (flow1 == NULL) {
+		ENICPMD_LOG(ERR, "Cannot create flow 1 for VF->representor");
+		enic_fm_flow_destroy(pf->rte_dev, flow0, &error);
+		return -EINVAL;
+	}
+	LIST_INSERT_HEAD(&pf->flows, flow1, next);
+	flow1->internal = 1;
+	ENICPMD_LOG(DEBUG, "VF %d->representor flow created: tag %d hairpinned -> PF RQ %d",
+		    vf->vf_id, tag, vf->pf_rq_sop_idx);
+	vf->vf2rep_flow[0] = flow0;
+	vf->vf2rep_flow[1] = flow1;
+	/* Done with this tag, use a different one next time */
+	fm->vf_rep_tag++;
+	return 0;
+}
+
+/* Destroy representor flows created by enic_fm_add_{rep2vf,vf2rep}_flow */
+static void
+delete_rep_flows(struct enic *enic)
+{
+	struct enic_vf_representor *vf;
+	struct rte_flow_error error;
+	struct rte_eth_dev *dev;
+	uint32_t i;
+
+	RTE_ASSERT(enic_is_vf_rep(enic));
+	vf = VF_ENIC_TO_VF_REP(enic);
+	dev = vf->pf->rte_dev;
+	for (i = 0; i < ARRAY_SIZE(vf->vf2rep_flow); i++) {
+		if (vf->vf2rep_flow[i])
+			enic_fm_flow_destroy(dev, vf->vf2rep_flow[i], &error);
+	}
+	for (i = 0; i < ARRAY_SIZE(vf->rep2vf_flow); i++) {
+		if (vf->rep2vf_flow[i])
+			enic_fm_flow_destroy(dev, vf->rep2vf_flow[i], &error);
+	}
+}
+
+static struct enic_flowman *
+begin_fm(struct enic *enic)
+{
+	struct enic_vf_representor *vf;
+	struct enic_flowman *fm;
+
+	/* Representor uses PF flowman */
+	if (enic_is_vf_rep(enic)) {
+		vf = VF_ENIC_TO_VF_REP(enic);
+		fm = vf->pf->fm;
+	} else {
+		fm = enic->fm;
+	}
+	/* Save the API caller and lock if representors exist */
+	if (fm) {
+		if (fm->owner_enic->switchdev_mode)
+			rte_spinlock_lock(&fm->lock);
+		fm->user_enic = enic;
+	}
+	return fm;
+}
+
+static void
+end_fm(struct enic_flowman *fm)
+{
+	fm->user_enic = NULL;
+	if (fm->owner_enic->switchdev_mode)
+		rte_spinlock_unlock(&fm->lock);
+}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [dpdk-dev] [PATCH 5/5] net/enic: enable flow API for VF representor
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
                   ` (3 preceding siblings ...)
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 4/5] net/enic: extend flow handler to support VF representors Hyong Youb Kim
@ 2020-09-09 13:56 ` Hyong Youb Kim
  2020-09-21 15:35 ` [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV " Ferruh Yigit
  5 siblings, 0 replies; 7+ messages in thread
From: Hyong Youb Kim @ 2020-09-09 13:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hyong Youb Kim, John Daley

Use Flow Manager (flowman) to support the flow API for
representors. The representor's flow handlers simply invoke the PF
handlers and pass the representor's flowman structure. The PF flowman
handlers are aware of representors and perform the appropriate devcmds
to create flows on the NIC.

Also use flowman to create the internal flows for the implicit
VF-representor path. With that, representor Tx/Rx is now functional.
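
As a usage sketch (hypothetical; rep_port_id and dst_port_id are
placeholders), an application such as OVS-DPDK could redirect traffic
arriving on a representor to another port in the same switch domain:

    /* Only ingress is accepted on representor ports; the driver
     * swaps the direction for the firmware internally.
     */
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_port_id dst = { .id = dst_port_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *flow;

    flow = rte_flow_create(rep_port_id, &attr, pattern, actions, &err);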

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
 doc/guides/rel_notes/release_20_11.rst |   4 +
 drivers/net/enic/enic_vf_representor.c | 160 +++++++++++++++++++++++++
 2 files changed, 164 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index df227a177..180ab8fa0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -134,3 +134,7 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
+* **Updated Cisco enic driver.**
+
+  * Added support for VF representors with single-queue Tx/Rx and flow API
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cb41bb140..5d34e1b46 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -124,6 +124,33 @@ static int enic_vf_dev_configure(struct rte_eth_dev *eth_dev __rte_unused)
 	return 0;
 }
 
+static int
+setup_rep_vf_fwd(struct enic_vf_representor *vf)
+{
+	int ret;
+
+	ENICPMD_FUNC_TRACE();
+	/* Representor -> VF rule
+	 * Egress packets from this representor are on the representor's WQ.
+	 * So, loop back that WQ to VF.
+	 */
+	ret = enic_fm_add_rep2vf_flow(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot create representor->VF flow");
+		return ret;
+	}
+	/* VF -> representor rule
+	 * Packets from VF loop back to the representor, unless they match
+	 * user-added flows.
+	 */
+	ret = enic_fm_add_vf2rep_flow(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot create VF->representor flow");
+		return ret;
+	}
+	return 0;
+}
+
 static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 {
 	struct enic_vf_representor *vf;
@@ -138,6 +165,16 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 
 	vf = eth_dev->data->dev_private;
 	pf = vf->pf;
+	/* Get representor flowman for flow API and representor path */
+	ret = enic_fm_init(&vf->enic);
+	if (ret)
+		return ret;
+	/* Set up implicit flow rules to forward between representor and VF */
+	ret = setup_rep_vf_fwd(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot set up representor-VF flows");
+		return ret;
+	}
 	/* Remove all packet filters so no ingress packets go to VF.
 	 * When PF enables switchdev, it will ensure packet filters
 	 * are removed.  So, this is not technically needed.
@@ -232,6 +269,8 @@ static void enic_vf_dev_stop(struct rte_eth_dev *eth_dev)
 	vnic_cq_clean(&pf->cq[enic_cq_rq(vf->pf, vf->pf_rq_sop_idx)]);
 	eth_dev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
 	eth_dev->data->rx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
+	/* Clean up representor flowman */
+	enic_fm_destroy(&vf->enic);
 }
 
 /*
@@ -245,6 +284,126 @@ static void enic_vf_dev_close(struct rte_eth_dev *eth_dev __rte_unused)
 		return;
 }
 
+static int
+adjust_flow_attr(const struct rte_flow_attr *attrs,
+		 struct rte_flow_attr *vf_attrs,
+		 struct rte_flow_error *error)
+{
+	if (!attrs) {
+		return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR,
+				NULL, "no attribute specified");
+	}
+	/*
+	 * Swap ingress and egress, as the firmware's view of the
+	 * direction is the opposite of the representor's.
+	 */
+	*vf_attrs = *attrs;
+	if (attrs->ingress && !attrs->egress) {
+		vf_attrs->ingress = 0;
+		vf_attrs->egress = 1;
+		return 0;
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+			"representor only supports ingress");
+}
+
+static int
+enic_vf_flow_validate(struct rte_eth_dev *dev,
+		      const struct rte_flow_attr *attrs,
+		      const struct rte_flow_item pattern[],
+		      const struct rte_flow_action actions[],
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_attr vf_attrs;
+	int ret;
+
+	ret = adjust_flow_attr(attrs, &vf_attrs, error);
+	if (ret)
+		return ret;
+	attrs = &vf_attrs;
+	return enic_fm_flow_ops.validate(dev, attrs, pattern, actions, error);
+}
+
+static struct rte_flow *
+enic_vf_flow_create(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attrs,
+		    const struct rte_flow_item pattern[],
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error)
+{
+	struct rte_flow_attr vf_attrs;
+
+	if (adjust_flow_attr(attrs, &vf_attrs, error))
+		return NULL;
+	attrs = &vf_attrs;
+	return enic_fm_flow_ops.create(dev, attrs, pattern, actions, error);
+}
+
+static int
+enic_vf_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		     struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.destroy(dev, flow, error);
+}
+
+static int
+enic_vf_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *actions,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.query(dev, flow, actions, data, error);
+}
+
+static int
+enic_vf_flow_flush(struct rte_eth_dev *dev,
+		   struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.flush(dev, error);
+}
+
+static const struct rte_flow_ops enic_vf_flow_ops = {
+	.validate = enic_vf_flow_validate,
+	.create = enic_vf_flow_create,
+	.destroy = enic_vf_flow_destroy,
+	.flush = enic_vf_flow_flush,
+	.query = enic_vf_flow_query,
+};
+
+static int
+enic_vf_filter_ctrl(struct rte_eth_dev *eth_dev,
+		    enum rte_filter_type filter_type,
+		    enum rte_filter_op filter_op,
+		    void *arg)
+{
+	struct enic_vf_representor *vf;
+	int ret = 0;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		if (vf->enic.flow_filter_mode == FILTER_FLOWMAN) {
+			*(const void **)arg = &enic_vf_flow_ops;
+		} else {
+			ENICPMD_LOG(WARNING, "VF representors require flowman support for rte_flow API");
+			ret = -EINVAL;
+		}
+		break;
+	default:
+		ENICPMD_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
 static int enic_vf_link_update(struct rte_eth_dev *eth_dev,
 	int wait_to_complete __rte_unused)
 {
@@ -404,6 +563,7 @@ static const struct eth_dev_ops enic_vf_representor_dev_ops = {
 	.dev_start            = enic_vf_dev_start,
 	.dev_stop             = enic_vf_dev_stop,
 	.dev_close            = enic_vf_dev_close,
+	.filter_ctrl          = enic_vf_filter_ctrl,
 	.link_update          = enic_vf_link_update,
 	.promiscuous_enable   = enic_vf_promiscuous_enable,
 	.promiscuous_disable  = enic_vf_promiscuous_disable,
-- 
2.26.2


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor
  2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
                   ` (4 preceding siblings ...)
  2020-09-09 13:56 ` [dpdk-dev] [PATCH 5/5] net/enic: enable flow API for VF representor Hyong Youb Kim
@ 2020-09-21 15:35 ` Ferruh Yigit
  5 siblings, 0 replies; 7+ messages in thread
From: Ferruh Yigit @ 2020-09-21 15:35 UTC (permalink / raw)
  To: Hyong Youb Kim; +Cc: dev

On 9/9/2020 2:56 PM, Hyong Youb Kim wrote:
> This series adds VF representors to the driver. It enables
> single-queue representors and implements enough flow features to run
> OVS-DPDK offload for default vlan+mac based switching.
> 
> The flow API handlers and devcmd functions (firmware commands) are now
> aware of representors. Representors reserve PF Tx/Rx queues for their
> implicit paths to/from VFs. Packet forwarding rules for these implicit
> paths are set up using firmware's Flow Manager (flowman), which is
> also used for rte_flow API.
> 
> Thanks.
> -Hyong
> 
> Hyong Youb Kim (5):
>    net/enic: extend vnic dev API for VF representors
>    net/enic: add minimal VF representor
>    net/enic: add single-queue Tx and Rx to VF representor
>    net/enic: extend flow handler to support VF representors
>    net/enic: enable flow API for VF representor
> 

Series applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-09-21 15:35 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-09 13:56 [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV VF representor Hyong Youb Kim
2020-09-09 13:56 ` [dpdk-dev] [PATCH 1/5] net/enic: extend vnic dev API for VF representors Hyong Youb Kim
2020-09-09 13:56 ` [dpdk-dev] [PATCH 2/5] net/enic: add minimal VF representor Hyong Youb Kim
2020-09-09 13:56 ` [dpdk-dev] [PATCH 3/5] net/enic: add single-queue Tx and Rx to " Hyong Youb Kim
2020-09-09 13:56 ` [dpdk-dev] [PATCH 4/5] net/enic: extend flow handler to support VF representors Hyong Youb Kim
2020-09-09 13:56 ` [dpdk-dev] [PATCH 5/5] net/enic: enable flow API for VF representor Hyong Youb Kim
2020-09-21 15:35 ` [dpdk-dev] [PATCH 0/5] net/enic: add SR-IOV " Ferruh Yigit
