* [PATCH 0/4] add vcpf pmd support
@ 2025-09-22 9:48 Shetty, Praveen
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
` (3 more replies)
0 siblings, 4 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 9:48 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev
Virtual Control Plane Function (vCPF) is an SR-IOV Virtual Function of
the CPF (PF) device. vCPF is used to support multiple control plane functions.
This patchset extends the CPFL PMD to support the new vCPF device.
In this implementation, the CPFL and vCPF devices share most of the
initialization routine and the common data path implementation, which
eliminates code duplication and improves the maintainability of the driver code.
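The sharing works by keying off the PCI device id inside the common
initialization paths rather than forking them per device. A minimal sketch of
the pattern the series uses (the identifiers are those introduced by the
patches; the surrounding driver plumbing is assumed):

    /* One mailbox-init path serving the CPF PF, SR-IOV VF and vCPF alike. */
    static int
    init_mbx(struct idpf_hw *hw)
    {
            /* VF-style functions (SR-IOV VF, vCPF) use the VF control
             * queue registers; the CPF PF uses the PF registers.
             */
            if (hw->device_id == IDPF_DEV_ID_SRIOV ||
                hw->device_id == IXD_DEV_ID_VCPF)
                    return idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);

            return idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
    }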
Praveen Shetty (4):
net/intel: add vCPF PMD support
net/idpf: add splitq jumbo packet handling
net/intel: add config queue support to vCPF
net/cpfl: add cpchnl get vport info support
drivers/net/intel/cpfl/cpfl_cpchnl.h | 7 +-
drivers/net/intel/cpfl/cpfl_ethdev.c | 354 ++++++++++++++++--
drivers/net/intel/cpfl/cpfl_ethdev.h | 109 +++++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.c | 4 +-
drivers/net/intel/idpf/idpf_common_device.h | 3 +
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++-
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 ++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
11 files changed, 629 insertions(+), 88 deletions(-)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH 1/4] net/intel: add vCPF PMD support
2025-09-22 9:48 [PATCH 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-22 9:48 ` Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
` (2 more replies)
2025-09-22 9:48 ` [PATCH 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
` (2 subsequent siblings)
3 siblings, 3 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 9:48 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev, Praveen Shetty
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds registration support for the new vCPF PMD.
The vCPF PMD is responsible for enabling control path and data path
functionality for CPF VF devices.
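Both registrations reuse cpfl_pci_probe()/cpfl_pci_remove(); where behaviour
must differ, later patches in the series branch on the device id. A
hypothetical predicate capturing that check (not part of this patch; the
device ids are the ones defined below):

    /* Hypothetical helper: true when the adapter sits on a vCPF VF
     * (device id 0x1203) rather than the CPF PF (0x1453).
     */
    static inline bool
    cpfl_is_vcpf(struct cpfl_adapter_ext *adapter)
    {
            return adapter->base.hw.device_id == IXD_DEV_ID_VCPF;
    }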
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 1 +
drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
drivers/net/intel/idpf/idpf_common_device.h | 1 +
4 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6d7b23ad7a..d6227c99b5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
switch (mbx_op) {
case idpf_mbq_opc_send_msg_to_peer_pf:
+ case idpf_mbq_opc_send_msg_to_peer_drv:
if (vc_op == VIRTCHNL2_OP_EVENT) {
cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
ctlq_msg.data_len);
@@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_pci_id pci_id_vcpf_map[] = {
+ { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
static struct cpfl_adapter_ext *
cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
{
@@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
.remove = cpfl_pci_remove,
};
+static struct rte_pci_driver rte_vcpf_pmd = {
+ .id_table = pci_id_vcpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = cpfl_pci_probe,
+ .remove = cpfl_pci_remove,
+};
+
/**
* Driver initialization routine.
* Invoked once at EAL init time.
@@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "* igb_uio | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
CPFL_TX_SINGLE_Q "=<0|1> "
CPFL_RX_SINGLE_Q "=<0|1> "
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index d4e1176ab1..2cfcdd6206 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -60,6 +60,7 @@
/* Device IDs */
#define IDPF_DEV_ID_CPF 0x1453
+#define IXD_DEV_ID_VCPF 0x1203
#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100
#define CPFL_HOST_ID_NUM 2
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..8c637a2fb6 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
struct idpf_ctlq_info *ctlq;
int ret = 0;
- if (hw->device_id == IDPF_DEV_ID_SRIOV)
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
else
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
@@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
struct idpf_hw *hw = &adapter->hw;
int ret;
- if (hw->device_id == IDPF_DEV_ID_SRIOV) {
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
ret = idpf_check_vf_reset_done(hw);
} else {
idpf_reset_pf(hw);
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 5f3e4a4fcf..d536ce7e15 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -11,6 +11,7 @@
#include "idpf_common_logs.h"
#define IDPF_DEV_ID_SRIOV 0x145C
+#define IXD_DEV_ID_VCPF 0x1203
#define IDPF_RSS_KEY_LEN 52
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH 2/4] net/idpf: add splitq jumbo packet handling
2025-09-22 9:48 [PATCH 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-22 9:48 ` Shetty, Praveen
2025-09-22 9:48 ` [PATCH 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-22 9:48 ` [PATCH 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 9:48 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Shukla, Dhananjay, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds jumbo (multi-segment) packet handling to the
idpf_dp_splitq_recv_pkts function.
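The handling follows the standard DPDK scattered-Rx pattern: segments are
chained through pkt_first_seg/pkt_last_seg state kept in the queue until a
descriptor carrying the EOF status bit completes the packet. A simplified
sketch of the per-descriptor logic inside the receive loop (descriptor
parsing and offload flags omitted):

    /* rxm: mbuf for this descriptor; eof: EOF bit extracted from the
     * descriptor's status_err0_qw1 field.
     */
    if (first_seg == NULL) {
            first_seg = rxm;                /* start a new packet */
            first_seg->nb_segs = 1;
            first_seg->pkt_len = pkt_len;
    } else {
            first_seg->pkt_len += pkt_len;  /* grow the chain */
            first_seg->nb_segs++;
            last_seg->next = rxm;
    }

    if (!eof) {
            last_seg = rxm;                 /* more segments to come */
            continue;
    }

    rxm->next = NULL;                       /* packet complete */
    rx_pkts[nb_rx++] = first_seg;
    first_seg = NULL;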
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Shukla, Dhananjay <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++++++++++++++++++-----
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..412aff8f5f 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
- struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *rxq = rx_queue;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
struct idpf_adapter *ad;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pktlen_gen_bufq_id =
rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
gen_id = (pktlen_gen_bufq_id &
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
@@ -697,16 +700,39 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = pkt_len;
rxm->data_len = pkt_len;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /*
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len =
+ (uint16_t)(first_seg->pkt_len +
+ pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
rxm->next = NULL;
- rxm->nb_segs = 1;
- rxm->port = rxq->port_id;
- rxm->ol_flags = 0;
- rxm->packet_type =
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+ first_seg->packet_type =
ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
- status_err0_qw1 = rx_desc->status_err0_qw1;
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
if (idpf_timestamp_dynflag > 0 &&
@@ -719,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*RTE_MBUF_DYNFIELD(rxm,
idpf_timestamp_dynfield_offset,
rte_mbuf_timestamp_t *) = ts_ns;
- rxm->ol_flags |= idpf_timestamp_dynflag;
+ first_seg->ol_flags |= idpf_timestamp_dynflag;
}
- rxm->ol_flags |= pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
- rx_pkts[nb_rx++] = rxm;
+ rx_pkts[nb_rx++] = first_seg;
+
+ first_seg = NULL;
}
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
rxq->bufq1->rx_next_avail = rx_id_bufq1;
if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH 3/4] net/intel: add config queue support to vCPF
2025-09-22 9:48 [PATCH 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-22 9:48 ` [PATCH 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-22 9:48 ` Shetty, Praveen
2025-09-22 9:48 ` [PATCH 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 9:48 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev, Praveen Shetty
From: Praveen Shetty <praveen.shetty@intel.com>
A "configuration queue" is a software term to denote
a hardware mailbox queue dedicated to NSS programming.
While the hardware does not have a construct of a
"configuration queue", software does to state clearly
the distinction between a queue software dedicates to
regular mailbox processing (e.g. CPChnl or Virtchnl)
and a queue software dedicates to NSS programming
(e.g. SEM/LEM rule programming).
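On the vCPF there is no control vport, so the config queue pair is requested
directly over virtchnl. A condensed sketch of that request, mirroring
vcpf_add_queues() below (mbx_q_index is the field this patch adds to struct
virtchnl2_add_queues):

    struct virtchnl2_add_queues add_cfgq = {0};
    struct idpf_cmd_info args = {0};

    add_cfgq.num_tx_q = rte_cpu_to_le_16(1);  /* one Tx config queue */
    add_cfgq.num_rx_q = rte_cpu_to_le_16(1);  /* one Rx config queue */
    add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX; /* ask for a mailbox-type queue */
    add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);

    args.ops = VIRTCHNL2_OP_ADD_QUEUES;
    args.in_args = (uint8_t *)&add_cfgq;
    args.in_args_size = sizeof(add_cfgq);
    args.out_buffer = adapter->base.mbx_resp;
    args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

    /* The response carries queue ids and tail-register chunks, which the
     * driver saves and re-labels as CONFIG_TX/CONFIG_RX queues.
     */
    err = idpf_vc_cmd_execute(&adapter->base, &args);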
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 274 +++++++++++++++---
drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.h | 2 +
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
8 files changed, 449 insertions(+), 55 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index d6227c99b5..4dfdf3133f 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -29,6 +29,9 @@
#define CPFL_FLOW_PARSER "flow_parser"
#endif
+#define vCPF_FID 0
+#define CPFL_FID 6
+
rte_spinlock_t cpfl_adapter_lock;
/* A list for all adapters, one adapter matches one PCI device */
struct cpfl_adapter_list cpfl_adapter_list;
@@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
}
/* ignore if it is ctrl vport */
- if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
+ adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
return;
vport = cpfl_find_vport(adapter, vc_event->vport_id);
@@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
int i, ret;
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
return ret;
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
- VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
return ret;
@@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
}
return 0;
+
}
static int
@@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
@@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
@@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
- if (adapter->ctlqp[i])
+ for (i = 0; i < adapter->num_cfgq; i++) {
+ if (adapter->ctlqp[i]) {
cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
+ adapter->ctlqp[i] = NULL;
+ }
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->ctlqp) {
+ rte_free(adapter->ctlqp);
+ adapter->ctlqp = NULL;
+ }
}
static int
@@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
int ret = 0;
int i = 0;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
+ sizeof(struct idpf_ctlq_info *),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->ctlqp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->num_cfgq; i++) {
cfg_cq = NULL;
ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
&adapter->cfgq_info[i],
@@ -2007,6 +2049,62 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
+static
+int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)adapter->addq_recv_info;
+ struct vcpf_cfg_queue *cfgq;
+ struct virtchnl2_queue_reg_chunk *q_chnk;
+ u16 rx, tx, num_chunks, num_q, struct_size;
+ u32 q_id, q_type;
+
+ rx = 0; tx = 0;
+
+ cfgq = rte_zmalloc("cfgq", adapter->num_cfgq *
+ sizeof(struct vcpf_cfg_queue),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
+ return -ENOMEM;
+ }
+
+ struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
+ adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
+ rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
+
+ num_chunks = add_q->chunks.num_chunks;
+ for (u16 i = 0; i < num_chunks; i++) {
+ num_q = add_q->chunks.chunks[i].num_queues;
+ q_chnk = &add_q->chunks.chunks[i];
+ for (u16 j = 0; j < num_q; j++) {
+ if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
+ break;
+ q_id = q_chnk->start_queue_id + j;
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
+ cfgq[0].qid = q_id;
+ cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
+ tx++;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
+ cfgq[1].qid = q_id;
+ cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
+ rx++;
+ }
+ }
+ }
+
+ adapter->cfgq_in.cfgq = cfgq;
+ adapter->cfgq_in.num_cfgq = adapter->num_cfgq;
+
+ return 0;
+}
+
#define CPFL_CFGQ_RING_LEN 512
#define CPFL_CFGQ_DESCRIPTOR_SIZE 32
#define CPFL_CFGQ_BUFFER_SIZE 256
@@ -2017,32 +2115,71 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
{
struct cpfl_ctlq_create_info *create_cfgq_info;
struct cpfl_vport *vport;
+ struct vcpf_cfgq_info *cfgq_info = &adapter->cfgq_in;
int i, err;
uint32_t ring_size = CPFL_CFGQ_RING_SIZE * sizeof(struct idpf_ctlq_desc);
uint32_t buf_size = CPFL_CFGQ_RING_SIZE * CPFL_CFGQ_BUFFER_SIZE;
+ uint64_t tx_qtail_start;
+ uint64_t rx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint32_t rx_qtail_spacing;
vport = &adapter->ctrl_vport;
+
+ tx_qtail_start = vport->base.chunks_info.tx_qtail_start;
+ tx_qtail_spacing = vport->base.chunks_info.tx_qtail_spacing;
+ rx_qtail_start = vport->base.chunks_info.rx_qtail_start;
+ rx_qtail_spacing = vport->base.chunks_info.rx_qtail_spacing;
+
+ adapter->cfgq_info = rte_zmalloc("cfgq_info", adapter->num_cfgq *
+ sizeof(struct cpfl_ctlq_create_info),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->cfgq_info) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_info");
+ return -ENOMEM;
+ }
+
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (i % 2 == 0) {
- /* Setup Tx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid + i / 2;
+ /* Setup Tx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_TX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.tx_qtail_start +
- i / 2 * vport->base.chunks_info.tx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = tx_qtail_start +
+ i / 2 * tx_qtail_spacing;
+
} else {
- /* Setup Rx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid + i / 2;
+ /* Setup Rx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_RX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.rx_qtail_start +
- i / 2 * vport->base.chunks_info.rx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = rx_qtail_start +
+ i / 2 * rx_qtail_spacing;
+
+
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem,
buf_size)) {
err = -ENOMEM;
@@ -2050,19 +2187,24 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
}
}
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem,
- ring_size)) {
+ ring_size)) {
err = -ENOMEM;
goto free_mem;
}
}
+
return 0;
free_mem:
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
return err;
}
@@ -2107,7 +2249,10 @@ cpfl_ctrl_path_close(struct cpfl_adapter_ext *adapter)
{
cpfl_stop_cfgqs(adapter);
cpfl_remove_cfgqs(adapter);
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ else
+ vcpf_del_queues(adapter);
}
static int
@@ -2115,22 +2260,39 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
{
int ret;
- ret = cpfl_vc_create_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to create control vport");
- return ret;
- }
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ ret = cpfl_vc_create_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create control vport");
+ return ret;
+ }
- ret = cpfl_init_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to init control vport");
- goto err_init_ctrl_vport;
+ ret = cpfl_init_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init control vport");
+ goto err_init_ctrl_vport;
+ }
+ } else {
+ ret = vcpf_add_queues(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to add queues");
+ return ret;
+ }
+
+ ret = vcpf_save_chunk_in_cfgq(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to save config queue chunk");
+ return ret;
+ }
}
ret = cpfl_cfgq_setup(adapter);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to setup control queues");
- goto err_cfgq_setup;
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ goto err_cfgq_setup;
+ else
+ goto err_del_cfg;
}
ret = cpfl_add_cfgqs(adapter);
@@ -2153,9 +2315,13 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
cpfl_remove_cfgqs(adapter);
err_cfgq_setup:
err_init_ctrl_vport:
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+err_del_cfg:
+ vcpf_del_queues(adapter);
return ret;
+
}
static struct virtchnl2_get_capabilities req_caps = {
@@ -2291,12 +2457,29 @@ get_running_host_id(void)
return host_id;
}
+static uint8_t
+set_config_queue_details(struct cpfl_adapter_ext *adapter, struct rte_pci_addr *pci_addr)
+{
+ if (pci_addr->function == CPFL_FID) {
+ adapter->num_cfgq = CPFL_CFGQ_NUM;
+ adapter->num_rx_cfgq = CPFL_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = CPFL_TX_CFGQ_NUM;
+ } else if (pci_addr->function == vCPF_FID) {
+ adapter->num_cfgq = VCPF_CFGQ_NUM;
+ adapter->num_rx_cfgq = VCPF_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = VCPF_TX_CFGQ_NUM;
+ }
+
+ return 0;
+}
+
static int
cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
struct cpfl_devargs *devargs)
{
struct idpf_adapter *base = &adapter->base;
struct idpf_hw *hw = &base->hw;
+ struct rte_pci_addr *pci_addr = &pci_dev->addr;
int ret = 0;
#ifndef RTE_HAS_JANSSON
@@ -2348,10 +2531,23 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
goto err_vports_alloc;
}
- ret = cpfl_ctrl_path_open(adapter);
+ /* set the number of config queues to be requested */
+ ret = set_config_queue_details(adapter, pci_addr);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to setup control path");
- goto err_create_ctrl_vport;
+ PMD_INIT_LOG(ERR, "Failed to set the config queue details");
+ return -1;
+ }
+
+ if (pci_addr->function == vCPF_FID || pci_addr->function == CPFL_FID) {
+ ret = cpfl_ctrl_path_open(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup control path");
+ if (pci_addr->function == CPFL_FID)
+ goto err_create_ctrl_vport;
+ else
+ return ret;
+ }
+
}
#ifdef RTE_HAS_JANSSON
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 2cfcdd6206..81f223eef5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -90,6 +90,9 @@
#define CPFL_FPCP_CFGQ_TX 0
#define CPFL_FPCP_CFGQ_RX 1
#define CPFL_CFGQ_NUM 8
+#define VCPF_RX_CFGQ_NUM 1
+#define VCPF_TX_CFGQ_NUM 1
+#define VCPF_CFGQ_NUM 2
/* bit[15:14] type
* bit[13] host/accelerator core
@@ -201,6 +204,30 @@ struct cpfl_metadata {
struct cpfl_metadata_chunk chunks[CPFL_META_LENGTH];
};
+/**
+ * struct vcpf_cfg_queue - config queue information
+ * @qid: rx/tx queue id
+ * @qtail_reg_start: rx/tx tail queue register start
+ * @qtail_reg_spacing: rx/tx tail queue register spacing
+ */
+struct vcpf_cfg_queue {
+ u32 qid;
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+};
+
+/**
+ * struct vcpf_cfgq_info - config queue information
+ * @num_cfgq: number of config queues
+ * @cfgq_add: config queue add information
+ * @cfgq: config queue information
+ */
+struct vcpf_cfgq_info {
+ u16 num_cfgq;
+ struct virtchnl2_add_queues *cfgq_add;
+ struct vcpf_cfg_queue *cfgq;
+};
+
struct cpfl_adapter_ext {
TAILQ_ENTRY(cpfl_adapter_ext) next;
struct idpf_adapter base;
@@ -230,8 +257,13 @@ struct cpfl_adapter_ext {
/* ctrl vport and ctrl queues. */
struct cpfl_vport ctrl_vport;
uint8_t ctrl_vport_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
- struct idpf_ctlq_info *ctlqp[CPFL_CFGQ_NUM];
- struct cpfl_ctlq_create_info cfgq_info[CPFL_CFGQ_NUM];
+ struct idpf_ctlq_info **ctlqp;
+ struct cpfl_ctlq_create_info *cfgq_info;
+ struct vcpf_cfgq_info cfgq_in;
+ uint8_t addq_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
+ uint16_t num_cfgq;
+ uint16_t num_rx_cfgq;
+ uint16_t num_tx_cfgq;
uint8_t host_id;
};
@@ -252,6 +284,8 @@ int cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter);
int cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter);
int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
uint32_t size, int batch_size);
+int vcpf_add_queues(struct cpfl_adapter_ext *adapter);
+int vcpf_del_queues(struct cpfl_adapter_ext *adapter);
#define CPFL_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/intel/cpfl/cpfl_vchnl.c b/drivers/net/intel/cpfl/cpfl_vchnl.c
index 7d277a0e8e..9c842b60df 100644
--- a/drivers/net/intel/cpfl/cpfl_vchnl.c
+++ b/drivers/net/intel/cpfl/cpfl_vchnl.c
@@ -106,6 +106,106 @@ cpfl_vc_create_ctrl_vport(struct cpfl_adapter_ext *adapter)
return err;
}
+#define VCPF_CFQ_MB_INDEX 0xFF
+int
+vcpf_add_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues add_cfgq;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&add_cfgq, 0, sizeof(struct virtchnl2_add_queues));
+ u16 num_cfgq = 1;
+
+ add_cfgq.num_tx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.num_rx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX;
+
+ add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ add_cfgq.num_tx_complq = 0;
+ add_cfgq.num_rx_bufq = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_ADD_QUEUES;
+ args.in_args = (uint8_t *)&add_cfgq;
+ args.in_args_size = sizeof(add_cfgq);
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_ADD_QUEUES");
+ return err;
+ }
+
+ rte_memcpy(adapter->addq_recv_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+
+ return err;
+}
+
+int
+vcpf_del_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_del_ena_dis_queues *del_cfgq;
+ u16 num_chunks;
+ struct idpf_cmd_info args;
+ int i, err, size;
+
+ num_chunks = adapter->cfgq_in.cfgq_add->chunks.num_chunks;
+ size = idpf_struct_size(del_cfgq, chunks.chunks, (num_chunks - 1));
+ del_cfgq = rte_zmalloc("del_cfgq", size, 0);
+ if (!del_cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_del_ena_dis_queues");
+ err = -ENOMEM;
+ return err;
+ }
+
+ del_cfgq->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ del_cfgq->chunks.num_chunks = num_chunks;
+
+ /* fill config queue chunk data */
+ for (i = 0; i < num_chunks; i++) {
+ del_cfgq->chunks.chunks[i].type =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].type;
+ del_cfgq->chunks.chunks[i].start_queue_id =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].start_queue_id;
+ del_cfgq->chunks.chunks[i].num_queues =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].num_queues;
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DEL_QUEUES;
+ args.in_args = (uint8_t *)del_cfgq;
+ args.in_args_size = idpf_struct_size(del_cfgq, chunks.chunks,
+ (del_cfgq->chunks.num_chunks - 1));
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ rte_free(del_cfgq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_DEL_QUEUES");
+ return err;
+ }
+
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
+ adapter->cfgq_in.num_cfgq = 0;
+ if (adapter->cfgq_in.cfgq_add) {
+ rte_free(adapter->cfgq_in.cfgq_add);
+ adapter->cfgq_in.cfgq_add = NULL;
+ }
+ if (adapter->cfgq_in.cfgq) {
+ rte_free(adapter->cfgq_in.cfgq);
+ adapter->cfgq_in.cfgq = NULL;
+ }
+ return err;
+}
+
int
cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
{
@@ -116,13 +216,16 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_RX_CFGQ_NUM;
+ num_qs = adapter->num_rx_cfgq;
+
size = sizeof(*vc_rxqs) + (num_qs - 1) *
sizeof(struct virtchnl2_rxq_info);
vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
@@ -131,7 +234,12 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_rxqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_rxqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_rxqs->vport_id = vport->base.vport_id;
+
vc_rxqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
@@ -141,7 +249,8 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
rxq_info->queue_id = adapter->cfgq_info[2 * i + 1].id;
rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
rxq_info->data_buffer_size = adapter->cfgq_info[2 * i + 1].buf_size;
- rxq_info->max_pkt_size = vport->base.max_pkt_len;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF)
+ rxq_info->max_pkt_size = vport->base.max_pkt_len;
rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
rxq_info->ring_len = adapter->cfgq_info[2 * i + 1].len;
@@ -172,13 +281,16 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This txq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This txq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_TX_CFGQ_NUM;
+ num_qs = adapter->num_tx_cfgq;
+
size = sizeof(*vc_txqs) + (num_qs - 1) *
sizeof(struct virtchnl2_txq_info);
vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
@@ -187,7 +299,12 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_txqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_txqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_txqs->vport_id = vport->base.vport_id;
+
vc_txqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
diff --git a/drivers/net/intel/idpf/base/idpf_osdep.h b/drivers/net/intel/idpf/base/idpf_osdep.h
index 7b43df3079..47b95d0da6 100644
--- a/drivers/net/intel/idpf/base/idpf_osdep.h
+++ b/drivers/net/intel/idpf/base/idpf_osdep.h
@@ -361,6 +361,9 @@ idpf_hweight32(u32 num)
#endif
+#define idpf_struct_size(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
enum idpf_mac_type {
IDPF_MAC_UNKNOWN = 0,
IDPF_MAC_PF,
diff --git a/drivers/net/intel/idpf/base/virtchnl2.h b/drivers/net/intel/idpf/base/virtchnl2.h
index cf010c0504..6cfb4f56fa 100644
--- a/drivers/net/intel/idpf/base/virtchnl2.h
+++ b/drivers/net/intel/idpf/base/virtchnl2.h
@@ -1024,7 +1024,8 @@ struct virtchnl2_add_queues {
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
- u8 pad[4];
+ u8 mbx_q_index;
+ u8 pad[3];
struct virtchnl2_queue_reg_chunks chunks;
};
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index d536ce7e15..f962a3f805 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -45,6 +45,8 @@
(sizeof(struct virtchnl2_ptype) + \
(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+#define VCPF_CFGQ_VPORT_ID 0xFFFFFFFF
+
struct idpf_adapter {
struct idpf_hw hw;
struct virtchnl2_version_info virtchnl_version;
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..e927d7415a 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -787,6 +787,44 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
return err;
}
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue_vcpf)
+int
+idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (uint8_t *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_vc_cmd_execute(adapter, &args);
+ if (err != 0)
+ DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
int
idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.h b/drivers/net/intel/idpf/idpf_common_virtchnl.h
index 68cba9111c..90fce65676 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.h
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.h
@@ -76,6 +76,9 @@ __rte_internal
int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
uint32_t type, bool on);
__rte_internal
+int idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on);
+__rte_internal
int idpf_vc_queue_grps_del(struct idpf_vport *vport,
uint16_t num_q_grps,
struct virtchnl2_queue_group_id *qg_ids);
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH 4/4] net/cpfl: add cpchnl get vport info support
2025-09-22 9:48 [PATCH 0/4] add vcpf pmd support Shetty, Praveen
` (2 preceding siblings ...)
2025-09-22 9:48 ` [PATCH 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-22 9:48 ` Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 9:48 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Shukla, Dhananjay, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
The vCPF only receives relative queue ids from the FW.
The CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used
to retrieve the absolute Rx/Tx queue ids and the VSI of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO
cpchnl message from the vCPF PMD.
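With the vport info cached, mapping a relative queue id to its absolute
counterpart is an offset from the chunk start returned by the CP. A
hypothetical helper illustrating the arithmetic (not part of this patch;
vport_info is the struct vcpf_vport_info added below):

    /* Hypothetical: translate a relative Rx queue id into the absolute
     * id programmed in hardware, using the cached CPCHNL2 vport info.
     */
    static inline uint32_t
    vcpf_rel_to_abs_rxq(struct cpfl_vport *vport, uint16_t rel_qid)
    {
            return vport->vport_info.abs_start_rxq_id + rel_qid;
    }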
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Shukla, Dhananjay <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_cpchnl.h | 7 +--
drivers/net/intel/cpfl/cpfl_ethdev.c | 63 +++++++++++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 70 +++++++++++++++++++++-------
3 files changed, 119 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_cpchnl.h b/drivers/net/intel/cpfl/cpfl_cpchnl.h
index 0c9dfcdbf1..7b01468a83 100644
--- a/drivers/net/intel/cpfl/cpfl_cpchnl.h
+++ b/drivers/net/intel/cpfl/cpfl_cpchnl.h
@@ -133,11 +133,8 @@ CPCHNL2_CHECK_STRUCT_LEN(3792, cpchnl2_queue_groups);
* @brief function types
*/
enum cpchnl2_func_type {
- CPCHNL2_FTYPE_LAN_VF = 0x0,
- CPCHNL2_FTYPE_LAN_RSV1 = 0x1,
- CPCHNL2_FTYPE_LAN_PF = 0x2,
- CPCHNL2_FTYPE_LAN_RSV2 = 0x3,
- CPCHNL2_FTYPE_LAN_MAX
+ CPCHNL2_FTYPE_LAN_PF = 0,
+ CPCHNL2_FTYPE_LAN_VF = 1,
};
/**
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 4dfdf3133f..a1490f6b2c 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1902,6 +1902,43 @@ cpfl_dev_alarm_handler(void *param)
rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
}
+static
+int vcpf_save_vport_info_response(struct cpfl_vport *cpfl_vport,
+ struct cpchnl2_get_vport_info_response *response)
+{
+ struct cpchnl2_vport_info *info;
+ struct vcpf_vport_info *vport_info;
+ struct cpchnl2_queue_group_info *qgp;
+ struct cpchnl2_queue_chunk *q_chnk;
+ u16 num_queue_groups;
+ u16 num_chunks;
+ u32 q_type;
+
+ info = &response->info;
+ vport_info = &cpfl_vport->vport_info;
+ vport_info->vport_index = info->vport_index;
+ vport_info->vsi_id = info->vsi_id;
+
+ num_queue_groups = response->queue_groups.num_queue_groups;
+ for (u16 i = 0; i < num_queue_groups; i++) {
+ qgp = &response->queue_groups.groups[i];
+ num_chunks = qgp->chunks.num_chunks;
+ /* rx q and tx q are stored in first 2 chunks */
+ for (u16 j = 0; j < (num_chunks - 2); j++) {
+ q_chnk = &qgp->chunks.chunks[j];
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport_info->abs_start_txq_id = q_chnk->start_queue_id;
+ vport_info->num_tx_q = q_chnk->num_queues;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport_info->abs_start_rxq_id = q_chnk->start_queue_id;
+ vport_info->num_rx_q = q_chnk->num_queues;
+ }
+ }
+ }
+ return 0;
+}
+
static int
cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
@@ -2720,7 +2757,11 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
/* for sending create vport virtchnl msg prepare */
struct virtchnl2_create_vport create_vport_info;
struct virtchnl2_add_queue_groups p2p_queue_grps_info;
+ struct cpchnl2_get_vport_info_response response;
uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
+ struct cpfl_vport_id vi;
+ struct cpchnl2_vport_id v_id;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
int ret = 0;
dev->dev_ops = &cpfl_eth_dev_ops;
@@ -2790,6 +2831,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
cpfl_p2p_queue_grps_del(vport);
}
}
+ /* get the vport info */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+ vi.func_type = CPCHNL2_FTYPE_LAN_VF;
+ vi.pf_id = CPFL_HOST0_CPF_ID;
+ vi.vf_id = pci_dev->addr.function;
+
+ v_id.vport_id = cpfl_vport->base.vport_info.info.vport_id;
+ v_id.vport_type = cpfl_vport->base.vport_info.info.vport_type;
+
+ ret = cpfl_cc_vport_info_get(adapter, &v_id, &vi, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to send vport info cpchnl message.");
+ return -1;
+ }
+
+ ret = vcpf_save_vport_info_response(cpfl_vport, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to save cpchnl response.");
+ return -1;
+ }
+ }
return 0;
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 81f223eef5..7f5944e2bc 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -165,10 +165,20 @@ struct cpfl_itf {
void *data;
};
+struct vcpf_vport_info {
+ u16 vport_index;
+ u16 vsi_id;
+ u32 abs_start_txq_id;
+ u32 num_tx_q;
+ u32 abs_start_rxq_id;
+ u32 num_rx_q;
+};
+
struct cpfl_vport {
struct cpfl_itf itf;
struct idpf_vport base;
struct p2p_queue_chunks_info *p2p_q_chunks_info;
+ struct vcpf_vport_info vport_info;
struct rte_mempool *p2p_mp;
@@ -320,6 +330,7 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
uint32_t vport_id;
int ret;
struct cpfl_vport_id vport_identity;
+ u16 vsi_id;
if (!itf)
return CPFL_INVALID_HW_ID;
@@ -329,24 +340,30 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
return repr->vport_info->vport.info.vsi_id;
} else if (itf->type == CPFL_ITF_TYPE_VPORT) {
- vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
-
- vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
- /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
- vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
- CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
- vport_identity.vf_id = 0;
- vport_identity.vport_id = vport_id;
- ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
- &vport_identity,
- (void **)&info);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "vport id not exist");
- goto err;
+ if (itf->adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
+
+ vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
+ /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
+ vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
+ CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
+ vport_identity.vf_id = 0;
+ vport_identity.vport_id = vport_id;
+ ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
+ &vport_identity,
+ (void **)&info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "vport id not exist");
+ goto err;
+ }
+
+ vsi_id = info->vport.info.vsi_id;
+ } else {
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vsi_id = (uint16_t)((struct cpfl_vport *)itf)->vport_info.vsi_id;
}
-
- return info->vport.info.vsi_id;
}
+ return vsi_id;
err:
return CPFL_INVALID_HW_ID;
@@ -375,4 +392,25 @@ cpfl_get_itf_by_port_id(uint16_t port_id)
return CPFL_DEV_TO_ITF(dev);
}
+
+static inline uint32_t
+vcpf_get_abs_qid(uint16_t port_id, uint32_t queue_type)
+{
+ struct cpfl_itf *itf = cpfl_get_itf_by_port_id(port_id);
+ struct cpfl_vport *vport;
+ if (!itf)
+ return CPFL_INVALID_HW_ID;
+ if (itf->type == CPFL_ITF_TYPE_VPORT) {
+ vport = (void *)itf;
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ switch (queue_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ return vport->vport_info.abs_start_txq_id;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ return vport->vport_info.abs_start_rxq_id;
+ }
+ }
+ }
+ return 0;
+}
#endif /* _CPFL_ETHDEV_H_ */
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 0/4] add vcpf pmd support
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-22 14:10 ` Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 1/4] net/intel: add vCPF PMD support Shetty, Praveen
` (3 more replies)
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
2 siblings, 4 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 14:10 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev
Virtual Control Plane Function (vCPF) is an SR-IOV Virtual Function of
the CPF (PF) device. vCPF is used to support multiple control plane functions.
This patchset extends the CPFL PMD to support the new vCPF device.
In this implementation, the CPFL and vCPF devices share most of the
initialization routine and the common data path implementation, which
eliminates code duplication and improves the maintainability of the driver code.
---
v2:
- fixed test case failure
---
Praveen Shetty (4):
net/intel: add vCPF PMD support
net/idpf: add splitq jumbo packet handling
net/intel: add config queue support to vCPF
net/cpfl: add cpchnl get vport info support
drivers/net/intel/cpfl/cpfl_cpchnl.h | 7 +-
drivers/net/intel/cpfl/cpfl_ethdev.c | 354 ++++++++++++++++--
drivers/net/intel/cpfl/cpfl_ethdev.h | 109 +++++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.c | 4 +-
drivers/net/intel/idpf/idpf_common_device.h | 3 +
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++-
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 ++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
11 files changed, 629 insertions(+), 88 deletions(-)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 1/4] net/intel: add vCPF PMD support
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-22 14:10 ` Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
` (2 subsequent siblings)
3 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 14:10 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Atul Patel, Dhananjay Shukla
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds registration support for the new vCPF PMD.
The vCPF PMD is responsible for enabling control path and data path
functionality for CPF VF devices.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 1 +
drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
drivers/net/intel/idpf/idpf_common_device.h | 1 +
4 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6d7b23ad7a..d6227c99b5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
switch (mbx_op) {
case idpf_mbq_opc_send_msg_to_peer_pf:
+ case idpf_mbq_opc_send_msg_to_peer_drv:
if (vc_op == VIRTCHNL2_OP_EVENT) {
cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
ctlq_msg.data_len);
@@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_pci_id pci_id_vcpf_map[] = {
+ { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
static struct cpfl_adapter_ext *
cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
{
@@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
.remove = cpfl_pci_remove,
};
+static struct rte_pci_driver rte_vcpf_pmd = {
+ .id_table = pci_id_vcpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = cpfl_pci_probe,
+ .remove = cpfl_pci_remove,
+};
+
/**
* Driver initialization routine.
* Invoked once at EAL init time.
@@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "* igb_uio | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
CPFL_TX_SINGLE_Q "=<0|1> "
CPFL_RX_SINGLE_Q "=<0|1> "
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index d4e1176ab1..2cfcdd6206 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -60,6 +60,7 @@
/* Device IDs */
#define IDPF_DEV_ID_CPF 0x1453
+#define IXD_DEV_ID_VCPF 0x1203
#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100
#define CPFL_HOST_ID_NUM 2
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..8c637a2fb6 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
struct idpf_ctlq_info *ctlq;
int ret = 0;
- if (hw->device_id == IDPF_DEV_ID_SRIOV)
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
else
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
@@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
struct idpf_hw *hw = &adapter->hw;
int ret;
- if (hw->device_id == IDPF_DEV_ID_SRIOV) {
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
ret = idpf_check_vf_reset_done(hw);
} else {
idpf_reset_pf(hw);
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 5f3e4a4fcf..d536ce7e15 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -11,6 +11,7 @@
#include "idpf_common_logs.h"
#define IDPF_DEV_ID_SRIOV 0x145C
+#define IXD_DEV_ID_VCPF 0x1203
#define IDPF_RSS_KEY_LEN 52
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 2/4] net/idpf: add splitq jumbo packet handling
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-22 14:10 ` Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 14:10 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds jumbo (multi-segment) packet handling to the
idpf_dp_splitq_recv_pkts function.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++++++++++++++++++-----
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..412aff8f5f 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
- struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *rxq = rx_queue;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
struct idpf_adapter *ad;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pktlen_gen_bufq_id =
rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
gen_id = (pktlen_gen_bufq_id &
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
@@ -697,16 +700,39 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = pkt_len;
rxm->data_len = pkt_len;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /*
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len =
+ (uint16_t)(first_seg->pkt_len +
+ pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
rxm->next = NULL;
- rxm->nb_segs = 1;
- rxm->port = rxq->port_id;
- rxm->ol_flags = 0;
- rxm->packet_type =
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+ first_seg->packet_type =
ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
- status_err0_qw1 = rx_desc->status_err0_qw1;
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
if (idpf_timestamp_dynflag > 0 &&
@@ -719,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*RTE_MBUF_DYNFIELD(rxm,
idpf_timestamp_dynfield_offset,
rte_mbuf_timestamp_t *) = ts_ns;
- rxm->ol_flags |= idpf_timestamp_dynflag;
+ first_seg->ol_flags |= idpf_timestamp_dynflag;
}
- rxm->ol_flags |= pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
- rx_pkts[nb_rx++] = rxm;
+ rx_pkts[nb_rx++] = first_seg;
+
+ first_seg = NULL;
}
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
rxq->bufq1->rx_next_avail = rx_id_bufq1;
if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 3/4] net/intel: add config queue support to vCPF
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-22 14:10 ` Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 14:10 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
A "configuration queue" is a software term to denote
a hardware mailbox queue dedicated to NSS programming.
While the hardware does not have a construct of a
"configuration queue", software does to state clearly
the distinction between a queue software dedicates to
regular mailbox processing (e.g. CPChnl or Virtchnl)
and a queue software dedicates to NSS programming
(e.g. SEM/LEM rule programming).
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 274 +++++++++++++++---
drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.h | 2 +
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
8 files changed, 449 insertions(+), 55 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index d6227c99b5..c411a2a024 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -29,6 +29,9 @@
#define CPFL_FLOW_PARSER "flow_parser"
#endif
+#define VCPF_FID 0
+#define CPFL_FID 6
+
rte_spinlock_t cpfl_adapter_lock;
/* A list for all adapters, one adapter matches one PCI device */
struct cpfl_adapter_list cpfl_adapter_list;
@@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
}
/* ignore if it is ctrl vport */
- if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
+ adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
return;
vport = cpfl_find_vport(adapter, vc_event->vport_id);
@@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
int i, ret;
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
return ret;
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
- VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
return ret;
@@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
}
return 0;
+
}
static int
@@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
@@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
@@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
- if (adapter->ctlqp[i])
+ for (i = 0; i < adapter->num_cfgq; i++) {
+ if (adapter->ctlqp[i]) {
cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
+ adapter->ctlqp[i] = NULL;
+ }
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->ctlqp) {
+ rte_free(adapter->ctlqp);
+ adapter->ctlqp = NULL;
+ }
}
static int
@@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
int ret = 0;
int i = 0;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
+ sizeof(struct idpf_ctlq_info *),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->ctlqp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->num_cfgq; i++) {
cfg_cq = NULL;
ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
&adapter->cfgq_info[i],
@@ -2007,6 +2049,62 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
+static
+int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)adapter->addq_recv_info;
+ struct vcpf_cfg_queue *cfgq;
+ struct virtchnl2_queue_reg_chunk *q_chnk;
+ u16 rx, tx, num_chunks, num_q, struct_size;
+ u32 q_id, q_type;
+
+ rx = 0; tx = 0;
+
+ cfgq = rte_zmalloc("cfgq", adapter->num_cfgq *
+ sizeof(struct vcpf_cfg_queue),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
+ return -ENOMEM;
+ }
+
+ struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
+ adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
+ rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
+
+ num_chunks = add_q->chunks.num_chunks;
+ for (u16 i = 0; i < num_chunks; i++) {
+ num_q = add_q->chunks.chunks[i].num_queues;
+ q_chnk = &add_q->chunks.chunks[i];
+ for (u16 j = 0; j < num_q; j++) {
+ if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
+ break;
+ q_id = q_chnk->start_queue_id + j;
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
+ cfgq[0].qid = q_id;
+ cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
+ tx++;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
+ cfgq[1].qid = q_id;
+ cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
+ rx++;
+ }
+ }
+ }
+
+ adapter->cfgq_in.cfgq = cfgq;
+ adapter->cfgq_in.num_cfgq = adapter->num_cfgq;
+
+ return 0;
+}
+
#define CPFL_CFGQ_RING_LEN 512
#define CPFL_CFGQ_DESCRIPTOR_SIZE 32
#define CPFL_CFGQ_BUFFER_SIZE 256
@@ -2017,32 +2115,71 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
{
struct cpfl_ctlq_create_info *create_cfgq_info;
struct cpfl_vport *vport;
+ struct vcpf_cfgq_info *cfgq_info = &adapter->cfgq_in;
int i, err;
uint32_t ring_size = CPFL_CFGQ_RING_SIZE * sizeof(struct idpf_ctlq_desc);
uint32_t buf_size = CPFL_CFGQ_RING_SIZE * CPFL_CFGQ_BUFFER_SIZE;
+ uint64_t tx_qtail_start;
+ uint64_t rx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint32_t rx_qtail_spacing;
vport = &adapter->ctrl_vport;
+
+ tx_qtail_start = vport->base.chunks_info.tx_qtail_start;
+ tx_qtail_spacing = vport->base.chunks_info.tx_qtail_spacing;
+ rx_qtail_start = vport->base.chunks_info.rx_qtail_start;
+ rx_qtail_spacing = vport->base.chunks_info.rx_qtail_spacing;
+
+ adapter->cfgq_info = rte_zmalloc("cfgq_info", adapter->num_cfgq *
+ sizeof(struct cpfl_ctlq_create_info),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->cfgq_info) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_info");
+ return -ENOMEM;
+ }
+
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (i % 2 == 0) {
- /* Setup Tx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid + i / 2;
+ /* Setup Tx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_TX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.tx_qtail_start +
- i / 2 * vport->base.chunks_info.tx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = tx_qtail_start +
+ i / 2 * tx_qtail_spacing;
+
} else {
- /* Setup Rx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid + i / 2;
+ /* Setup Rx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_RX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.rx_qtail_start +
- i / 2 * vport->base.chunks_info.rx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = rx_qtail_start +
+ i / 2 * rx_qtail_spacing;
+
+
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem,
buf_size)) {
err = -ENOMEM;
@@ -2050,19 +2187,24 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
}
}
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem,
- ring_size)) {
+ ring_size)) {
err = -ENOMEM;
goto free_mem;
}
}
+
return 0;
free_mem:
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
return err;
}
@@ -2107,7 +2249,10 @@ cpfl_ctrl_path_close(struct cpfl_adapter_ext *adapter)
{
cpfl_stop_cfgqs(adapter);
cpfl_remove_cfgqs(adapter);
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ else
+ vcpf_del_queues(adapter);
}
static int
@@ -2115,22 +2260,39 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
{
int ret;
- ret = cpfl_vc_create_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to create control vport");
- return ret;
- }
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ ret = cpfl_vc_create_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create control vport");
+ return ret;
+ }
- ret = cpfl_init_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to init control vport");
- goto err_init_ctrl_vport;
+ ret = cpfl_init_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init control vport");
+ goto err_init_ctrl_vport;
+ }
+ } else {
+ ret = vcpf_add_queues(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to add queues");
+ return ret;
+ }
+
+ ret = vcpf_save_chunk_in_cfgq(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to save config queue chunk");
+ return ret;
+ }
}
ret = cpfl_cfgq_setup(adapter);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to setup control queues");
- goto err_cfgq_setup;
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ goto err_cfgq_setup;
+ else
+ goto err_del_cfg;
}
ret = cpfl_add_cfgqs(adapter);
@@ -2153,9 +2315,13 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
cpfl_remove_cfgqs(adapter);
err_cfgq_setup:
err_init_ctrl_vport:
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+err_del_cfg:
+ vcpf_del_queues(adapter);
return ret;
+
}
static struct virtchnl2_get_capabilities req_caps = {
@@ -2291,12 +2457,29 @@ get_running_host_id(void)
return host_id;
}
+static uint8_t
+set_config_queue_details(struct cpfl_adapter_ext *adapter, struct rte_pci_addr *pci_addr)
+{
+ if (pci_addr->function == CPFL_FID) {
+ adapter->num_cfgq = CPFL_CFGQ_NUM;
+ adapter->num_rx_cfgq = CPFL_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = CPFL_TX_CFGQ_NUM;
+ } else if (pci_addr->function == VCPF_FID) {
+ adapter->num_cfgq = VCPF_CFGQ_NUM;
+ adapter->num_rx_cfgq = VCPF_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = VCPF_TX_CFGQ_NUM;
+ }
+
+ return 0;
+}
+
static int
cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
struct cpfl_devargs *devargs)
{
struct idpf_adapter *base = &adapter->base;
struct idpf_hw *hw = &base->hw;
+ struct rte_pci_addr *pci_addr = &pci_dev->addr;
int ret = 0;
#ifndef RTE_HAS_JANSSON
@@ -2348,10 +2531,23 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
goto err_vports_alloc;
}
- ret = cpfl_ctrl_path_open(adapter);
+ /* set the number of config queues to be requested */
+ ret = set_config_queue_details(adapter, pci_addr);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to setup control path");
- goto err_create_ctrl_vport;
+ PMD_INIT_LOG(ERR, "Failed to set the config queue details");
+ return -1;
+ }
+
+ if (pci_addr->function == VCPF_FID || pci_addr->function == CPFL_FID) {
+ ret = cpfl_ctrl_path_open(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup control path");
+ if (pci_addr->function == CPFL_FID)
+ goto err_create_ctrl_vport;
+ else
+ return ret;
+ }
+
}
#ifdef RTE_HAS_JANSSON
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 2cfcdd6206..81f223eef5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -90,6 +90,9 @@
#define CPFL_FPCP_CFGQ_TX 0
#define CPFL_FPCP_CFGQ_RX 1
#define CPFL_CFGQ_NUM 8
+#define VCPF_RX_CFGQ_NUM 1
+#define VCPF_TX_CFGQ_NUM 1
+#define VCPF_CFGQ_NUM 2
/* bit[15:14] type
* bit[13] host/accelerator core
@@ -201,6 +204,30 @@ struct cpfl_metadata {
struct cpfl_metadata_chunk chunks[CPFL_META_LENGTH];
};
+/**
+ * struct vcpf_cfg_queue - config queue information
+ * @qid: rx/tx queue id
+ * @qtail_reg_start: rx/tx tail queue register start
+ * @qtail_reg_spacing: rx/tx tail queue register spacing
+ */
+struct vcpf_cfg_queue {
+ u32 qid;
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+};
+
+/**
+ * struct vcpf_cfgq_info - config queue information
+ * @num_cfgq: number of config queues
+ * @cfgq_add: config queue add information
+ * @cfgq: config queue information
+ */
+struct vcpf_cfgq_info {
+ u16 num_cfgq;
+ struct virtchnl2_add_queues *cfgq_add;
+ struct vcpf_cfg_queue *cfgq;
+};
+
struct cpfl_adapter_ext {
TAILQ_ENTRY(cpfl_adapter_ext) next;
struct idpf_adapter base;
@@ -230,8 +257,13 @@ struct cpfl_adapter_ext {
/* ctrl vport and ctrl queues. */
struct cpfl_vport ctrl_vport;
uint8_t ctrl_vport_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
- struct idpf_ctlq_info *ctlqp[CPFL_CFGQ_NUM];
- struct cpfl_ctlq_create_info cfgq_info[CPFL_CFGQ_NUM];
+ struct idpf_ctlq_info **ctlqp;
+ struct cpfl_ctlq_create_info *cfgq_info;
+ struct vcpf_cfgq_info cfgq_in;
+ uint8_t addq_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
+ uint16_t num_cfgq;
+ uint16_t num_rx_cfgq;
+ uint16_t num_tx_cfgq;
uint8_t host_id;
};
@@ -252,6 +284,8 @@ int cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter);
int cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter);
int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
uint32_t size, int batch_size);
+int vcpf_add_queues(struct cpfl_adapter_ext *adapter);
+int vcpf_del_queues(struct cpfl_adapter_ext *adapter);
#define CPFL_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/intel/cpfl/cpfl_vchnl.c b/drivers/net/intel/cpfl/cpfl_vchnl.c
index 7d277a0e8e..9c842b60df 100644
--- a/drivers/net/intel/cpfl/cpfl_vchnl.c
+++ b/drivers/net/intel/cpfl/cpfl_vchnl.c
@@ -106,6 +106,106 @@ cpfl_vc_create_ctrl_vport(struct cpfl_adapter_ext *adapter)
return err;
}
+#define VCPF_CFQ_MB_INDEX 0xFF
+int
+vcpf_add_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues add_cfgq;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&add_cfgq, 0, sizeof(struct virtchnl2_add_queues));
+ u16 num_cfgq = 1;
+
+ add_cfgq.num_tx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.num_rx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX;
+
+ add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ add_cfgq.num_tx_complq = 0;
+ add_cfgq.num_rx_bufq = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_ADD_QUEUES;
+ args.in_args = (uint8_t *)&add_cfgq;
+ args.in_args_size = sizeof(add_cfgq);
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_ADD_QUEUES");
+ return err;
+ }
+
+ rte_memcpy(adapter->addq_recv_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+
+ return err;
+}
+
+int
+vcpf_del_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_del_ena_dis_queues *del_cfgq;
+ u16 num_chunks;
+ struct idpf_cmd_info args;
+ int i, err, size;
+
+ num_chunks = adapter->cfgq_in.cfgq_add->chunks.num_chunks;
+ size = idpf_struct_size(del_cfgq, chunks.chunks, (num_chunks - 1));
+ del_cfgq = rte_zmalloc("del_cfgq", size, 0);
+ if (!del_cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_del_ena_dis_queues");
+ err = -ENOMEM;
+ return err;
+ }
+
+ del_cfgq->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ del_cfgq->chunks.num_chunks = num_chunks;
+
+ /* fill config queue chunk data */
+ for (i = 0; i < num_chunks; i++) {
+ del_cfgq->chunks.chunks[i].type =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].type;
+ del_cfgq->chunks.chunks[i].start_queue_id =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].start_queue_id;
+ del_cfgq->chunks.chunks[i].num_queues =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].num_queues;
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DEL_QUEUES;
+ args.in_args = (uint8_t *)del_cfgq;
+ args.in_args_size = idpf_struct_size(del_cfgq, chunks.chunks,
+ (del_cfgq->chunks.num_chunks - 1));
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ rte_free(del_cfgq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_DEL_QUEUES");
+ return err;
+ }
+
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
+ adapter->cfgq_in.num_cfgq = 0;
+ if (adapter->cfgq_in.cfgq_add) {
+ rte_free(adapter->cfgq_in.cfgq_add);
+ adapter->cfgq_in.cfgq_add = NULL;
+ }
+ if (adapter->cfgq_in.cfgq) {
+ rte_free(adapter->cfgq_in.cfgq);
+ adapter->cfgq_in.cfgq = NULL;
+ }
+ return err;
+}
+
int
cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
{
@@ -116,13 +216,16 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_RX_CFGQ_NUM;
+ num_qs = adapter->num_rx_cfgq;
+
size = sizeof(*vc_rxqs) + (num_qs - 1) *
sizeof(struct virtchnl2_rxq_info);
vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
@@ -131,7 +234,12 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_rxqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_rxqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_rxqs->vport_id = vport->base.vport_id;
+
vc_rxqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
@@ -141,7 +249,8 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
rxq_info->queue_id = adapter->cfgq_info[2 * i + 1].id;
rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
rxq_info->data_buffer_size = adapter->cfgq_info[2 * i + 1].buf_size;
- rxq_info->max_pkt_size = vport->base.max_pkt_len;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF)
+ rxq_info->max_pkt_size = vport->base.max_pkt_len;
rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
rxq_info->ring_len = adapter->cfgq_info[2 * i + 1].len;
@@ -172,13 +281,16 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This txq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This txq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_TX_CFGQ_NUM;
+ num_qs = adapter->num_tx_cfgq;
+
size = sizeof(*vc_txqs) + (num_qs - 1) *
sizeof(struct virtchnl2_txq_info);
vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
@@ -187,7 +299,12 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_txqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_txqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_txqs->vport_id = vport->base.vport_id;
+
vc_txqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
diff --git a/drivers/net/intel/idpf/base/idpf_osdep.h b/drivers/net/intel/idpf/base/idpf_osdep.h
index 7b43df3079..47b95d0da6 100644
--- a/drivers/net/intel/idpf/base/idpf_osdep.h
+++ b/drivers/net/intel/idpf/base/idpf_osdep.h
@@ -361,6 +361,9 @@ idpf_hweight32(u32 num)
#endif
+#define idpf_struct_size(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
enum idpf_mac_type {
IDPF_MAC_UNKNOWN = 0,
IDPF_MAC_PF,
diff --git a/drivers/net/intel/idpf/base/virtchnl2.h b/drivers/net/intel/idpf/base/virtchnl2.h
index cf010c0504..6cfb4f56fa 100644
--- a/drivers/net/intel/idpf/base/virtchnl2.h
+++ b/drivers/net/intel/idpf/base/virtchnl2.h
@@ -1024,7 +1024,8 @@ struct virtchnl2_add_queues {
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
- u8 pad[4];
+ u8 mbx_q_index;
+ u8 pad[3];
struct virtchnl2_queue_reg_chunks chunks;
};
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index d536ce7e15..f962a3f805 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -45,6 +45,8 @@
(sizeof(struct virtchnl2_ptype) + \
(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+#define VCPF_CFGQ_VPORT_ID 0xFFFFFFFF
+
struct idpf_adapter {
struct idpf_hw hw;
struct virtchnl2_version_info virtchnl_version;
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..e927d7415a 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -787,6 +787,44 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
return err;
}
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue_vcpf)
+int
+idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (uint8_t *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_vc_cmd_execute(adapter, &args);
+ if (err != 0)
+ DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
int
idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.h b/drivers/net/intel/idpf/idpf_common_virtchnl.h
index 68cba9111c..90fce65676 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.h
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.h
@@ -76,6 +76,9 @@ __rte_internal
int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
uint32_t type, bool on);
__rte_internal
+int idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on);
+__rte_internal
int idpf_vc_queue_grps_del(struct idpf_vport *vport,
uint16_t num_q_grps,
struct virtchnl2_queue_group_id *qg_ids);
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 4/4] net/cpfl: add cpchnl get vport info support
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
` (2 preceding siblings ...)
2025-09-22 14:10 ` [PATCH v2 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-22 14:10 ` Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-22 14:10 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
vCPF only receives relative queue ids from the FW.
The CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used
to get the absolute rx/tx queue ids and the VSI of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO
cpchnl message from the vCPF PMD.
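To make the purpose concrete, here is a minimal sketch (toy struct and
hypothetical ids, not driver code) of how the saved absolute base ids
translate the relative queue ids the FW gives vCPF:
#include <stdint.h>
#include <stdio.h>
/* Toy copy of the id fields this patch saves in struct vcpf_vport_info. */
struct vport_q_info {
	uint32_t abs_start_txq_id;
	uint32_t abs_start_rxq_id;
};
static uint32_t abs_txq_id(const struct vport_q_info *vi, uint32_t rel_qid)
{
	/* FW only gives vCPF the relative id; add the absolute base. */
	return vi->abs_start_txq_id + rel_qid;
}
int main(void)
{
	struct vport_q_info vi = {
		.abs_start_txq_id = 1024,	/* hypothetical values */
		.abs_start_rxq_id = 2048,
	};
	printf("relative txq 0 -> absolute txq %u\n", abs_txq_id(&vi, 0));
	printf("relative txq 3 -> absolute txq %u\n", abs_txq_id(&vi, 3));
	return 0;
}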
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: Atul Patel <Atul.Patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_cpchnl.h | 7 +--
drivers/net/intel/cpfl/cpfl_ethdev.c | 63 +++++++++++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 70 +++++++++++++++++++++-------
3 files changed, 119 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_cpchnl.h b/drivers/net/intel/cpfl/cpfl_cpchnl.h
index 0c9dfcdbf1..7b01468a83 100644
--- a/drivers/net/intel/cpfl/cpfl_cpchnl.h
+++ b/drivers/net/intel/cpfl/cpfl_cpchnl.h
@@ -133,11 +133,8 @@ CPCHNL2_CHECK_STRUCT_LEN(3792, cpchnl2_queue_groups);
* @brief function types
*/
enum cpchnl2_func_type {
- CPCHNL2_FTYPE_LAN_VF = 0x0,
- CPCHNL2_FTYPE_LAN_RSV1 = 0x1,
- CPCHNL2_FTYPE_LAN_PF = 0x2,
- CPCHNL2_FTYPE_LAN_RSV2 = 0x3,
- CPCHNL2_FTYPE_LAN_MAX
+ CPCHNL2_FTYPE_LAN_PF = 0,
+ CPCHNL2_FTYPE_LAN_VF = 1,
};
/**
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index c411a2a024..fa783b33e7 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1902,6 +1902,43 @@ cpfl_dev_alarm_handler(void *param)
rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
}
+static
+int vcpf_save_vport_info_response(struct cpfl_vport *cpfl_vport,
+ struct cpchnl2_get_vport_info_response *response)
+{
+ struct cpchnl2_vport_info *info;
+ struct vcpf_vport_info *vport_info;
+ struct cpchnl2_queue_group_info *qgp;
+ struct cpchnl2_queue_chunk *q_chnk;
+ u16 num_queue_groups;
+ u16 num_chunks;
+ u32 q_type;
+
+ info = &response->info;
+ vport_info = &cpfl_vport->vport_info;
+ vport_info->vport_index = info->vport_index;
+ vport_info->vsi_id = info->vsi_id;
+
+ num_queue_groups = response->queue_groups.num_queue_groups;
+ for (u16 i = 0; i < num_queue_groups; i++) {
+ qgp = &response->queue_groups.groups[i];
+ num_chunks = qgp->chunks.num_chunks;
+ /* rx q and tx q are stored in first 2 chunks */
+ for (u16 j = 0; j < (num_chunks - 2); j++) {
+ q_chnk = &qgp->chunks.chunks[j];
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport_info->abs_start_txq_id = q_chnk->start_queue_id;
+ vport_info->num_tx_q = q_chnk->num_queues;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport_info->abs_start_rxq_id = q_chnk->start_queue_id;
+ vport_info->num_rx_q = q_chnk->num_queues;
+ }
+ }
+ }
+ return 0;
+}
+
static int
cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
@@ -2720,7 +2757,11 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
/* for sending create vport virtchnl msg prepare */
struct virtchnl2_create_vport create_vport_info;
struct virtchnl2_add_queue_groups p2p_queue_grps_info;
+ struct cpchnl2_get_vport_info_response response;
uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
+ struct cpfl_vport_id vi;
+ struct cpchnl2_vport_id v_id;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
int ret = 0;
dev->dev_ops = &cpfl_eth_dev_ops;
@@ -2790,6 +2831,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
cpfl_p2p_queue_grps_del(vport);
}
}
+ /* get the vport info */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+ vi.func_type = CPCHNL2_FTYPE_LAN_VF;
+ vi.pf_id = CPFL_HOST0_CPF_ID;
+ vi.vf_id = pci_dev->addr.function;
+
+ v_id.vport_id = cpfl_vport->base.vport_info.info.vport_id;
+ v_id.vport_type = cpfl_vport->base.vport_info.info.vport_type;
+
+ ret = cpfl_cc_vport_info_get(adapter, &v_id, &vi, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to send vport info cpchnl message.");
+ return -1;
+ }
+
+ ret = vcpf_save_vport_info_response(cpfl_vport, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to save cpchnl response.");
+ return -1;
+ }
+ }
return 0;
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 81f223eef5..90b9e05819 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -165,10 +165,20 @@ struct cpfl_itf {
void *data;
};
+struct vcpf_vport_info {
+ u16 vport_index;
+ u16 vsi_id;
+ u32 abs_start_txq_id;
+ u32 num_tx_q;
+ u32 abs_start_rxq_id;
+ u32 num_rx_q;
+};
+
struct cpfl_vport {
struct cpfl_itf itf;
struct idpf_vport base;
struct p2p_queue_chunks_info *p2p_q_chunks_info;
+ struct vcpf_vport_info vport_info;
struct rte_mempool *p2p_mp;
@@ -320,6 +330,7 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
uint32_t vport_id;
int ret;
struct cpfl_vport_id vport_identity;
+ u16 vsi_id = 0;
if (!itf)
return CPFL_INVALID_HW_ID;
@@ -329,24 +340,30 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
return repr->vport_info->vport.info.vsi_id;
} else if (itf->type == CPFL_ITF_TYPE_VPORT) {
- vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
-
- vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
- /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
- vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
- CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
- vport_identity.vf_id = 0;
- vport_identity.vport_id = vport_id;
- ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
- &vport_identity,
- (void **)&info);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "vport id not exist");
- goto err;
+ if (itf->adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
+
+ vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
+ /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
+ vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
+ CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
+ vport_identity.vf_id = 0;
+ vport_identity.vport_id = vport_id;
+ ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
+ &vport_identity,
+ (void **)&info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "vport id not exist");
+ goto err;
+ }
+
+ vsi_id = info->vport.info.vsi_id;
+ } else {
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vsi_id = (uint16_t)((struct cpfl_vport *)itf)->vport_info.vsi_id;
}
-
- return info->vport.info.vsi_id;
}
+ return vsi_id;
err:
return CPFL_INVALID_HW_ID;
@@ -375,4 +392,25 @@ cpfl_get_itf_by_port_id(uint16_t port_id)
return CPFL_DEV_TO_ITF(dev);
}
+
+static inline uint32_t
+vcpf_get_abs_qid(uint16_t port_id, uint32_t queue_type)
+{
+ struct cpfl_itf *itf = cpfl_get_itf_by_port_id(port_id);
+ struct cpfl_vport *vport;
+ if (!itf)
+ return CPFL_INVALID_HW_ID;
+ if (itf->type == CPFL_ITF_TYPE_VPORT) {
+ vport = (void *)itf;
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ switch (queue_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ return vport->vport_info.abs_start_txq_id;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ return vport->vport_info.abs_start_rxq_id;
+ }
+ }
+ }
+ return 0;
+}
#endif /* _CPFL_ETHDEV_H_ */
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v3 0/4] add vcpf pmd support
2025-09-22 14:10 ` [PATCH v2 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-23 12:54 ` Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 1/4] net/intel: add vCPF PMD support Shetty, Praveen
` (3 more replies)
0 siblings, 4 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-23 12:54 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev
Virtual Control Plane Function (vCPF) is an SR-IOV Virtual Function of
the CPF (PF) device. vCPF is used to support multiple control plane
functions.
This patchset extends the CPFL PMD to support the new vCPF device.
In this implementation, the CPFL and vCPF devices share most of the
initialization routine and the common data path implementation, which
eliminates code duplication and improves the maintainability of the
driver code.
---
v3:
- fixed cpchnl2_func_type enum for PF device
v2:
- fixed test case failure
---
Praveen Shetty (4):
net/intel: add vCPF PMD support
net/idpf: add splitq jumbo packet handling
net/intel: add config queue support to vCPF
net/cpfl: add cpchnl get vport info support
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 +
drivers/net/intel/cpfl/cpfl_ethdev.c | 354 ++++++++++++++++--
drivers/net/intel/cpfl/cpfl_ethdev.h | 109 +++++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.c | 4 +-
drivers/net/intel/idpf/idpf_common_device.h | 3 +
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++-
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 ++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
11 files changed, 635 insertions(+), 83 deletions(-)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v3 1/4] net/intel: add vCPF PMD support
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-23 12:54 ` Shetty, Praveen
2025-09-29 12:18 ` Bruce Richardson
2025-09-23 12:54 ` [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
` (2 subsequent siblings)
3 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-23 12:54 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Atul Patel, Dhananjay Shukla
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds registration support for a new vCPF PMD.
The vCPF PMD is responsible for enabling control and data path
functionality for CPF VF devices.
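As background on what the registration below buys us, the probe
callback only runs for devices whose (vendor, device) pair appears in
the driver's PCI id table. A standalone sketch of that matching with
toy types (the Intel vendor id is 0x8086; the 0x1203 device id is the
one this patch adds):
#include <stdint.h>
#include <stdio.h>
struct pci_id { uint16_t vendor_id, device_id; };
#define INTEL_VENDOR_ID 0x8086
#define IXD_DEV_ID_VCPF 0x1203
static const struct pci_id vcpf_ids[] = {
	{ INTEL_VENDOR_ID, IXD_DEV_ID_VCPF },
	{ 0, 0 },	/* sentinel */
};
static int id_match(const struct pci_id *tbl, uint16_t ven, uint16_t dev)
{
	for (; tbl->vendor_id != 0; tbl++)
		if (tbl->vendor_id == ven && tbl->device_id == dev)
			return 1;
	return 0;
}
int main(void)
{
	printf("8086:1203 matches vcpf table: %d\n",
	       id_match(vcpf_ids, 0x8086, 0x1203));
	return 0;
}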
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 1 +
drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
drivers/net/intel/idpf/idpf_common_device.h | 1 +
4 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6d7b23ad7a..d6227c99b5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
switch (mbx_op) {
case idpf_mbq_opc_send_msg_to_peer_pf:
+ case idpf_mbq_opc_send_msg_to_peer_drv:
if (vc_op == VIRTCHNL2_OP_EVENT) {
cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
ctlq_msg.data_len);
@@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_pci_id pci_id_vcpf_map[] = {
+ { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
static struct cpfl_adapter_ext *
cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
{
@@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
.remove = cpfl_pci_remove,
};
+static struct rte_pci_driver rte_vcpf_pmd = {
+ .id_table = pci_id_vcpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = cpfl_pci_probe,
+ .remove = cpfl_pci_remove,
+};
+
/**
* Driver initialization routine.
* Invoked once at EAL init time.
@@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "* igb_uio | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
CPFL_TX_SINGLE_Q "=<0|1> "
CPFL_RX_SINGLE_Q "=<0|1> "
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index d4e1176ab1..2cfcdd6206 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -60,6 +60,7 @@
/* Device IDs */
#define IDPF_DEV_ID_CPF 0x1453
+#define IXD_DEV_ID_VCPF 0x1203
#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100
#define CPFL_HOST_ID_NUM 2
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..8c637a2fb6 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
struct idpf_ctlq_info *ctlq;
int ret = 0;
- if (hw->device_id == IDPF_DEV_ID_SRIOV)
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
else
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
@@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
struct idpf_hw *hw = &adapter->hw;
int ret;
- if (hw->device_id == IDPF_DEV_ID_SRIOV) {
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
ret = idpf_check_vf_reset_done(hw);
} else {
idpf_reset_pf(hw);
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 5f3e4a4fcf..d536ce7e15 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -11,6 +11,7 @@
#include "idpf_common_logs.h"
#define IDPF_DEV_ID_SRIOV 0x145C
+#define IXD_DEV_ID_VCPF 0x1203
#define IDPF_RSS_KEY_LEN 52
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-23 12:54 ` Shetty, Praveen
2025-09-29 12:32 ` Bruce Richardson
2025-09-23 12:54 ` [PATCH v3 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-23 12:54 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds jumbo (multi-segment) packet handling to the
idpf_dp_splitq_recv_pkts function.
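For illustration, a standalone sketch (toy mbuf struct, hypothetical
buffer sizes) of the reassembly scheme the patch uses: per-descriptor
buffers are chained until the descriptor's EOF flag, with pkt_len and
nb_segs accumulated on the first segment:
#include <stdint.h>
#include <stdio.h>
/* Toy stand-in for the rte_mbuf fields used by the reassembly loop. */
struct mbuf {
	uint16_t data_len;
	uint32_t pkt_len;
	uint16_t nb_segs;
	struct mbuf *next;
};
int main(void)
{
	/* e.g. a 5000-byte jumbo frame received in 2048-byte buffers */
	struct mbuf segs[3] = { { .data_len = 2048 }, { .data_len = 2048 },
				{ .data_len = 904 } };
	struct mbuf *first = NULL, *last = NULL;
	for (int i = 0; i < 3; i++) {
		struct mbuf *m = &segs[i];
		if (first == NULL) {	/* first buffer of the packet */
			first = m;
			first->nb_segs = 1;
			first->pkt_len = m->data_len;
		} else {		/* continuation buffer */
			first->pkt_len += m->data_len;
			first->nb_segs++;
			last->next = m;
		}
		last = m;	/* the descriptor EOF bit ends the chain */
	}
	last->next = NULL;
	printf("pkt_len=%u nb_segs=%u\n", first->pkt_len, first->nb_segs);
	return 0;
}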
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++++++++++++++++++-----
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..412aff8f5f 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
- struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *rxq = rx_queue;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
struct idpf_adapter *ad;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pktlen_gen_bufq_id =
rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
gen_id = (pktlen_gen_bufq_id &
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
@@ -697,16 +700,39 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = pkt_len;
rxm->data_len = pkt_len;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /*
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len =
+ (uint16_t)(first_seg->pkt_len +
+ pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
rxm->next = NULL;
- rxm->nb_segs = 1;
- rxm->port = rxq->port_id;
- rxm->ol_flags = 0;
- rxm->packet_type =
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+ first_seg->packet_type =
ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
- status_err0_qw1 = rx_desc->status_err0_qw1;
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
if (idpf_timestamp_dynflag > 0 &&
@@ -719,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*RTE_MBUF_DYNFIELD(rxm,
idpf_timestamp_dynfield_offset,
rte_mbuf_timestamp_t *) = ts_ns;
- rxm->ol_flags |= idpf_timestamp_dynflag;
+ first_seg->ol_flags |= idpf_timestamp_dynflag;
}
- rxm->ol_flags |= pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
- rx_pkts[nb_rx++] = rxm;
+ rx_pkts[nb_rx++] = first_seg;
+
+ first_seg = NULL;
}
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
rxq->bufq1->rx_next_avail = rx_id_bufq1;
if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v3 3/4] net/intel: add config queue support to vCPF
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-23 12:54 ` Shetty, Praveen
2025-09-29 13:40 ` Bruce Richardson
2025-09-23 12:54 ` [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-23 12:54 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
A "configuration queue" is a software term to denote
a hardware mailbox queue dedicated to NSS programming.
While the hardware does not have a construct of a
"configuration queue", software does to state clearly
the distinction between a queue software dedicates to
regular mailbox processing (e.g. CPChnl or Virtchnl)
and a queue software dedicates to NSS programming
(e.g. SEM/LEM rule programming).
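As a side note, the series also adds an idpf_struct_size() helper for
sizing virtchnl2 messages that end in a one-element chunk array (used
by vcpf_add_queues()/vcpf_del_queues() in the diff below). A minimal
standalone sketch of the pattern, with a toy message struct and plain
calloc() instead of rte_zmalloc():
#include <stdio.h>
#include <stdlib.h>
/* Same macro the series adds to idpf_osdep.h; sizeof() does not
 * evaluate its operand, so the NULL pointer below is never
 * dereferenced. */
#define idpf_struct_size(ptr, field, num) \
	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
struct chunk { unsigned int type, start_qid, num_q; };
struct msg {				/* toy virtchnl2-style message */
	unsigned int vport_id;
	unsigned short num_chunks;
	struct chunk chunks[1];		/* one chunk is already inline */
};
int main(void)
{
	unsigned short num_chunks = 4;
	struct msg *m = NULL;
	/* num_chunks - 1: the struct already carries one chunk. */
	size_t sz = idpf_struct_size(m, chunks, num_chunks - 1);
	m = calloc(1, sz);
	if (m == NULL)
		return 1;
	m->num_chunks = num_chunks;
	printf("message size for %u chunks: %zu bytes\n", num_chunks, sz);
	free(m);
	return 0;
}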
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 274 +++++++++++++++---
drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.h | 2 +
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
8 files changed, 449 insertions(+), 55 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index d6227c99b5..c411a2a024 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -29,6 +29,9 @@
#define CPFL_FLOW_PARSER "flow_parser"
#endif
+#define VCPF_FID 0
+#define CPFL_FID 6
+
rte_spinlock_t cpfl_adapter_lock;
/* A list for all adapters, one adapter matches one PCI device */
struct cpfl_adapter_list cpfl_adapter_list;
@@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
}
/* ignore if it is ctrl vport */
- if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
+ adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
return;
vport = cpfl_find_vport(adapter, vc_event->vport_id);
@@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
int i, ret;
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
return ret;
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
- VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
return ret;
@@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
}
return 0;
+
}
static int
@@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
@@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
@@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
- if (adapter->ctlqp[i])
+ for (i = 0; i < adapter->num_cfgq; i++) {
+ if (adapter->ctlqp[i]) {
cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
+ adapter->ctlqp[i] = NULL;
+ }
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->ctlqp) {
+ rte_free(adapter->ctlqp);
+ adapter->ctlqp = NULL;
+ }
}
static int
@@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
int ret = 0;
int i = 0;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
+ sizeof(struct idpf_ctlq_info *),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->ctlqp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->num_cfgq; i++) {
cfg_cq = NULL;
ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
&adapter->cfgq_info[i],
@@ -2007,6 +2049,62 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
+static
+int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)adapter->addq_recv_info;
+ struct vcpf_cfg_queue *cfgq;
+ struct virtchnl2_queue_reg_chunk *q_chnk;
+ u16 rx, tx, num_chunks, num_q, struct_size;
+ u32 q_id, q_type;
+
+ rx = 0; tx = 0;
+
+ cfgq = rte_zmalloc("cfgq", adapter->num_cfgq *
+ sizeof(struct vcpf_cfg_queue),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
+ return -ENOMEM;
+ }
+
+ struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
+ adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
+ rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
+
+ num_chunks = add_q->chunks.num_chunks;
+ for (u16 i = 0; i < num_chunks; i++) {
+ num_q = add_q->chunks.chunks[i].num_queues;
+ q_chnk = &add_q->chunks.chunks[i];
+ for (u16 j = 0; j < num_q; j++) {
+ if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
+ break;
+ q_id = q_chnk->start_queue_id + j;
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
+ cfgq[0].qid = q_id;
+ cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
+ tx++;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
+ cfgq[1].qid = q_id;
+ cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
+ rx++;
+ }
+ }
+ }
+
+ adapter->cfgq_in.cfgq = cfgq;
+ adapter->cfgq_in.num_cfgq = adapter->num_cfgq;
+
+ return 0;
+}
+
#define CPFL_CFGQ_RING_LEN 512
#define CPFL_CFGQ_DESCRIPTOR_SIZE 32
#define CPFL_CFGQ_BUFFER_SIZE 256
@@ -2017,32 +2115,71 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
{
struct cpfl_ctlq_create_info *create_cfgq_info;
struct cpfl_vport *vport;
+ struct vcpf_cfgq_info *cfgq_info = &adapter->cfgq_in;
int i, err;
uint32_t ring_size = CPFL_CFGQ_RING_SIZE * sizeof(struct idpf_ctlq_desc);
uint32_t buf_size = CPFL_CFGQ_RING_SIZE * CPFL_CFGQ_BUFFER_SIZE;
+ uint64_t tx_qtail_start;
+ uint64_t rx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint32_t rx_qtail_spacing;
vport = &adapter->ctrl_vport;
+
+ tx_qtail_start = vport->base.chunks_info.tx_qtail_start;
+ tx_qtail_spacing = vport->base.chunks_info.tx_qtail_spacing;
+ rx_qtail_start = vport->base.chunks_info.rx_qtail_start;
+ rx_qtail_spacing = vport->base.chunks_info.rx_qtail_spacing;
+
+ adapter->cfgq_info = rte_zmalloc("cfgq_info", adapter->num_cfgq *
+ sizeof(struct cpfl_ctlq_create_info),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->cfgq_info) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_info");
+ return -ENOMEM;
+ }
+
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (i % 2 == 0) {
- /* Setup Tx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid + i / 2;
+ /* Setup Tx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_TX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.tx_qtail_start +
- i / 2 * vport->base.chunks_info.tx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = tx_qtail_start +
+ i / 2 * tx_qtail_spacing;
+
} else {
- /* Setup Rx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid + i / 2;
+ /* Setup Rx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_RX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.rx_qtail_start +
- i / 2 * vport->base.chunks_info.rx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = rx_qtail_start +
+ i / 2 * rx_qtail_spacing;
+
+
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem,
buf_size)) {
err = -ENOMEM;
@@ -2050,19 +2187,24 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
}
}
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem,
- ring_size)) {
+ ring_size)) {
err = -ENOMEM;
goto free_mem;
}
}
+
return 0;
free_mem:
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
return err;
}
@@ -2107,7 +2249,10 @@ cpfl_ctrl_path_close(struct cpfl_adapter_ext *adapter)
{
cpfl_stop_cfgqs(adapter);
cpfl_remove_cfgqs(adapter);
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ else
+ vcpf_del_queues(adapter);
}
static int
@@ -2115,22 +2260,39 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
{
int ret;
- ret = cpfl_vc_create_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to create control vport");
- return ret;
- }
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ ret = cpfl_vc_create_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create control vport");
+ return ret;
+ }
- ret = cpfl_init_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to init control vport");
- goto err_init_ctrl_vport;
+ ret = cpfl_init_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init control vport");
+ goto err_init_ctrl_vport;
+ }
+ } else {
+ ret = vcpf_add_queues(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to add queues");
+ return ret;
+ }
+
+ ret = vcpf_save_chunk_in_cfgq(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to save config queue chunk");
+ return ret;
+ }
}
ret = cpfl_cfgq_setup(adapter);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to setup control queues");
- goto err_cfgq_setup;
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ goto err_cfgq_setup;
+ else
+ goto err_del_cfg;
}
ret = cpfl_add_cfgqs(adapter);
@@ -2153,9 +2315,13 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
cpfl_remove_cfgqs(adapter);
err_cfgq_setup:
err_init_ctrl_vport:
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+err_del_cfg:
+ vcpf_del_queues(adapter);
return ret;
}
static struct virtchnl2_get_capabilities req_caps = {
@@ -2291,12 +2457,29 @@ get_running_host_id(void)
return host_id;
}
+static int
+set_config_queue_details(struct cpfl_adapter_ext *adapter, struct rte_pci_addr *pci_addr)
+{
+ if (pci_addr->function == CPFL_FID) {
+ adapter->num_cfgq = CPFL_CFGQ_NUM;
+ adapter->num_rx_cfgq = CPFL_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = CPFL_TX_CFGQ_NUM;
+ } else if (pci_addr->function == VCPF_FID) {
+ adapter->num_cfgq = VCPF_CFGQ_NUM;
+ adapter->num_rx_cfgq = VCPF_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = VCPF_TX_CFGQ_NUM;
+ }
+
+ return 0;
+}
+
static int
cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
struct cpfl_devargs *devargs)
{
struct idpf_adapter *base = &adapter->base;
struct idpf_hw *hw = &base->hw;
+ struct rte_pci_addr *pci_addr = &pci_dev->addr;
int ret = 0;
#ifndef RTE_HAS_JANSSON
@@ -2348,10 +2531,23 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
goto err_vports_alloc;
}
- ret = cpfl_ctrl_path_open(adapter);
+ /* set the number of config queues to be requested */
+ ret = set_config_queue_details(adapter, pci_addr);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to setup control path");
- goto err_create_ctrl_vport;
+ PMD_INIT_LOG(ERR, "Failed to set the config queue details");
+ return -1;
+ }
+
+ if (pci_addr->function == VCPF_FID || pci_addr->function == CPFL_FID) {
+ ret = cpfl_ctrl_path_open(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup control path");
+ if (pci_addr->function == CPFL_FID)
+ goto err_create_ctrl_vport;
+ else
+ return ret;
+ }
}
#ifdef RTE_HAS_JANSSON
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 2cfcdd6206..81f223eef5 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -90,6 +90,9 @@
#define CPFL_FPCP_CFGQ_TX 0
#define CPFL_FPCP_CFGQ_RX 1
#define CPFL_CFGQ_NUM 8
+#define VCPF_RX_CFGQ_NUM 1
+#define VCPF_TX_CFGQ_NUM 1
+#define VCPF_CFGQ_NUM 2
/* bit[15:14] type
* bit[13] host/accelerator core
@@ -201,6 +204,30 @@ struct cpfl_metadata {
struct cpfl_metadata_chunk chunks[CPFL_META_LENGTH];
};
+/**
+ * struct vcpf_cfg_queue - config queue information
+ * @qid: rx/tx queue id
+ * @qtail_reg_start: rx/tx tail queue register start
+ * @qtail_reg_spacing: rx/tx tail queue register spacing
+ */
+struct vcpf_cfg_queue {
+ u32 qid;
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+};
+
+/**
+ * struct vcpf_cfgq_info - config queue information
+ * @num_cfgq: number of config queues
+ * @cfgq_add: config queue add information
+ * @cfgq: config queue information
+ */
+struct vcpf_cfgq_info {
+ u16 num_cfgq;
+ struct virtchnl2_add_queues *cfgq_add;
+ struct vcpf_cfg_queue *cfgq;
+};
+
struct cpfl_adapter_ext {
TAILQ_ENTRY(cpfl_adapter_ext) next;
struct idpf_adapter base;
@@ -230,8 +257,13 @@ struct cpfl_adapter_ext {
/* ctrl vport and ctrl queues. */
struct cpfl_vport ctrl_vport;
uint8_t ctrl_vport_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
- struct idpf_ctlq_info *ctlqp[CPFL_CFGQ_NUM];
- struct cpfl_ctlq_create_info cfgq_info[CPFL_CFGQ_NUM];
+ struct idpf_ctlq_info **ctlqp;
+ struct cpfl_ctlq_create_info *cfgq_info;
+ struct vcpf_cfgq_info cfgq_in;
+ uint8_t addq_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
+ uint16_t num_cfgq;
+ uint16_t num_rx_cfgq;
+ uint16_t num_tx_cfgq;
uint8_t host_id;
};
@@ -252,6 +284,8 @@ int cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter);
int cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter);
int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
uint32_t size, int batch_size);
+int vcpf_add_queues(struct cpfl_adapter_ext *adapter);
+int vcpf_del_queues(struct cpfl_adapter_ext *adapter);
#define CPFL_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/intel/cpfl/cpfl_vchnl.c b/drivers/net/intel/cpfl/cpfl_vchnl.c
index 7d277a0e8e..9c842b60df 100644
--- a/drivers/net/intel/cpfl/cpfl_vchnl.c
+++ b/drivers/net/intel/cpfl/cpfl_vchnl.c
@@ -106,6 +106,106 @@ cpfl_vc_create_ctrl_vport(struct cpfl_adapter_ext *adapter)
return err;
}
+#define VCPF_CFQ_MB_INDEX 0xFF
+int
+vcpf_add_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues add_cfgq;
+ struct idpf_cmd_info args;
+ int err;
+ u16 num_cfgq = 1;
+
+ memset(&add_cfgq, 0, sizeof(struct virtchnl2_add_queues));
+
+ add_cfgq.num_tx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.num_rx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX;
+
+ add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ add_cfgq.num_tx_complq = 0;
+ add_cfgq.num_rx_bufq = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_ADD_QUEUES;
+ args.in_args = (uint8_t *)&add_cfgq;
+ args.in_args_size = sizeof(add_cfgq);
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_ADD_QUEUES");
+ return err;
+ }
+
+ rte_memcpy(adapter->addq_recv_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+
+ return err;
+}
+
+int
+vcpf_del_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_del_ena_dis_queues *del_cfgq;
+ u16 num_chunks;
+ struct idpf_cmd_info args;
+ int i, err, size;
+
+ num_chunks = adapter->cfgq_in.cfgq_add->chunks.num_chunks;
+ size = idpf_struct_size(del_cfgq, chunks.chunks, (num_chunks - 1));
+ del_cfgq = rte_zmalloc("del_cfgq", size, 0);
+ if (!del_cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_del_ena_dis_queues");
+ err = -ENOMEM;
+ return err;
+ }
+
+ del_cfgq->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ del_cfgq->chunks.num_chunks = num_chunks;
+
+ /* fill config queue chunk data */
+ for (i = 0; i < num_chunks; i++) {
+ del_cfgq->chunks.chunks[i].type =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].type;
+ del_cfgq->chunks.chunks[i].start_queue_id =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].start_queue_id;
+ del_cfgq->chunks.chunks[i].num_queues =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].num_queues;
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DEL_QUEUES;
+ args.in_args = (uint8_t *)del_cfgq;
+ args.in_args_size = idpf_struct_size(del_cfgq, chunks.chunks,
+ (del_cfgq->chunks.num_chunks - 1));
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ rte_free(del_cfgq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_DEL_QUEUES");
+ return err;
+ }
+
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
+ adapter->cfgq_in.num_cfgq = 0;
+ if (adapter->cfgq_in.cfgq_add) {
+ rte_free(adapter->cfgq_in.cfgq_add);
+ adapter->cfgq_in.cfgq_add = NULL;
+ }
+ if (adapter->cfgq_in.cfgq) {
+ rte_free(adapter->cfgq_in.cfgq);
+ adapter->cfgq_in.cfgq = NULL;
+ }
+ return err;
+}
+
int
cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
{
@@ -116,13 +216,16 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_RX_CFGQ_NUM;
+ num_qs = adapter->num_rx_cfgq;
+
size = sizeof(*vc_rxqs) + (num_qs - 1) *
sizeof(struct virtchnl2_rxq_info);
vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
@@ -131,7 +234,12 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_rxqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_rxqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_rxqs->vport_id = vport->base.vport_id;
+
vc_rxqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
@@ -141,7 +249,8 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
rxq_info->queue_id = adapter->cfgq_info[2 * i + 1].id;
rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
rxq_info->data_buffer_size = adapter->cfgq_info[2 * i + 1].buf_size;
- rxq_info->max_pkt_size = vport->base.max_pkt_len;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF)
+ rxq_info->max_pkt_size = vport->base.max_pkt_len;
rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
rxq_info->ring_len = adapter->cfgq_info[2 * i + 1].len;
@@ -172,13 +281,16 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This txq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This txq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_TX_CFGQ_NUM;
+ num_qs = adapter->num_tx_cfgq;
+
size = sizeof(*vc_txqs) + (num_qs - 1) *
sizeof(struct virtchnl2_txq_info);
vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
@@ -187,7 +299,12 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_txqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_txqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_txqs->vport_id = vport->base.vport_id;
+
vc_txqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
diff --git a/drivers/net/intel/idpf/base/idpf_osdep.h b/drivers/net/intel/idpf/base/idpf_osdep.h
index 7b43df3079..47b95d0da6 100644
--- a/drivers/net/intel/idpf/base/idpf_osdep.h
+++ b/drivers/net/intel/idpf/base/idpf_osdep.h
@@ -361,6 +361,9 @@ idpf_hweight32(u32 num)
#endif
+#define idpf_struct_size(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
enum idpf_mac_type {
IDPF_MAC_UNKNOWN = 0,
IDPF_MAC_PF,
diff --git a/drivers/net/intel/idpf/base/virtchnl2.h b/drivers/net/intel/idpf/base/virtchnl2.h
index cf010c0504..6cfb4f56fa 100644
--- a/drivers/net/intel/idpf/base/virtchnl2.h
+++ b/drivers/net/intel/idpf/base/virtchnl2.h
@@ -1024,7 +1024,8 @@ struct virtchnl2_add_queues {
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
- u8 pad[4];
+ u8 mbx_q_index;
+ u8 pad[3];
struct virtchnl2_queue_reg_chunks chunks;
};
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index d536ce7e15..f962a3f805 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -45,6 +45,8 @@
(sizeof(struct virtchnl2_ptype) + \
(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+#define VCPF_CFGQ_VPORT_ID 0xFFFFFFFF
+
struct idpf_adapter {
struct idpf_hw hw;
struct virtchnl2_version_info virtchnl_version;
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..e927d7415a 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -787,6 +787,44 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
return err;
}
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue_vcpf)
+int
+idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (uint8_t *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_vc_cmd_execute(adapter, &args);
+ if (err != 0)
+ DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
int
idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.h b/drivers/net/intel/idpf/idpf_common_virtchnl.h
index 68cba9111c..90fce65676 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.h
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.h
@@ -76,6 +76,9 @@ __rte_internal
int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
uint32_t type, bool on);
__rte_internal
+int idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on);
+__rte_internal
int idpf_vc_queue_grps_del(struct idpf_vport *vport,
uint16_t num_q_grps,
struct virtchnl2_queue_group_id *qg_ids);
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
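A note on the idpf_struct_size() helper added in the patch above: the
virtchnl2 structures declare one element in their trailing array, which is
why callers such as vcpf_del_queues() pass (num_chunks - 1) extra elements.
A minimal standalone sketch of the arithmetic, with hypothetical struct
names:

#include <stdio.h>

struct chunk {
	unsigned int id;
};

struct msg {
	unsigned int num_chunks;
	struct chunk chunks[1];	/* one element already counted in sizeof(struct msg) */
};

#define idpf_struct_size(ptr, field, num) \
	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))

int main(void)
{
	struct msg *m = NULL;	/* only used under sizeof, never dereferenced */
	unsigned int num_chunks = 4;
	size_t sz = idpf_struct_size(m, chunks, num_chunks - 1);

	printf("%zu bytes hold %u chunks\n", sz, num_chunks);
	return 0;
}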
* [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
` (2 preceding siblings ...)
2025-09-23 12:54 ` [PATCH v3 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-23 12:54 ` Shetty, Praveen
2025-09-26 8:11 ` Shetty, Praveen
3 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-23 12:54 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
vCPF only receives relative queue ids from the FW.
The CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used
to get the absolute rx/tx queue ids and the VSI of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO
cpchnl message from the vCPF PMD.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: Atul Patel <Atul.Patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 ++++
drivers/net/intel/cpfl/cpfl_ethdev.c | 63 +++++++++++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 70 +++++++++++++++++++++-------
3 files changed, 125 insertions(+), 16 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_cpchnl.h b/drivers/net/intel/cpfl/cpfl_cpchnl.h
index 0c9dfcdbf1..c56d3e6cea 100644
--- a/drivers/net/intel/cpfl/cpfl_cpchnl.h
+++ b/drivers/net/intel/cpfl/cpfl_cpchnl.h
@@ -140,6 +140,14 @@ enum cpchnl2_func_type {
CPCHNL2_FTYPE_LAN_MAX
};
+/**
+ * @brief function types
+ */
+enum vcpf_cpchnl2_func_type {
+ VCPF_CPCHNL2_FTYPE_LAN_PF = 0,
+ VCPF_CPCHNL2_FTYPE_LAN_VF = 1,
+};
+
/**
* @brief containing vport id & type
*/
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index c411a2a024..7b7e21afa6 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1902,6 +1902,43 @@ cpfl_dev_alarm_handler(void *param)
rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
}
+static int
+vcpf_save_vport_info_response(struct cpfl_vport *cpfl_vport,
+ struct cpchnl2_get_vport_info_response *response)
+{
+ struct cpchnl2_vport_info *info;
+ struct vcpf_vport_info *vport_info;
+ struct cpchnl2_queue_group_info *qgp;
+ struct cpchnl2_queue_chunk *q_chnk;
+ u16 num_queue_groups;
+ u16 num_chunks;
+ u32 q_type;
+
+ info = &response->info;
+ vport_info = &cpfl_vport->vport_info;
+ vport_info->vport_index = info->vport_index;
+ vport_info->vsi_id = info->vsi_id;
+
+ num_queue_groups = response->queue_groups.num_queue_groups;
+ for (u16 i = 0; i < num_queue_groups; i++) {
+ qgp = &response->queue_groups.groups[i];
+ num_chunks = qgp->chunks.num_chunks;
+ /* rx q and tx q are stored in first 2 chunks */
+ for (u16 j = 0; j < (num_chunks - 2); j++) {
+ q_chnk = &qgp->chunks.chunks[j];
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport_info->abs_start_txq_id = q_chnk->start_queue_id;
+ vport_info->num_tx_q = q_chnk->num_queues;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport_info->abs_start_rxq_id = q_chnk->start_queue_id;
+ vport_info->num_rx_q = q_chnk->num_queues;
+ }
+ }
+ }
+ return 0;
+}
+
static int
cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
@@ -2720,7 +2757,11 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
/* for sending create vport virtchnl msg prepare */
struct virtchnl2_create_vport create_vport_info;
struct virtchnl2_add_queue_groups p2p_queue_grps_info;
+ struct cpchnl2_get_vport_info_response response;
uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
+ struct cpfl_vport_id vi;
+ struct cpchnl2_vport_id v_id;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
int ret = 0;
dev->dev_ops = &cpfl_eth_dev_ops;
@@ -2790,6 +2831,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
cpfl_p2p_queue_grps_del(vport);
}
}
+ /* get the vport info */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ vi.func_type = VCPF_CPCHNL2_FTYPE_LAN_VF;
+ vi.pf_id = CPFL_HOST0_CPF_ID;
+ vi.vf_id = pci_dev->addr.function;
+
+ v_id.vport_id = cpfl_vport->base.vport_info.info.vport_id;
+ v_id.vport_type = cpfl_vport->base.vport_info.info.vport_type;
+
+ ret = cpfl_cc_vport_info_get(adapter, &v_id, &vi, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to send vport info cpchnl message.");
+ return -1;
+ }
+
+ ret = vcpf_save_vport_info_response(cpfl_vport, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to save cpchnl response.");
+ return -1;
+ }
+ }
return 0;
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index 81f223eef5..90b9e05819 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -165,10 +165,20 @@ struct cpfl_itf {
void *data;
};
+struct vcpf_vport_info {
+ u16 vport_index;
+ u16 vsi_id;
+ u32 abs_start_txq_id;
+ u32 num_tx_q;
+ u32 abs_start_rxq_id;
+ u32 num_rx_q;
+};
+
struct cpfl_vport {
struct cpfl_itf itf;
struct idpf_vport base;
struct p2p_queue_chunks_info *p2p_q_chunks_info;
+ struct vcpf_vport_info vport_info;
struct rte_mempool *p2p_mp;
@@ -320,6 +330,7 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
uint32_t vport_id;
int ret;
struct cpfl_vport_id vport_identity;
+ u16 vsi_id = 0;
if (!itf)
return CPFL_INVALID_HW_ID;
@@ -329,24 +340,30 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
return repr->vport_info->vport.info.vsi_id;
} else if (itf->type == CPFL_ITF_TYPE_VPORT) {
- vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
-
- vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
- /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
- vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
- CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
- vport_identity.vf_id = 0;
- vport_identity.vport_id = vport_id;
- ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
- &vport_identity,
- (void **)&info);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "vport id not exist");
- goto err;
+ if (itf->adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
+
+ vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
+ /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
+ vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
+ CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
+ vport_identity.vf_id = 0;
+ vport_identity.vport_id = vport_id;
+ ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
+ &vport_identity,
+ (void **)&info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "vport id not exist");
+ goto err;
+ }
+
+ vsi_id = info->vport.info.vsi_id;
+ } else {
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vsi_id = (uint16_t)((struct cpfl_vport *)itf)->vport_info.vsi_id;
}
-
- return info->vport.info.vsi_id;
}
+ return vsi_id;
err:
return CPFL_INVALID_HW_ID;
@@ -375,4 +392,25 @@ cpfl_get_itf_by_port_id(uint16_t port_id)
return CPFL_DEV_TO_ITF(dev);
}
+
+static inline uint32_t
+vcpf_get_abs_qid(uint16_t port_id, uint32_t queue_type)
+{
+ struct cpfl_itf *itf = cpfl_get_itf_by_port_id(port_id);
+ struct cpfl_vport *vport;
+ if (!itf)
+ return CPFL_INVALID_HW_ID;
+ if (itf->type == CPFL_ITF_TYPE_VPORT) {
+ vport = (void *)itf;
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ switch (queue_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ return vport->vport_info.abs_start_txq_id;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ return vport->vport_info.abs_start_rxq_id;
+ }
+ }
+ }
+ return 0;
+}
#endif /* _CPFL_ETHDEV_H_ */
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
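Once the cpchnl response is cached, a caller can translate the relative
queue ids the FW hands to vCPF into absolute ids. A hedged sketch built on
the vcpf_get_abs_qid() helper above (the wrapper name and the assumption
that queue ids inside a chunk are contiguous are mine, not from the patch):

static inline uint32_t
vcpf_abs_txq_id(uint16_t port_id, uint32_t rel_qid)
{
	uint32_t base = vcpf_get_abs_qid(port_id, VIRTCHNL2_QUEUE_TYPE_TX);

	if (base == CPFL_INVALID_HW_ID)
		return base;
	/* assumes queue ids inside a chunk are contiguous */
	return base + rel_qid;
}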
* RE: [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support
2025-09-23 12:54 ` [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
@ 2025-09-26 8:11 ` Shetty, Praveen
0 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-26 8:11 UTC (permalink / raw)
To: dev
From: Praveen Shetty <praveen.shetty@intel.com>
vCPF only receives relative queue ids from the FW.
The CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used to get the absolute rx/tx queue ids and the VSI of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO cpchnl message from the vCPF PMD.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: Atul Patel <Atul.Patel@intel.com>
--
Recheck-request: iol-intel-Performance
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v3 1/4] net/intel: add vCPF PMD support
2025-09-23 12:54 ` [PATCH v3 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-29 12:18 ` Bruce Richardson
2025-09-29 18:55 ` Shetty, Praveen
0 siblings, 1 reply; 35+ messages in thread
From: Bruce Richardson @ 2025-09-29 12:18 UTC (permalink / raw)
To: Shetty, Praveen; +Cc: aman.deep.singh, dev, Atul Patel, Dhananjay Shukla
On Tue, Sep 23, 2025 at 02:54:52PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> This patch adds the registration support for a new vCPF PMD.
> vCPF PMD is responsible for enabling control and data path
> functionality for the CPF VF devices.
>
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Tested-by: Atul Patel <atul.patel@intel.com>
> Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> ---
A few minor comments inline below.
/Bruce
> drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
> drivers/net/intel/cpfl/cpfl_ethdev.h | 1 +
> drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
> drivers/net/intel/idpf/idpf_common_device.h | 1 +
> 4 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
> index 6d7b23ad7a..d6227c99b5 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
> @@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
>
> switch (mbx_op) {
> case idpf_mbq_opc_send_msg_to_peer_pf:
> + case idpf_mbq_opc_send_msg_to_peer_drv:
> if (vc_op == VIRTCHNL2_OP_EVENT) {
> cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
> ctlq_msg.data_len);
> @@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
> { .vendor_id = 0, /* sentinel */ },
> };
>
> +static const struct rte_pci_id pci_id_vcpf_map[] = {
> + { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
> + { .vendor_id = 0, /* sentinel */ },
> +};
> +
> static struct cpfl_adapter_ext *
> cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
> {
> @@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
> .remove = cpfl_pci_remove,
> };
>
> +static struct rte_pci_driver rte_vcpf_pmd = {
> + .id_table = pci_id_vcpf_map,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> + RTE_PCI_DRV_PROBE_AGAIN,
> + .probe = cpfl_pci_probe,
> + .remove = cpfl_pci_remove,
> +};
> +
> /**
> * Driver initialization routine.
> * Invoked once at EAL init time.
> @@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
> RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> +RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "* igb_uio | vfio-pci");
Minor question - do you know if this works with uio_pci_generic, or has it
been tested? With igb_uio largely unmaintained right now, it would be good
to be able to recommend the in-tree uio driver if vfio is not an option.
> RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> CPFL_TX_SINGLE_Q "=<0|1> "
> CPFL_RX_SINGLE_Q "=<0|1> "
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
> index d4e1176ab1..2cfcdd6206 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
> @@ -60,6 +60,7 @@
>
> /* Device IDs */
> #define IDPF_DEV_ID_CPF 0x1453
> +#define IXD_DEV_ID_VCPF 0x1203
> #define VIRTCHNL2_QUEUE_GROUP_P2P 0x100
>
I see the same device id added twice, once in cpfl and once in idpf
drivers. Can the cpfl driver re-use the definition from idpf_common_device
and save duplication?
> #define CPFL_HOST_ID_NUM 2
> diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
> index ff1fbcd2b4..8c637a2fb6 100644
> --- a/drivers/net/intel/idpf/idpf_common_device.c
> +++ b/drivers/net/intel/idpf/idpf_common_device.c
> @@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
> struct idpf_ctlq_info *ctlq;
> int ret = 0;
>
> - if (hw->device_id == IDPF_DEV_ID_SRIOV)
> + if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
> ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
> else
> ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
> @@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
> struct idpf_hw *hw = &adapter->hw;
> int ret;
>
> - if (hw->device_id == IDPF_DEV_ID_SRIOV) {
> + if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
> ret = idpf_check_vf_reset_done(hw);
> } else {
> idpf_reset_pf(hw);
> diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
> index 5f3e4a4fcf..d536ce7e15 100644
> --- a/drivers/net/intel/idpf/idpf_common_device.h
> +++ b/drivers/net/intel/idpf/idpf_common_device.h
> @@ -11,6 +11,7 @@
> #include "idpf_common_logs.h"
>
> #define IDPF_DEV_ID_SRIOV 0x145C
> +#define IXD_DEV_ID_VCPF 0x1203
>
> #define IDPF_RSS_KEY_LEN 52
>
> --
> 2.37.3
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling
2025-09-23 12:54 ` [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-29 12:32 ` Bruce Richardson
2025-09-29 14:39 ` Stephen Hemminger
2025-09-29 18:55 ` Shetty, Praveen
0 siblings, 2 replies; 35+ messages in thread
From: Bruce Richardson @ 2025-09-29 12:32 UTC (permalink / raw)
To: Shetty, Praveen; +Cc: aman.deep.singh, dev, Dhananjay Shukla, atulpatel261194
On Tue, Sep 23, 2025 at 02:54:53PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> This patch will add the jumbo packets handling in the
> idpf_dp_splitq_recv_pkts function.
>
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
> ---
One small comment inline below.
/Bruce
> drivers/net/intel/idpf/idpf_common_rxtx.c | 50 ++++++++++++++++++-----
> 1 file changed, 40 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
> index eb25b091d8..412aff8f5f 100644
> --- a/drivers/net/intel/idpf/idpf_common_rxtx.c
> +++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
> @@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
> volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
> uint16_t pktlen_gen_bufq_id;
> - struct idpf_rx_queue *rxq;
> + struct idpf_rx_queue *rxq = rx_queue;
> const uint32_t *ptype_tbl;
> uint8_t status_err0_qw1;
> struct idpf_adapter *ad;
> + struct rte_mbuf *first_seg = rxq->pkt_first_seg;
> + struct rte_mbuf *last_seg = rxq->pkt_last_seg;
> struct rte_mbuf *rxm;
> uint16_t rx_id_bufq1;
> uint16_t rx_id_bufq2;
> @@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>
> pktlen_gen_bufq_id =
> rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
> + status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
> gen_id = (pktlen_gen_bufq_id &
> VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
> VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
> @@ -697,16 +700,39 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rxm->pkt_len = pkt_len;
> rxm->data_len = pkt_len;
> rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +
> + /*
> + * If this is the first buffer of the received packet, set the
> + * pointer to the first mbuf of the packet and initialize its
> + * context. Otherwise, update the total length and the number
> + * of segments of the current scattered packet, and update the
> + * pointer to the last mbuf of the current packet.
> + */
> + if (!first_seg) {
> + first_seg = rxm;
> + first_seg->nb_segs = 1;
> + first_seg->pkt_len = pkt_len;
> + } else {
> + first_seg->pkt_len =
> + (uint16_t)(first_seg->pkt_len +
> + pkt_len);
Since we allow 100 characters per line, does this line really need to be
split into 3? [I realise this is a copy-paste from other drivers, but we
can clean it up as new code here]
> + first_seg->nb_segs++;
> + last_seg->next = rxm;
> + }
> +
> + if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
> + last_seg = rxm;
> + continue;
> + }
> +
> rxm->next = NULL;
> - rxm->nb_segs = 1;
> - rxm->port = rxq->port_id;
> - rxm->ol_flags = 0;
> - rxm->packet_type =
> + first_seg->port = rxq->port_id;
> + first_seg->ol_flags = 0;
> + first_seg->packet_type =
> ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
> VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
> VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
> -
> - status_err0_qw1 = rx_desc->status_err0_qw1;
> + status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
> pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
> pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
> if (idpf_timestamp_dynflag > 0 &&
> @@ -719,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> *RTE_MBUF_DYNFIELD(rxm,
> idpf_timestamp_dynfield_offset,
> rte_mbuf_timestamp_t *) = ts_ns;
> - rxm->ol_flags |= idpf_timestamp_dynflag;
> + first_seg->ol_flags |= idpf_timestamp_dynflag;
> }
>
> - rxm->ol_flags |= pkt_flags;
> + first_seg->ol_flags |= pkt_flags;
>
> - rx_pkts[nb_rx++] = rxm;
> + rx_pkts[nb_rx++] = first_seg;
> +
> + first_seg = NULL;
> }
>
> if (nb_rx > 0) {
> rxq->rx_tail = rx_id;
> + rxq->pkt_first_seg = first_seg;
> + rxq->pkt_last_seg = last_seg;
> if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
> rxq->bufq1->rx_next_avail = rx_id_bufq1;
> if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
> --
> 2.37.3
>
^ permalink raw reply [flat|nested] 35+ messages in thread
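The core of the change is that a jumbo packet can span receive bursts, so
the chain in progress is parked in the queue structure between calls. A
condensed sketch of the mechanism (names from the patch; descriptor
parsing is elided behind comments):

	struct rte_mbuf *first_seg = rxq->pkt_first_seg;	/* chain carried over */
	struct rte_mbuf *last_seg = rxq->pkt_last_seg;

	while (nb_rx < nb_pkts) {
		/* ... fetch rxm, pkt_len and the EOF bit from the next descriptor ... */
		if (first_seg == NULL) {
			first_seg = rxm;		/* start a new chain */
			first_seg->nb_segs = 1;
			first_seg->pkt_len = pkt_len;
		} else {
			first_seg->pkt_len += pkt_len;
			first_seg->nb_segs++;
			last_seg->next = rxm;
		}
		if (!eof) {				/* more segments follow */
			last_seg = rxm;
			continue;
		}
		rx_pkts[nb_rx++] = first_seg;		/* complete packet */
		first_seg = NULL;
	}
	rxq->pkt_first_seg = first_seg;			/* may be mid-chain at burst end */
	rxq->pkt_last_seg = last_seg;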
* Re: [PATCH v3 3/4] net/intel: add config queue support to vCPF
2025-09-23 12:54 ` [PATCH v3 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-29 13:40 ` Bruce Richardson
2025-09-29 19:53 ` Shetty, Praveen
0 siblings, 1 reply; 35+ messages in thread
From: Bruce Richardson @ 2025-09-29 13:40 UTC (permalink / raw)
To: Shetty, Praveen; +Cc: aman.deep.singh, dev, Dhananjay Shukla, Atul Patel
On Tue, Sep 23, 2025 at 02:54:54PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> A "configuration queue" is a software term to denote
> a hardware mailbox queue dedicated to NSS programming.
> While the hardware does not have a construct of a
> "configuration queue", software does to state clearly
> the distinction between a queue software dedicates to
> regular mailbox processing (e.g. CPChnl or Virtchnl)
> and a queue software dedicates to NSS programming
> (e.g. SEM/LEM rule programming).
>
Please provide expansions or clarifications for the acronyms used in the
commit message, so that the commit log is understandable for those unaware
of what the NSS is, or what SEM/LEM refers to. As far as I know, these are
not generally known terms in the industry.
Also, you say that the hardware doesn't have a config queue, but software
does - I think that needs a bit of explanation as to what exactly the
patch is doing/implementing. How is software providing a special config
queue if the facility is not provided by HW.
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> Tested-by: Atul Patel <atul.patel@intel.com>
> ---
Couple of small comments inline below.
/Bruce
> drivers/net/intel/cpfl/cpfl_ethdev.c | 274 +++++++++++++++---
> drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
> drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
> drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
> drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
> drivers/net/intel/idpf/idpf_common_device.h | 2 +
> drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
> drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
> 8 files changed, 449 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
> index d6227c99b5..c411a2a024 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
> @@ -29,6 +29,9 @@
> #define CPFL_FLOW_PARSER "flow_parser"
> #endif
>
> +#define VCPF_FID 0
> +#define CPFL_FID 6
> +
> rte_spinlock_t cpfl_adapter_lock;
> /* A list for all adapters, one adapter matches one PCI device */
> struct cpfl_adapter_list cpfl_adapter_list;
> @@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
> }
>
> /* ignore if it is ctrl vport */
> - if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
> + if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
> + adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
> return;
>
> vport = cpfl_find_vport(adapter, vc_event->vport_id);
> @@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
> {
> int i, ret;
>
> - for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
> + for (i = 0; i < adapter->num_tx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[0].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
> +
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
> return ret;
> }
> }
>
> - for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
> - VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> + for (i = 0; i < adapter->num_rx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[1].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> +
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
> return ret;
> @@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
> }
>
> return 0;
> +
> }
>
> static int
> @@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
> return ret;
> }
>
> - for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
> + for (i = 0; i < adapter->num_tx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[0].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
> @@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
> }
> }
>
> - for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
> + for (i = 0; i < adapter->num_rx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[1].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
> @@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
>
> create_cfgq_info = adapter->cfgq_info;
>
> - for (i = 0; i < CPFL_CFGQ_NUM; i++) {
> - if (adapter->ctlqp[i])
> + for (i = 0; i < adapter->num_cfgq; i++) {
> + if (adapter->ctlqp[i]) {
> cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
> + adapter->ctlqp[i] = NULL;
> + }
> if (create_cfgq_info[i].ring_mem.va)
> idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
> if (create_cfgq_info[i].buf_mem.va)
> idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
> }
> + if (adapter->ctlqp) {
> + rte_free(adapter->ctlqp);
> + adapter->ctlqp = NULL;
> + }
> }
>
> static int
> @@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
> int ret = 0;
> int i = 0;
>
> - for (i = 0; i < CPFL_CFGQ_NUM; i++) {
> + adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
> + sizeof(struct idpf_ctlq_info *),
> + RTE_CACHE_LINE_SIZE);
> +
> + if (!adapter->ctlqp) {
> + PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < adapter->num_cfgq; i++) {
> cfg_cq = NULL;
> ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
> &adapter->cfgq_info[i],
> @@ -2007,6 +2049,62 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
> return ret;
> }
>
> +static
> +int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
> +{
> + struct virtchnl2_add_queues *add_q =
> + (struct virtchnl2_add_queues *)adapter->addq_recv_info;
> + struct vcpf_cfg_queue *cfgq;
> + struct virtchnl2_queue_reg_chunk *q_chnk;
> + u16 rx, tx, num_chunks, num_q, struct_size;
> + u32 q_id, q_type;
> +
> + rx = 0; tx = 0;
> +
> + cfgq = rte_zmalloc("cfgq", adapter->num_cfgq *
> + sizeof(struct vcpf_cfg_queue),
> + RTE_CACHE_LINE_SIZE);
> +
I suspect you can probably fix both sides of the multiply on a single line
here, and still be within 100 chars. That will make the code slightly
easier to read.
> + if (!cfgq) {
> + PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
> + return -ENOMEM;
> + }
> +
> + struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
> + adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
Missing check for a failed zmalloc call.
> + rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
> +
> + num_chunks = add_q->chunks.num_chunks;
> + for (u16 i = 0; i < num_chunks; i++) {
> + num_q = add_q->chunks.chunks[i].num_queues;
> + q_chnk = &add_q->chunks.chunks[i];
> + for (u16 j = 0; j < num_q; j++) {
> + if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
> + break;
> + q_id = q_chnk->start_queue_id + j;
> + q_type = q_chnk->type;
> + if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
> + cfgq[0].qid = q_id;
> + cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
> + cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
> + q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
> + tx++;
> + } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
> + cfgq[1].qid = q_id;
> + cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
> + cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
> + q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
> + rx++;
> + }
> + }
> + }
> +
<snip>
^ permalink raw reply [flat|nested] 35+ messages in thread
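For reference, a sketch of that allocation with the missing NULL check
added (same context and names as the vcpf_save_chunk_in_cfgq() hunk quoted
above; the earlier cfgq allocation has to be released on the error path):

	struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
	adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
	if (adapter->cfgq_in.cfgq_add == NULL) {
		PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_add");
		rte_free(cfgq);
		return -ENOMEM;
	}
	rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);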
* Re: [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling
2025-09-29 12:32 ` Bruce Richardson
@ 2025-09-29 14:39 ` Stephen Hemminger
2025-09-29 18:55 ` Shetty, Praveen
1 sibling, 0 replies; 35+ messages in thread
From: Stephen Hemminger @ 2025-09-29 14:39 UTC (permalink / raw)
To: Bruce Richardson
Cc: Shetty, Praveen, aman.deep.singh, dev, Dhananjay Shukla, atulpatel261194
On Mon, 29 Sep 2025 13:32:15 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> > + first_seg->pkt_len =
> > + (uint16_t)(first_seg->pkt_len +
> > + pkt_len);
>
> Since we allow 100 characters per line, does this line really need to be
> split into 3? [I realise this is a copy-paste from other drivers, but we
> can clean it up as new code here]
Also why the cast?
^ permalink raw reply [flat|nested] 35+ messages in thread
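Both comments point in the same direction: the pkt_len field of rte_mbuf
is a uint32_t, so the uint16_t cast is not just noise, it can truncate the
accumulated length of a large chain. A sketch of the branch after both
cleanups (same variables as in idpf_dp_splitq_recv_pkts()):

	} else {
		/* pkt_len is 32-bit in rte_mbuf; plain += is correct and fits one line */
		first_seg->pkt_len += pkt_len;
		first_seg->nb_segs++;
		last_seg->next = rxm;
	}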
* RE: [PATCH v3 1/4] net/intel: add vCPF PMD support
2025-09-29 12:18 ` Bruce Richardson
@ 2025-09-29 18:55 ` Shetty, Praveen
0 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-29 18:55 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Singh, Aman Deep, dev, Patel, Atul, Shukla, Dhananjay
On Tue, Sep 23, 2025 at 02:54:52PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> This patch adds the registration support for a new vCPF PMD.
> vCPF PMD is responsible for enabling control and data path
> functionality for the CPF VF devices.
>
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Tested-by: Atul Patel <atul.patel@intel.com>
> Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> ---
A few minor comments inline below.
/Bruce
> drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
> drivers/net/intel/cpfl/cpfl_ethdev.h | 1 +
> drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
> drivers/net/intel/idpf/idpf_common_device.h | 1 +
> 4 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c
> b/drivers/net/intel/cpfl/cpfl_ethdev.c
> index 6d7b23ad7a..d6227c99b5 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
> @@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext
> *adapter)
>
> switch (mbx_op) {
> case idpf_mbq_opc_send_msg_to_peer_pf:
> + case idpf_mbq_opc_send_msg_to_peer_drv:
> if (vc_op == VIRTCHNL2_OP_EVENT) {
> cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
> ctlq_msg.data_len);
> @@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
> { .vendor_id = 0, /* sentinel */ },
> };
>
> +static const struct rte_pci_id pci_id_vcpf_map[] = {
> + { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
> + { .vendor_id = 0, /* sentinel */ },
> +};
> +
> static struct cpfl_adapter_ext *
> cpfl_find_adapter_ext(struct rte_pci_device *pci_dev) { @@ -2866,6
> +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
> .remove = cpfl_pci_remove,
> };
>
> +static struct rte_pci_driver rte_vcpf_pmd = {
> + .id_table = pci_id_vcpf_map,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> + RTE_PCI_DRV_PROBE_AGAIN,
> + .probe = cpfl_pci_probe,
> + .remove = cpfl_pci_remove,
> +};
> +
> /**
> * Driver initialization routine.
> * Invoked once at EAL init time.
> @@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
> RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> +RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "* igb_uio | vfio-pci");
Minor question - do you know if this works with uio_pci_generic, or has it been tested? With igb_uio largely unmaintained right now, it would be good to be able to recommend the in-tree uio driver if vfio is not an option.
>> No, it doesn't work with uio_pci_generic; I was getting DMA transaction failures when I tried to use it. I will update the macro to list only vfio-pci as a KMOD_DEP.
> RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> CPFL_TX_SINGLE_Q "=<0|1> "
> CPFL_RX_SINGLE_Q "=<0|1> "
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h
> b/drivers/net/intel/cpfl/cpfl_ethdev.h
> index d4e1176ab1..2cfcdd6206 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
> @@ -60,6 +60,7 @@
>
> /* Device IDs */
> #define IDPF_DEV_ID_CPF 0x1453
> +#define IXD_DEV_ID_VCPF 0x1203
> #define VIRTCHNL2_QUEUE_GROUP_P2P 0x100
>
I see the same device id added twice, once in cpfl and once in idpf drivers. Can the cpfl driver re-use the definition from idpf_common_device and save duplication?
> Sure, will address this in v4.
> #define CPFL_HOST_ID_NUM 2
> diff --git a/drivers/net/intel/idpf/idpf_common_device.c
> b/drivers/net/intel/idpf/idpf_common_device.c
> index ff1fbcd2b4..8c637a2fb6 100644
> --- a/drivers/net/intel/idpf/idpf_common_device.c
> +++ b/drivers/net/intel/idpf/idpf_common_device.c
> @@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
> struct idpf_ctlq_info *ctlq;
> int ret = 0;
>
> - if (hw->device_id == IDPF_DEV_ID_SRIOV)
> + if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id ==
> +IXD_DEV_ID_VCPF)
> ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
> else
> ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info); @@ -389,7
> +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
> struct idpf_hw *hw = &adapter->hw;
> int ret;
>
> - if (hw->device_id == IDPF_DEV_ID_SRIOV) {
> + if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id ==
> +IXD_DEV_ID_VCPF) {
> ret = idpf_check_vf_reset_done(hw);
> } else {
> idpf_reset_pf(hw);
> diff --git a/drivers/net/intel/idpf/idpf_common_device.h
> b/drivers/net/intel/idpf/idpf_common_device.h
> index 5f3e4a4fcf..d536ce7e15 100644
> --- a/drivers/net/intel/idpf/idpf_common_device.h
> +++ b/drivers/net/intel/idpf/idpf_common_device.h
> @@ -11,6 +11,7 @@
> #include "idpf_common_logs.h"
>
> #define IDPF_DEV_ID_SRIOV 0x145C
> +#define IXD_DEV_ID_VCPF 0x1203
>
> #define IDPF_RSS_KEY_LEN 52
>
> --
> 2.37.3
>
^ permalink raw reply [flat|nested] 35+ messages in thread
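On the duplicated define: a sketch of what the cpfl_ethdev.h hunk could
look like after the cleanup, assuming the header already pulls in
idpf_common_device.h (which patch 1 makes the home of IXD_DEV_ID_VCPF):

 /* Device IDs */
 #define IDPF_DEV_ID_CPF 0x1453
-#define IXD_DEV_ID_VCPF 0x1203
+/* IXD_DEV_ID_VCPF (0x1203) is inherited from idpf_common_device.h */
 #define VIRTCHNL2_QUEUE_GROUP_P2P 0x100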
* RE: [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling
2025-09-29 12:32 ` Bruce Richardson
2025-09-29 14:39 ` Stephen Hemminger
@ 2025-09-29 18:55 ` Shetty, Praveen
1 sibling, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-29 18:55 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Singh, Aman Deep, dev, Shukla, Dhananjay, Patel, Atul
On Tue, Sep 23, 2025 at 02:54:53PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> This patch will add the jumbo packets handling in the
> idpf_dp_splitq_recv_pkts function.
>
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
> ---
One small comment inline below.
/Bruce
> drivers/net/intel/idpf/idpf_common_rxtx.c | 50
> ++++++++++++++++++-----
> 1 file changed, 40 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c
> b/drivers/net/intel/idpf/idpf_common_rxtx.c
> index eb25b091d8..412aff8f5f 100644
> --- a/drivers/net/intel/idpf/idpf_common_rxtx.c
> +++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
> @@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
> volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
> uint16_t pktlen_gen_bufq_id;
> - struct idpf_rx_queue *rxq;
> + struct idpf_rx_queue *rxq = rx_queue;
> const uint32_t *ptype_tbl;
> uint8_t status_err0_qw1;
> struct idpf_adapter *ad;
> + struct rte_mbuf *first_seg = rxq->pkt_first_seg;
> + struct rte_mbuf *last_seg = rxq->pkt_last_seg;
> struct rte_mbuf *rxm;
> uint16_t rx_id_bufq1;
> uint16_t rx_id_bufq2;
> @@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct
> rte_mbuf **rx_pkts,
>
> pktlen_gen_bufq_id =
> rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
> + status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
> gen_id = (pktlen_gen_bufq_id &
> VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
> VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
> @@ -697,16 +700,39 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rxm->pkt_len = pkt_len;
> rxm->data_len = pkt_len;
> rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +
> + /*
> + * If this is the first buffer of the received packet, set the
> + * pointer to the first mbuf of the packet and initialize its
> + * context. Otherwise, update the total length and the number
> + * of segments of the current scattered packet, and update the
> + * pointer to the last mbuf of the current packet.
> + */
> + if (!first_seg) {
> + first_seg = rxm;
> + first_seg->nb_segs = 1;
> + first_seg->pkt_len = pkt_len;
> + } else {
> + first_seg->pkt_len =
> + (uint16_t)(first_seg->pkt_len +
> + pkt_len);
Since we allow 100 characters per line, does this line really need to be split into 3? [I realise this is a copy-paste from other drivers, but we can clean it up as new code here]
> thanks, will address this in v4.
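> (in v4 this becomes the single line:
> first_seg->pkt_len = (uint16_t)(first_seg->pkt_len + pkt_len);)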
> + first_seg->nb_segs++;
> + last_seg->next = rxm;
> + }
> +
> + if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
> + last_seg = rxm;
> + continue;
> + }
> +
> rxm->next = NULL;
> - rxm->nb_segs = 1;
> - rxm->port = rxq->port_id;
> - rxm->ol_flags = 0;
> - rxm->packet_type =
> + first_seg->port = rxq->port_id;
> + first_seg->ol_flags = 0;
> + first_seg->packet_type =
> ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
> VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
> VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
> -
> - status_err0_qw1 = rx_desc->status_err0_qw1;
> + status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
> pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
> pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
> if (idpf_timestamp_dynflag > 0 &&
> @@ -719,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> *RTE_MBUF_DYNFIELD(rxm,
> idpf_timestamp_dynfield_offset,
> rte_mbuf_timestamp_t *) = ts_ns;
> - rxm->ol_flags |= idpf_timestamp_dynflag;
> + first_seg->ol_flags |= idpf_timestamp_dynflag;
> }
>
> - rxm->ol_flags |= pkt_flags;
> + first_seg->ol_flags |= pkt_flags;
>
> - rx_pkts[nb_rx++] = rxm;
> + rx_pkts[nb_rx++] = first_seg;
> +
> + first_seg = NULL;
> }
>
> if (nb_rx > 0) {
> rxq->rx_tail = rx_id;
> + rxq->pkt_first_seg = first_seg;
> + rxq->pkt_last_seg = last_seg;
> if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
> rxq->bufq1->rx_next_avail = rx_id_bufq1;
> if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
> --
> 2.37.3
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v3 3/4] net/intel: add config queue support to vCPF
2025-09-29 13:40 ` Bruce Richardson
@ 2025-09-29 19:53 ` Shetty, Praveen
2025-09-30 7:50 ` Bruce Richardson
0 siblings, 1 reply; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-29 19:53 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Singh, Aman Deep, dev, Shukla, Dhananjay, Patel, Atul
On Tue, Sep 23, 2025 at 02:54:54PM +0200, Shetty, Praveen wrote:
> From: Praveen Shetty <praveen.shetty@intel.com>
>
> A "configuration queue" is a software term to denote a hardware
> mailbox queue dedicated to NSS programming.
> While the hardware does not have a construct of a "configuration
> queue", software does to state clearly the distinction between a queue
> software dedicates to regular mailbox processing (e.g. CPChnl or
> Virtchnl) and a queue software dedicates to NSS programming (e.g.
> SEM/LEM rule programming).
>
Please provide expansions or clarifications for the acronyms used in the commit message, so that the commit log is understandable for those unaware of what the NSS is, or what SEM/LEM refers to. As far as I know, these are not generally known terms in the industry.
>> Sure - will address this in v4.
Also, you say that the hardware doesn't have a config queue, but software does - I think that needs a bit of explanation as to what exactly the patch is doing/implementing? How is software providing a special config queue if the facility is not provided by HW.
>> From the HW perspective, both mailbox and the config queues are "control" queues.
>> For HW, the "opcode" in the queue descriptor is one of the key differentiating factors between mailbox queues and config queues (the operation code differs between the two).
>> Mailbox queues are used for Virtchnl and CPChnl communication between the driver and the FW.
>> Config queues are used for programming the FXP (Flexible Packet Processor) pipeline.
>> This patch requests queues from the FW using the add_queues virtchnl message and configures them as config queues.
>> The vCPF driver will then use these config queues to program the FXP pipeline using rte_flow.
>> will add this information in the v4.
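>> Roughly, the request this patch sends (see vcpf_add_queues in cpfl_vchnl.c) looks like:
>>
>> struct virtchnl2_add_queues add_cfgq;
>>
>> memset(&add_cfgq, 0, sizeof(add_cfgq));
>> add_cfgq.num_tx_q = rte_cpu_to_le_16(1); /* one Tx config queue */
>> add_cfgq.num_rx_q = rte_cpu_to_le_16(1); /* one Rx config queue */
>> add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX; /* mailbox queue index field added to virtchnl2_add_queues in this series */
>> add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
>> /* executed with args.ops = VIRTCHNL2_OP_ADD_QUEUES via idpf_vc_cmd_execute() */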
> Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
> Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
> Tested-by: Atul Patel <atul.patel@intel.com>
> ---
Couple of small comments inline below.
/Bruce
> drivers/net/intel/cpfl/cpfl_ethdev.c | 274 +++++++++++++++---
> drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
> drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
> drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
> drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
> drivers/net/intel/idpf/idpf_common_device.h | 2 +
> drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
> drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
> 8 files changed, 449 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c
> b/drivers/net/intel/cpfl/cpfl_ethdev.c
> index d6227c99b5..c411a2a024 100644
> --- a/drivers/net/intel/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
> @@ -29,6 +29,9 @@
> #define CPFL_FLOW_PARSER "flow_parser"
> #endif
>
> +#define VCPF_FID 0
> +#define CPFL_FID 6
> +
> rte_spinlock_t cpfl_adapter_lock;
> /* A list for all adapters, one adapter matches one PCI device */
> struct cpfl_adapter_list cpfl_adapter_list;
> @@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
> }
>
> /* ignore if it is ctrl vport */
> - if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
> + if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
> + adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
> return;
>
> vport = cpfl_find_vport(adapter, vc_event->vport_id);
> @@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
> {
> int i, ret;
>
> - for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
> + for (i = 0; i < adapter->num_tx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[0].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
> +
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
> return ret;
> }
> }
>
> - for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
> - VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> + for (i = 0; i < adapter->num_rx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[1].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> +
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
> return ret;
> @@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
> }
>
> return 0;
> +
> }
>
> static int
> @@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
> return ret;
> }
>
> - for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
> + for (i = 0; i < adapter->num_tx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[0].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to enable Tx config queue."); @@ -1950,8
> +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
> }
> }
>
> - for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
> - ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
> + for (i = 0; i < adapter->num_rx_cfgq; i++) {
> + if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
> + ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
> + adapter->cfgq_info[1].id,
> + VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
> + else
> + ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
> VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
> if (ret) {
> PMD_DRV_LOG(ERR, "Fail to enable Rx config queue."); @@ -1971,14
> +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
>
> create_cfgq_info = adapter->cfgq_info;
>
> - for (i = 0; i < CPFL_CFGQ_NUM; i++) {
> - if (adapter->ctlqp[i])
> + for (i = 0; i < adapter->num_cfgq; i++) {
> + if (adapter->ctlqp[i]) {
> cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
> + adapter->ctlqp[i] = NULL;
> + }
> if (create_cfgq_info[i].ring_mem.va)
> idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
> if (create_cfgq_info[i].buf_mem.va)
> idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
> }
> + if (adapter->ctlqp) {
> + rte_free(adapter->ctlqp);
> + adapter->ctlqp = NULL;
> + }
> }
>
> static int
> @@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
> int ret = 0;
> int i = 0;
>
> - for (i = 0; i < CPFL_CFGQ_NUM; i++) {
> + adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
> + sizeof(struct idpf_ctlq_info *),
> + RTE_CACHE_LINE_SIZE);
> +
> + if (!adapter->ctlqp) {
> + PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < adapter->num_cfgq; i++) {
> cfg_cq = NULL;
> ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
> &adapter->cfgq_info[i],
> @@ -2007,6 +2049,62 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
> return ret;
> }
>
> +static
> +int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
> +{
> + struct virtchnl2_add_queues *add_q =
> + (struct virtchnl2_add_queues *)adapter->addq_recv_info;
> + struct vcpf_cfg_queue *cfgq;
> + struct virtchnl2_queue_reg_chunk *q_chnk;
> + u16 rx, tx, num_chunks, num_q, struct_size;
> + u32 q_id, q_type;
> +
> + rx = 0; tx = 0;
> +
> + cfgq = rte_zmalloc("cfgq", adapter->num_cfgq *
> + sizeof(struct vcpf_cfg_queue),
> + RTE_CACHE_LINE_SIZE);
> +
I suspect you can probably fit both sides of the multiply on a single line here, and still be within 100 chars. That will make the code slightly easier to read.
>> Thanks, will address this in v4.
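>> e.g. in v4 this becomes:
>>
>> cfgq = rte_zmalloc("cfgq", adapter->num_cfgq * sizeof(struct vcpf_cfg_queue),
>> RTE_CACHE_LINE_SIZE);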
> + if (!cfgq) {
> + PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
> + return -ENOMEM;
> + }
> +
> + struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
> + adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
Missing check for a failed zmalloc call.
>> thanks, will address this in v4.
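>> e.g. v4 adds the check right after the allocation:
>>
>> if (!adapter->cfgq_in.cfgq_add) {
>> PMD_DRV_LOG(ERR, "Failed to allocate memory for add_q");
>> return -ENOMEM;
>> }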
> + rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
> +
> + num_chunks = add_q->chunks.num_chunks;
> + for (u16 i = 0; i < num_chunks; i++) {
> + num_q = add_q->chunks.chunks[i].num_queues;
> + q_chnk = &add_q->chunks.chunks[i];
> + for (u16 j = 0; j < num_q; j++) {
> + if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
> + break;
> + q_id = q_chnk->start_queue_id + j;
> + q_type = q_chnk->type;
> + if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
> + cfgq[0].qid = q_id;
> + cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
> + cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
> + q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
> + tx++;
> + } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
> + cfgq[1].qid = q_id;
> + cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
> + cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
> + q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
> + rx++;
> + }
> + }
> + }
> +
<snip>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v3 3/4] net/intel: add config queue support to vCPF
2025-09-29 19:53 ` Shetty, Praveen
@ 2025-09-30 7:50 ` Bruce Richardson
2025-09-30 8:31 ` Shetty, Praveen
0 siblings, 1 reply; 35+ messages in thread
From: Bruce Richardson @ 2025-09-30 7:50 UTC (permalink / raw)
To: Shetty, Praveen; +Cc: Singh, Aman Deep, dev, Shukla, Dhananjay, Patel, Atul
On Mon, Sep 29, 2025 at 08:53:13PM +0100, Shetty, Praveen wrote:
>
> On Tue, Sep 23, 2025 at 02:54:54PM +0200, Shetty, Praveen wrote:
> > From: Praveen Shetty <praveen.shetty@intel.com>
> >
> > A "configuration queue" is a software term to denote a hardware
> > mailbox queue dedicated to NSS programming.
> > While the hardware does not have a construct of a "configuration
> > queue", software does to state clearly the distinction between a queue
> > software dedicates to regular mailbox processing (e.g. CPChnl or
> > Virtchnl) and a queue software dedicates to NSS programming (e.g.
> > SEM/LEM rule programming).
> >
>
> Please provide expansions or clarifications for the acronyms used in the commit message, so that the commit log is understandable for those unaware of what the NSS is, or what SEM/LEM refers to. As far as I know, these are not generally known terms in the industry.
> >> Sure - will address this in v4.
>
> Also, you say that the hardware doesn't have a config queue, but software does - I think that needs a bit of explanation as to what exactly the patch is doing/implementing? How is software providing a special config queue if the facility is not provided by HW.
>
> >> From the HW perspective, both mailbox and the config queues are "control" queues.
> >> For HW, the "opcode" in the queue descriptor is one of the key differentiating factors between mailbox queues and config queues (the operation code differs between the two).
> >> Mailbox queues are used for Virtchnl and CPChnl communication between the driver and the FW.
> >> Config queues are used for programming the FXP (Flexible Packet Processor) pipeline.
> >> This patch requests queues from the FW using the add_queues virtchnl message and configures them as config queues.
> >> The vCPF driver will then use these config queues to program the FXP pipeline using rte_flow.
> >> will add this information in the v4.
>
Please provide more details like this in the revised commit log. Doesn't
need to be fully this, but maybe a summary of it.
Thanks,
/Bruce
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v3 3/4] net/intel: add config queue support to vCPF
2025-09-30 7:50 ` Bruce Richardson
@ 2025-09-30 8:31 ` Shetty, Praveen
0 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 8:31 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Singh, Aman Deep, dev, Shukla, Dhananjay, Patel, Atul
On Mon, Sep 29, 2025 at 08:53:13PM +0100, Shetty, Praveen wrote:
>
> On Tue, Sep 23, 2025 at 02:54:54PM +0200, Shetty, Praveen wrote:
> > From: Praveen Shetty <praveen.shetty@intel.com>
> >
> > A "configuration queue" is a software term to denote a hardware
> > mailbox queue dedicated to NSS programming.
> > While the hardware does not have a construct of a "configuration
> > queue", software does to state clearly the distinction between a
> > queue software dedicates to regular mailbox processing (e.g. CPChnl
> > or
> > Virtchnl) and a queue software dedicates to NSS programming (e.g.
> > SEM/LEM rule programming).
> >
>
> Please provide expansions or clarifications for the acronyms used in the commit message, so that the commit log is understandable for those unaware of what the NSS is, or what SEM/LEM refers to. As far as I know, these are not generally known terms in the industry.
> >> Sure - will address this in v4.
>
> Also, you say that the hardware doesn't have a config queue, but software does - I think that needs a bit of explanation as to what exactly the patch is doing/implementing? How is software providing a special config queue if the facility is not provided by HW.
>
> >> From the HW perspective, both mailbox and the config queues are "control" queues.
> >> For HW, the "opcode" in the queue descriptor is one of the key differentiating factors between mailbox queues and config queues (the operation code differs between the two).
> >> Mailbox queues are used for Virtchnl and CPChnl communication between the driver and the FW.
> >> Config queues are used for programming the FXP (Flexible Packet Processor) pipeline.
> >> This patch requests queues from the FW using the add_queues virtchnl message and configures them as config queues.
> >> The vCPF driver will then use these config queues to program the FXP pipeline using rte_flow.
> >> will add this information in the v4.
>
Please provide more details like this in the revised commit log. Doesn't need to be fully this, but maybe a summary of it.
>> Sure, thanks Bruce!
Thanks,
/Bruce
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 0/4] add vcpf pmd support
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-30 13:55 ` Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 1/4] net/intel: add vCPF PMD support Shetty, Praveen
` (3 more replies)
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
2 siblings, 4 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 13:55 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev
Virtual Control Plane Function (vCPF) is an SR-IOV Virtual Function of
the CPF (PF) device. vCPF is used to support multiple control plane functions.
This patchset extends the CPFL PMD to support the new vCPF device.
In this implementation, the CPFL and vCPF devices share most of the
initialization routine and the common data path implementation, which
eliminates code duplication and improves the maintainability of the driver code.
---
v4:
- addressed review comments
v3:
- fixed cpchnl2_func_type enum for PF device
v2:
- fixed test case failure
---
Praveen Shetty (4):
net/intel: add vCPF PMD support
net/idpf: add splitq jumbo packet handling
net/intel: add config queue support to vCPF
net/cpfl: add cpchnl get vport info support
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 +
drivers/net/intel/cpfl/cpfl_ethdev.c | 356 ++++++++++++++++--
drivers/net/intel/cpfl/cpfl_ethdev.h | 108 +++++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.c | 4 +-
drivers/net/intel/idpf/idpf_common_device.h | 3 +
drivers/net/intel/idpf/idpf_common_rxtx.c | 48 ++-
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 ++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
11 files changed, 634 insertions(+), 83 deletions(-)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 1/4] net/intel: add vCPF PMD support
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-30 13:55 ` Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
` (2 subsequent siblings)
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 13:55 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Atul Patel, Dhananjay Shukla
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds registration support for a new vCPF PMD.
The vCPF PMD is responsible for enabling control and data path
functionality for the CPF VF devices.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
drivers/net/intel/idpf/idpf_common_device.h | 1 +
3 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6d7b23ad7a..6aa0971941 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
switch (mbx_op) {
case idpf_mbq_opc_send_msg_to_peer_pf:
+ case idpf_mbq_opc_send_msg_to_peer_drv:
if (vc_op == VIRTCHNL2_OP_EVENT) {
cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
ctlq_msg.data_len);
@@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_pci_id pci_id_vcpf_map[] = {
+ { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
static struct cpfl_adapter_ext *
cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
{
@@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
.remove = cpfl_pci_remove,
};
+static struct rte_pci_driver rte_vcpf_pmd = {
+ .id_table = pci_id_vcpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = cpfl_pci_probe,
+ .remove = cpfl_pci_remove,
+};
+
/**
* Driver initialization routine.
* Invoked once at EAL init time.
@@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
CPFL_TX_SINGLE_Q "=<0|1> "
CPFL_RX_SINGLE_Q "=<0|1> "
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..8c637a2fb6 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
struct idpf_ctlq_info *ctlq;
int ret = 0;
- if (hw->device_id == IDPF_DEV_ID_SRIOV)
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
else
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
@@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
struct idpf_hw *hw = &adapter->hw;
int ret;
- if (hw->device_id == IDPF_DEV_ID_SRIOV) {
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
ret = idpf_check_vf_reset_done(hw);
} else {
idpf_reset_pf(hw);
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 5f3e4a4fcf..d536ce7e15 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -11,6 +11,7 @@
#include "idpf_common_logs.h"
#define IDPF_DEV_ID_SRIOV 0x145C
+#define IXD_DEV_ID_VCPF 0x1203
#define IDPF_RSS_KEY_LEN 52
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 2/4] net/idpf: add splitq jumbo packet handling
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-30 13:55 ` Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 13:55 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds jumbo packet handling to the
idpf_dp_splitq_recv_pkts function.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/idpf/idpf_common_rxtx.c | 48 ++++++++++++++++++-----
1 file changed, 38 insertions(+), 10 deletions(-)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..0a06aed92f 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -623,10 +623,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
- struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *rxq = rx_queue;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
struct idpf_adapter *ad;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -659,6 +661,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pktlen_gen_bufq_id =
rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
gen_id = (pktlen_gen_bufq_id &
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
@@ -697,16 +700,37 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = pkt_len;
rxm->data_len = pkt_len;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /*
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len = (uint16_t)(first_seg->pkt_len + pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
rxm->next = NULL;
- rxm->nb_segs = 1;
- rxm->port = rxq->port_id;
- rxm->ol_flags = 0;
- rxm->packet_type =
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+ first_seg->packet_type =
ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
- status_err0_qw1 = rx_desc->status_err0_qw1;
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
if (idpf_timestamp_dynflag > 0 &&
@@ -719,16 +743,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*RTE_MBUF_DYNFIELD(rxm,
idpf_timestamp_dynfield_offset,
rte_mbuf_timestamp_t *) = ts_ns;
- rxm->ol_flags |= idpf_timestamp_dynflag;
+ first_seg->ol_flags |= idpf_timestamp_dynflag;
}
- rxm->ol_flags |= pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
- rx_pkts[nb_rx++] = rxm;
+ rx_pkts[nb_rx++] = first_seg;
+
+ first_seg = NULL;
}
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
rxq->bufq1->rx_next_avail = rx_id_bufq1;
if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 3/4] net/intel: add config queue support to vCPF
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-30 13:55 ` Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 13:55 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
A "configuration queue" is a software term to denote
a hardware mailbox queue dedicated to FXP (Flexible packet processor)
programming.While the hardware does not have a construct of a
"configuration queue", software does to state clearly
the distinction between a queue software dedicates to
regular mailbox processing (e.g. CPChnl or Virtchnl)
and a queue software dedicates for programming the FXP
Pipeline.From the hardware’s viewpoint, both mailbox and
configuration queues are treated as "control" queues,
with the main distinction being the "opcode" in their
descriptors.This patch will requests queues from the
firmware using an add_queue Virtchnl message and sets
them up as config queues.The vCPF driver then uses these
config queues to program the FXP pipeline via rte_flow.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 276 +++++++++++++++---
drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.h | 2 +
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
8 files changed, 451 insertions(+), 55 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6aa0971941..22f3859dca 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -29,6 +29,9 @@
#define CPFL_FLOW_PARSER "flow_parser"
#endif
+#define VCPF_FID 0
+#define CPFL_FID 6
+
rte_spinlock_t cpfl_adapter_lock;
/* A list for all adapters, one adapter matches one PCI device */
struct cpfl_adapter_list cpfl_adapter_list;
@@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
}
/* ignore if it is ctrl vport */
- if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
+ adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
return;
vport = cpfl_find_vport(adapter, vc_event->vport_id);
@@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
int i, ret;
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
return ret;
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
- VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
return ret;
@@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
}
return 0;
+
}
static int
@@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
@@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
@@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
- if (adapter->ctlqp[i])
+ for (i = 0; i < adapter->num_cfgq; i++) {
+ if (adapter->ctlqp[i]) {
cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
+ adapter->ctlqp[i] = NULL;
+ }
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->ctlqp) {
+ rte_free(adapter->ctlqp);
+ adapter->ctlqp = NULL;
+ }
}
static int
@@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
int ret = 0;
int i = 0;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
+ sizeof(struct idpf_ctlq_info *),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->ctlqp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->num_cfgq; i++) {
cfg_cq = NULL;
ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
&adapter->cfgq_info[i],
@@ -2007,6 +2049,64 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
+static
+int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)adapter->addq_recv_info;
+ struct vcpf_cfg_queue *cfgq;
+ struct virtchnl2_queue_reg_chunk *q_chnk;
+ u16 rx, tx, num_chunks, num_q, struct_size;
+ u32 q_id, q_type;
+
+ rx = 0; tx = 0;
+
+ cfgq = rte_zmalloc("cfgq", adapter->num_cfgq * sizeof(struct vcpf_cfg_queue),
+ RTE_CACHE_LINE_SIZE);
+ if (!cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
+ return -ENOMEM;
+ }
+
+ struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
+ adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
+ if (!adapter->cfgq_in.cfgq_add) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for add_q");
+ return -ENOMEM;
+ }
+ rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
+
+ num_chunks = add_q->chunks.num_chunks;
+ for (u16 i = 0; i < num_chunks; i++) {
+ num_q = add_q->chunks.chunks[i].num_queues;
+ q_chnk = &add_q->chunks.chunks[i];
+ for (u16 j = 0; j < num_q; j++) {
+ if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
+ break;
+ q_id = q_chnk->start_queue_id + j;
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
+ cfgq[0].qid = q_id;
+ cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
+ tx++;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
+ cfgq[1].qid = q_id;
+ cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
+ rx++;
+ }
+ }
+ }
+
+ adapter->cfgq_in.cfgq = cfgq;
+ adapter->cfgq_in.num_cfgq = adapter->num_cfgq;
+
+ return 0;
+}
+
#define CPFL_CFGQ_RING_LEN 512
#define CPFL_CFGQ_DESCRIPTOR_SIZE 32
#define CPFL_CFGQ_BUFFER_SIZE 256
@@ -2017,32 +2117,71 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
{
struct cpfl_ctlq_create_info *create_cfgq_info;
struct cpfl_vport *vport;
+ struct vcpf_cfgq_info *cfgq_info = &adapter->cfgq_in;
int i, err;
uint32_t ring_size = CPFL_CFGQ_RING_SIZE * sizeof(struct idpf_ctlq_desc);
uint32_t buf_size = CPFL_CFGQ_RING_SIZE * CPFL_CFGQ_BUFFER_SIZE;
+ uint64_t tx_qtail_start;
+ uint64_t rx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint32_t rx_qtail_spacing;
vport = &adapter->ctrl_vport;
+
+ tx_qtail_start = vport->base.chunks_info.tx_qtail_start;
+ tx_qtail_spacing = vport->base.chunks_info.tx_qtail_spacing;
+ rx_qtail_start = vport->base.chunks_info.rx_qtail_start;
+ rx_qtail_spacing = vport->base.chunks_info.rx_qtail_spacing;
+
+ adapter->cfgq_info = rte_zmalloc("cfgq_info", adapter->num_cfgq *
+ sizeof(struct cpfl_ctlq_create_info),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->cfgq_info) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_info");
+ return -ENOMEM;
+ }
+
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (i % 2 == 0) {
- /* Setup Tx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid + i / 2;
+ /* Setup Tx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_TX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.tx_qtail_start +
- i / 2 * vport->base.chunks_info.tx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = tx_qtail_start +
+ i / 2 * tx_qtail_spacing;
+
} else {
- /* Setup Rx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid + i / 2;
+ /* Setup Rx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_RX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.rx_qtail_start +
- i / 2 * vport->base.chunks_info.rx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = rx_qtail_start +
+ i / 2 * rx_qtail_spacing;
+
+
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem,
buf_size)) {
err = -ENOMEM;
@@ -2050,19 +2189,24 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
}
}
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem,
- ring_size)) {
+ ring_size)) {
err = -ENOMEM;
goto free_mem;
}
}
+
return 0;
free_mem:
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
return err;
}
@@ -2107,7 +2251,10 @@ cpfl_ctrl_path_close(struct cpfl_adapter_ext *adapter)
{
cpfl_stop_cfgqs(adapter);
cpfl_remove_cfgqs(adapter);
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ else
+ vcpf_del_queues(adapter);
}
static int
@@ -2115,22 +2262,39 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
{
int ret;
- ret = cpfl_vc_create_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to create control vport");
- return ret;
- }
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ ret = cpfl_vc_create_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create control vport");
+ return ret;
+ }
- ret = cpfl_init_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to init control vport");
- goto err_init_ctrl_vport;
+ ret = cpfl_init_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init control vport");
+ goto err_init_ctrl_vport;
+ }
+ } else {
+ ret = vcpf_add_queues(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to add queues");
+ return ret;
+ }
+
+ ret = vcpf_save_chunk_in_cfgq(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to save config queue chunk");
+ return ret;
+ }
}
ret = cpfl_cfgq_setup(adapter);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to setup control queues");
- goto err_cfgq_setup;
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ goto err_cfgq_setup;
+ else
+ goto err_del_cfg;
}
ret = cpfl_add_cfgqs(adapter);
@@ -2153,9 +2317,13 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
cpfl_remove_cfgqs(adapter);
err_cfgq_setup:
err_init_ctrl_vport:
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+err_del_cfg:
+ vcpf_del_queues(adapter);
return ret;
+
}
static struct virtchnl2_get_capabilities req_caps = {
@@ -2291,12 +2459,29 @@ get_running_host_id(void)
return host_id;
}
+static uint8_t
+set_config_queue_details(struct cpfl_adapter_ext *adapter, struct rte_pci_addr *pci_addr)
+{
+ if (pci_addr->function == CPFL_FID) {
+ adapter->num_cfgq = CPFL_CFGQ_NUM;
+ adapter->num_rx_cfgq = CPFL_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = CPFL_TX_CFGQ_NUM;
+ } else if (pci_addr->function == VCPF_FID) {
+ adapter->num_cfgq = VCPF_CFGQ_NUM;
+ adapter->num_rx_cfgq = VCPF_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = VCPF_TX_CFGQ_NUM;
+ }
+
+ return 0;
+}
+
static int
cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
struct cpfl_devargs *devargs)
{
struct idpf_adapter *base = &adapter->base;
struct idpf_hw *hw = &base->hw;
+ struct rte_pci_addr *pci_addr = &pci_dev->addr;
int ret = 0;
#ifndef RTE_HAS_JANSSON
@@ -2348,10 +2533,23 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
goto err_vports_alloc;
}
- ret = cpfl_ctrl_path_open(adapter);
+ /* set the number of config queues to be requested */
+ ret = set_config_queue_details(adapter, pci_addr);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to setup control path");
- goto err_create_ctrl_vport;
+ PMD_INIT_LOG(ERR, "Failed to set the config queue details");
+ return -1;
+ }
+
+ if (pci_addr->function == VCPF_FID || pci_addr->function == CPFL_FID) {
+ ret = cpfl_ctrl_path_open(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup control path");
+ if (pci_addr->function == CPFL_FID)
+ goto err_create_ctrl_vport;
+ else
+ return ret;
+ }
+
}
#ifdef RTE_HAS_JANSSON
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index d4e1176ab1..f550bca754 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -89,6 +89,9 @@
#define CPFL_FPCP_CFGQ_TX 0
#define CPFL_FPCP_CFGQ_RX 1
#define CPFL_CFGQ_NUM 8
+#define VCPF_RX_CFGQ_NUM 1
+#define VCPF_TX_CFGQ_NUM 1
+#define VCPF_CFGQ_NUM 2
/* bit[15:14] type
* bit[13] host/accelerator core
@@ -200,6 +203,30 @@ struct cpfl_metadata {
struct cpfl_metadata_chunk chunks[CPFL_META_LENGTH];
};
+/**
+ * struct vcpf_cfg_queue - config queue information
+ * @qid: rx/tx queue id
+ * @qtail_reg_start: rx/tx tail queue register start
+ * @qtail_reg_spacing: rx/tx tail queue register spacing
+ */
+struct vcpf_cfg_queue {
+ u32 qid;
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+};
+
+/**
+ * struct vcpf_cfgq_info - config queue information
+ * @num_cfgq: number of config queues
+ * @cfgq_add: config queue add information
+ * @cfgq: config queue information
+ */
+struct vcpf_cfgq_info {
+ u16 num_cfgq;
+ struct virtchnl2_add_queues *cfgq_add;
+ struct vcpf_cfg_queue *cfgq;
+};
+
struct cpfl_adapter_ext {
TAILQ_ENTRY(cpfl_adapter_ext) next;
struct idpf_adapter base;
@@ -229,8 +256,13 @@ struct cpfl_adapter_ext {
/* ctrl vport and ctrl queues. */
struct cpfl_vport ctrl_vport;
uint8_t ctrl_vport_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
- struct idpf_ctlq_info *ctlqp[CPFL_CFGQ_NUM];
- struct cpfl_ctlq_create_info cfgq_info[CPFL_CFGQ_NUM];
+ struct idpf_ctlq_info **ctlqp;
+ struct cpfl_ctlq_create_info *cfgq_info;
+ struct vcpf_cfgq_info cfgq_in;
+ uint8_t addq_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
+ uint16_t num_cfgq;
+ uint16_t num_rx_cfgq;
+ uint16_t num_tx_cfgq;
uint8_t host_id;
};
@@ -251,6 +283,8 @@ int cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter);
int cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter);
int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
uint32_t size, int batch_size);
+int vcpf_add_queues(struct cpfl_adapter_ext *adapter);
+int vcpf_del_queues(struct cpfl_adapter_ext *adapter);
#define CPFL_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/intel/cpfl/cpfl_vchnl.c b/drivers/net/intel/cpfl/cpfl_vchnl.c
index 7d277a0e8e..9c842b60df 100644
--- a/drivers/net/intel/cpfl/cpfl_vchnl.c
+++ b/drivers/net/intel/cpfl/cpfl_vchnl.c
@@ -106,6 +106,106 @@ cpfl_vc_create_ctrl_vport(struct cpfl_adapter_ext *adapter)
return err;
}
+#define VCPF_CFQ_MB_INDEX 0xFF
+int
+vcpf_add_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues add_cfgq;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&add_cfgq, 0, sizeof(struct virtchnl2_add_queues));
+ u16 num_cfgq = 1;
+
+ add_cfgq.num_tx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.num_rx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX;
+
+ add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ add_cfgq.num_tx_complq = 0;
+ add_cfgq.num_rx_bufq = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_ADD_QUEUES;
+ args.in_args = (uint8_t *)&add_cfgq;
+ args.in_args_size = sizeof(add_cfgq);
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_ADD_QUEUES");
+ return err;
+ }
+
+ rte_memcpy(adapter->addq_recv_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+
+ return err;
+}
+
+int
+vcpf_del_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_del_ena_dis_queues *del_cfgq;
+ u16 num_chunks;
+ struct idpf_cmd_info args;
+ int i, err, size;
+
+ num_chunks = adapter->cfgq_in.cfgq_add->chunks.num_chunks;
+ size = idpf_struct_size(del_cfgq, chunks.chunks, (num_chunks - 1));
+ del_cfgq = rte_zmalloc("del_cfgq", size, 0);
+ if (!del_cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_del_ena_dis_queues");
+ err = -ENOMEM;
+ return err;
+ }
+
+ del_cfgq->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ del_cfgq->chunks.num_chunks = num_chunks;
+
+ /* fill config queue chunk data */
+ for (i = 0; i < num_chunks; i++) {
+ del_cfgq->chunks.chunks[i].type =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].type;
+ del_cfgq->chunks.chunks[i].start_queue_id =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].start_queue_id;
+ del_cfgq->chunks.chunks[i].num_queues =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].num_queues;
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DEL_QUEUES;
+ args.in_args = (uint8_t *)del_cfgq;
+ args.in_args_size = idpf_struct_size(del_cfgq, chunks.chunks,
+ (del_cfgq->chunks.num_chunks - 1));
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ rte_free(del_cfgq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_DEL_QUEUES");
+ return err;
+ }
+
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
+ adapter->cfgq_in.num_cfgq = 0;
+ if (adapter->cfgq_in.cfgq_add) {
+ rte_free(adapter->cfgq_in.cfgq_add);
+ adapter->cfgq_in.cfgq_add = NULL;
+ }
+ if (adapter->cfgq_in.cfgq) {
+ rte_free(adapter->cfgq_in.cfgq);
+ adapter->cfgq_in.cfgq = NULL;
+ }
+ return err;
+}
+
int
cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
{
@@ -116,13 +216,16 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_RX_CFGQ_NUM;
+ num_qs = adapter->num_rx_cfgq;
+
size = sizeof(*vc_rxqs) + (num_qs - 1) *
sizeof(struct virtchnl2_rxq_info);
vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
@@ -131,7 +234,12 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_rxqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_rxqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_rxqs->vport_id = vport->base.vport_id;
+
vc_rxqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
@@ -141,7 +249,8 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
rxq_info->queue_id = adapter->cfgq_info[2 * i + 1].id;
rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
rxq_info->data_buffer_size = adapter->cfgq_info[2 * i + 1].buf_size;
- rxq_info->max_pkt_size = vport->base.max_pkt_len;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF)
+ rxq_info->max_pkt_size = vport->base.max_pkt_len;
rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
rxq_info->ring_len = adapter->cfgq_info[2 * i + 1].len;
@@ -172,13 +281,16 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This txq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This txq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_TX_CFGQ_NUM;
+ num_qs = adapter->num_tx_cfgq;
+
size = sizeof(*vc_txqs) + (num_qs - 1) *
sizeof(struct virtchnl2_txq_info);
vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
@@ -187,7 +299,12 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_txqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_txqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_txqs->vport_id = vport->base.vport_id;
+
vc_txqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
diff --git a/drivers/net/intel/idpf/base/idpf_osdep.h b/drivers/net/intel/idpf/base/idpf_osdep.h
index 7b43df3079..47b95d0da6 100644
--- a/drivers/net/intel/idpf/base/idpf_osdep.h
+++ b/drivers/net/intel/idpf/base/idpf_osdep.h
@@ -361,6 +361,9 @@ idpf_hweight32(u32 num)
#endif
+#define idpf_struct_size(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
enum idpf_mac_type {
IDPF_MAC_UNKNOWN = 0,
IDPF_MAC_PF,
diff --git a/drivers/net/intel/idpf/base/virtchnl2.h b/drivers/net/intel/idpf/base/virtchnl2.h
index cf010c0504..6cfb4f56fa 100644
--- a/drivers/net/intel/idpf/base/virtchnl2.h
+++ b/drivers/net/intel/idpf/base/virtchnl2.h
@@ -1024,7 +1024,8 @@ struct virtchnl2_add_queues {
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
- u8 pad[4];
+ u8 mbx_q_index;
+ u8 pad[3];
struct virtchnl2_queue_reg_chunks chunks;
};
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index d536ce7e15..f962a3f805 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -45,6 +45,8 @@
(sizeof(struct virtchnl2_ptype) + \
(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+#define VCPF_CFGQ_VPORT_ID 0xFFFFFFFF
+
struct idpf_adapter {
struct idpf_hw hw;
struct virtchnl2_version_info virtchnl_version;
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..e927d7415a 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -787,6 +787,44 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
return err;
}
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue_vcpf)
+int
+idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (uint8_t *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_vc_cmd_execute(adapter, &args);
+ if (err != 0)
+ DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
int
idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.h b/drivers/net/intel/idpf/idpf_common_virtchnl.h
index 68cba9111c..90fce65676 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.h
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.h
@@ -76,6 +76,9 @@ __rte_internal
int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
uint32_t type, bool on);
__rte_internal
+int idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on);
+__rte_internal
int idpf_vc_queue_grps_del(struct idpf_vport *vport,
uint16_t num_q_grps,
struct virtchnl2_queue_group_id *qg_ids);
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 4/4] net/cpfl: add cpchnl get vport info support
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
` (2 preceding siblings ...)
2025-09-30 13:55 ` [PATCH v4 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-30 13:55 ` Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 13:55 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
The vCPF only receives relative queue ids from the FW. The
CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used to get the
absolute rx/tx queue ids and the vsi of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO
cpchnl message from the vCPF PMD.
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: Atul Patel <Atul.Patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 ++++
drivers/net/intel/cpfl/cpfl_ethdev.c | 63 +++++++++++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 70 +++++++++++++++++++++-------
3 files changed, 125 insertions(+), 16 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_cpchnl.h b/drivers/net/intel/cpfl/cpfl_cpchnl.h
index 0c9dfcdbf1..c56d3e6cea 100644
--- a/drivers/net/intel/cpfl/cpfl_cpchnl.h
+++ b/drivers/net/intel/cpfl/cpfl_cpchnl.h
@@ -140,6 +140,14 @@ enum cpchnl2_func_type {
CPCHNL2_FTYPE_LAN_MAX
};
+/**
+ * @brief function types
+ */
+enum vcpf_cpchnl2_func_type {
+ VCPF_CPCHNL2_FTYPE_LAN_PF = 0,
+ VCPF_CPCHNL2_FTYPE_LAN_VF = 1,
+};
+
/**
* @brief containing vport id & type
*/
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 22f3859dca..110678e312 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1902,6 +1902,43 @@ cpfl_dev_alarm_handler(void *param)
rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
}
+static
+int vcpf_save_vport_info_response(struct cpfl_vport *cpfl_vport,
+ struct cpchnl2_get_vport_info_response *response)
+{
+ struct cpchnl2_vport_info *info;
+ struct vcpf_vport_info *vport_info;
+ struct cpchnl2_queue_group_info *qgp;
+ struct cpchnl2_queue_chunk *q_chnk;
+ u16 num_queue_groups;
+ u16 num_chunks;
+ u32 q_type;
+
+ info = &response->info;
+ vport_info = &cpfl_vport->vport_info;
+ vport_info->vport_index = info->vport_index;
+ vport_info->vsi_id = info->vsi_id;
+
+ num_queue_groups = response->queue_groups.num_queue_groups;
+ for (u16 i = 0; i < num_queue_groups; i++) {
+ qgp = &response->queue_groups.groups[i];
+ num_chunks = qgp->chunks.num_chunks;
+ /* rx q and tx q are stored in first 2 chunks */
+ for (u16 j = 0; j < (num_chunks - 2); j++) {
+ q_chnk = &qgp->chunks.chunks[j];
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport_info->abs_start_txq_id = q_chnk->start_queue_id;
+ vport_info->num_tx_q = q_chnk->num_queues;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport_info->abs_start_rxq_id = q_chnk->start_queue_id;
+ vport_info->num_rx_q = q_chnk->num_queues;
+ }
+ }
+ }
+ return 0;
+}
+
static int
cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
@@ -2722,7 +2759,11 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
/* for sending create vport virtchnl msg prepare */
struct virtchnl2_create_vport create_vport_info;
struct virtchnl2_add_queue_groups p2p_queue_grps_info;
+ struct cpchnl2_get_vport_info_response response;
uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
+ struct cpfl_vport_id vi;
+ struct cpchnl2_vport_id v_id;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
int ret = 0;
dev->dev_ops = &cpfl_eth_dev_ops;
@@ -2792,6 +2833,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
cpfl_p2p_queue_grps_del(vport);
}
}
+ /* get the vport info */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+ vi.func_type = VCPF_CPCHNL2_FTYPE_LAN_VF;
+ vi.pf_id = CPFL_HOST0_CPF_ID;
+ vi.vf_id = pci_dev->addr.function;
+
+ v_id.vport_id = cpfl_vport->base.vport_info.info.vport_id;
+ v_id.vport_type = cpfl_vport->base.vport_info.info.vport_type;
+
+ ret = cpfl_cc_vport_info_get(adapter, &v_id, &vi, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to send vport info cpchnl message.");
+ return -1;
+ }
+
+ ret = vcpf_save_vport_info_response(cpfl_vport, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to save cpchnl response.");
+ return -1;
+ }
+ }
return 0;
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index f550bca754..be73e05a0e 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -164,10 +164,20 @@ struct cpfl_itf {
void *data;
};
+struct vcpf_vport_info {
+ u16 vport_index;
+ u16 vsi_id;
+ u32 abs_start_txq_id;
+ u32 num_tx_q;
+ u32 abs_start_rxq_id;
+ u32 num_rx_q;
+};
+
struct cpfl_vport {
struct cpfl_itf itf;
struct idpf_vport base;
struct p2p_queue_chunks_info *p2p_q_chunks_info;
+ struct vcpf_vport_info vport_info;
struct rte_mempool *p2p_mp;
@@ -319,6 +329,7 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
uint32_t vport_id;
int ret;
struct cpfl_vport_id vport_identity;
+ u16 vsi_id = 0;
if (!itf)
return CPFL_INVALID_HW_ID;
@@ -328,24 +339,30 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
return repr->vport_info->vport.info.vsi_id;
} else if (itf->type == CPFL_ITF_TYPE_VPORT) {
- vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
-
- vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
- /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
- vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
- CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
- vport_identity.vf_id = 0;
- vport_identity.vport_id = vport_id;
- ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
- &vport_identity,
- (void **)&info);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "vport id not exist");
- goto err;
+ if (itf->adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
+
+ vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
+ /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
+ vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
+ CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
+ vport_identity.vf_id = 0;
+ vport_identity.vport_id = vport_id;
+ ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
+ &vport_identity,
+ (void **)&info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "vport id not exist");
+ goto err;
+ }
+
+ vsi_id = info->vport.info.vsi_id;
+ } else {
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vsi_id = (uint16_t)((struct cpfl_vport *)itf)->vport_info.vsi_id;
}
-
- return info->vport.info.vsi_id;
}
+ return vsi_id;
err:
return CPFL_INVALID_HW_ID;
@@ -374,4 +391,25 @@ cpfl_get_itf_by_port_id(uint16_t port_id)
return CPFL_DEV_TO_ITF(dev);
}
+
+static inline uint32_t
+vcpf_get_abs_qid(uint16_t port_id, uint32_t queue_type)
+{
+ struct cpfl_itf *itf = cpfl_get_itf_by_port_id(port_id);
+ struct cpfl_vport *vport;
+ if (!itf)
+ return CPFL_INVALID_HW_ID;
+ if (itf->type == CPFL_ITF_TYPE_VPORT) {
+ vport = (void *)itf;
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ switch (queue_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ return vport->vport_info.abs_start_txq_id;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ return vport->vport_info.abs_start_rxq_id;
+ }
+ }
+ }
+ return 0;
+}
#endif /* _CPFL_ETHDEV_H_ */
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v5 0/4] add vcpf pmd support
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-30 18:27 ` Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 1/4] net/intel: add vCPF PMD support Shetty, Praveen
` (3 more replies)
2 siblings, 4 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 18:27 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh; +Cc: dev
Virtual Control Plane Function (vCPF) is an SR-IOV Virtual Function of
the CPF (PF) device. vCPF is used to support multiple control plane
functions. This patchset extends the CPFL PMD to support the new vCPF
device. In this implementation, the CPFL and vCPF devices share most of
the initialization routine and a common data path implementation, which
eliminates code duplication and improves the maintainability of the
driver code.
---
v5:
- fixed merge conflicts
v4:
- addressed review comments
v3:
- fixed cpchnl2_func_type enum for PF device
v2:
- fixed test case failure
---
Praveen Shetty (4):
net/intel: add vCPF PMD support
net/idpf: add splitq jumbo packet handling
net/intel: add config queue support to vCPF
net/cpfl: add cpchnl get vport info support
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 +
drivers/net/intel/cpfl/cpfl_ethdev.c | 356 ++++++++++++++++--
drivers/net/intel/cpfl/cpfl_ethdev.h | 108 +++++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.c | 4 +-
drivers/net/intel/idpf/idpf_common_device.h | 3 +
drivers/net/intel/idpf/idpf_common_rxtx.c | 48 ++-
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 ++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
11 files changed, 634 insertions(+), 83 deletions(-)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v5 1/4] net/intel: add vCPF PMD support
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
@ 2025-09-30 18:27 ` Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
` (2 subsequent siblings)
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 18:27 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Atul Patel, Dhananjay Shukla
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds registration support for the new vCPF PMD.
The vCPF PMD is responsible for enabling control and data path
functionality for CPF VF devices.
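As a quick usage sketch -- assuming the vCPF device is bound to
vfio-pci and passed to EAL -- no application-side changes are needed;
the port shows up through the normal ethdev probe path:
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
int
main(int argc, char **argv)
{
	uint16_t port;
	/* EAL probes all bound devices, including vCPF ports, at init. */
	if (rte_eal_init(argc, argv) < 0)
		return -1;
	RTE_ETH_FOREACH_DEV(port) {
		struct rte_eth_dev_info info;
		if (rte_eth_dev_info_get(port, &info) == 0)
			printf("port %u driver %s\n", port, info.driver_name);
	}
	return rte_eal_cleanup();
}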
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 17 +++++++++++++++++
drivers/net/intel/idpf/idpf_common_device.c | 4 ++--
drivers/net/intel/idpf/idpf_common_device.h | 1 +
3 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6d7b23ad7a..6aa0971941 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1854,6 +1854,7 @@ cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
switch (mbx_op) {
case idpf_mbq_opc_send_msg_to_peer_pf:
+ case idpf_mbq_opc_send_msg_to_peer_drv:
if (vc_op == VIRTCHNL2_OP_EVENT) {
cpfl_handle_vchnl_event_msg(adapter, adapter->base.mbx_resp,
ctlq_msg.data_len);
@@ -2610,6 +2611,11 @@ static const struct rte_pci_id pci_id_cpfl_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_pci_id pci_id_vcpf_map[] = {
+ { RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IXD_DEV_ID_VCPF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
static struct cpfl_adapter_ext *
cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
{
@@ -2866,6 +2872,14 @@ static struct rte_pci_driver rte_cpfl_pmd = {
.remove = cpfl_pci_remove,
};
+static struct rte_pci_driver rte_vcpf_pmd = {
+ .id_table = pci_id_vcpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = cpfl_pci_probe,
+ .remove = cpfl_pci_remove,
+};
+
/**
* Driver initialization routine.
* Invoked once at EAL init time.
@@ -2874,6 +2888,9 @@ static struct rte_pci_driver rte_cpfl_pmd = {
RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PCI(net_vcpf, rte_vcpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_vcpf, pci_id_vcpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_vcpf, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
CPFL_TX_SINGLE_Q "=<0|1> "
CPFL_RX_SINGLE_Q "=<0|1> "
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..8c637a2fb6 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -130,7 +130,7 @@ idpf_init_mbx(struct idpf_hw *hw)
struct idpf_ctlq_info *ctlq;
int ret = 0;
- if (hw->device_id == IDPF_DEV_ID_SRIOV)
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF)
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, vf_ctlq_info);
else
ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, pf_ctlq_info);
@@ -389,7 +389,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
struct idpf_hw *hw = &adapter->hw;
int ret;
- if (hw->device_id == IDPF_DEV_ID_SRIOV) {
+ if (hw->device_id == IDPF_DEV_ID_SRIOV || hw->device_id == IXD_DEV_ID_VCPF) {
ret = idpf_check_vf_reset_done(hw);
} else {
idpf_reset_pf(hw);
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 3b95d519c6..4766e5b696 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -11,6 +11,7 @@
#include "idpf_common_logs.h"
#define IDPF_DEV_ID_SRIOV 0x145C
+#define IXD_DEV_ID_VCPF 0x1203
#define IDPF_RSS_KEY_LEN 52
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v5 2/4] net/idpf: add splitq jumbo packet handling
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 1/4] net/intel: add vCPF PMD support Shetty, Praveen
@ 2025-09-30 18:27 ` Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 18:27 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, atulpatel261194
From: Praveen Shetty <praveen.shetty@intel.com>
This patch adds jumbo (multi-segment) packet handling to the
idpf_dp_splitq_recv_pkts function.
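For illustration, a minimal self-contained sketch of the
segment-chaining state machine the patch adds -- generic C with a
stand-in struct, not the real rte_mbuf API:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
/* Minimal stand-in for an mbuf segment (illustrative). */
struct seg {
	struct seg *next;
	uint16_t data_len;
	uint32_t pkt_len;  /* valid on the first segment only */
	uint16_t nb_segs;  /* valid on the first segment only */
};
/*
 * Chain one received buffer onto the packet under reassembly.
 * Returns the completed packet when 'eof' is set, NULL otherwise.
 */
static struct seg *
chain_rx_buf(struct seg **first, struct seg **last, struct seg *rxm,
	     bool eof)
{
	if (*first == NULL) {
		/* first buffer of the packet: initialise its context */
		*first = rxm;
		rxm->nb_segs = 1;
		rxm->pkt_len = rxm->data_len;
	} else {
		/* continuation: extend totals, link after the last segment */
		(*first)->pkt_len += rxm->data_len;
		(*first)->nb_segs++;
		(*last)->next = rxm;
	}
	if (!eof) {
		*last = rxm;
		return NULL;
	}
	rxm->next = NULL;
	struct seg *pkt = *first;
	*first = NULL;
	return pkt;
}
int
main(void)
{
	struct seg a = { .data_len = 2048 };
	struct seg b = { .data_len = 2048 };
	struct seg c = { .data_len = 1000 };
	struct seg *first = NULL, *last = NULL, *pkt;
	chain_rx_buf(&first, &last, &a, false);
	chain_rx_buf(&first, &last, &b, false);
	pkt = chain_rx_buf(&first, &last, &c, true);
	printf("pkt_len=%u nb_segs=%u\n", pkt->pkt_len, pkt->nb_segs);
	return 0;
}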
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: atulpatel261194 <Atul.Patel@intel.com>
---
drivers/net/intel/idpf/idpf_common_rxtx.c | 48 ++++++++++++++++++-----
1 file changed, 38 insertions(+), 10 deletions(-)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index a2b8c372d6..87a87ed41a 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -625,10 +625,12 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
- struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *rxq = rx_queue;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
struct idpf_adapter *ad;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -661,6 +663,7 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pktlen_gen_bufq_id =
rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
gen_id = (pktlen_gen_bufq_id &
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
@@ -699,16 +702,37 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = pkt_len;
rxm->data_len = pkt_len;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /*
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len = (uint16_t)(first_seg->pkt_len + pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ if (!(status_err0_qw1 & (1 << VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
rxm->next = NULL;
- rxm->nb_segs = 1;
- rxm->port = rxq->port_id;
- rxm->ol_flags = 0;
- rxm->packet_type =
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+ first_seg->packet_type =
ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
- status_err0_qw1 = rx_desc->status_err0_qw1;
+ status_err0_qw1 = rte_le_to_cpu_16(rx_desc->status_err0_qw1);
pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
if (idpf_timestamp_dynflag > 0 &&
@@ -721,16 +745,20 @@ idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*RTE_MBUF_DYNFIELD(rxm,
idpf_timestamp_dynfield_offset,
rte_mbuf_timestamp_t *) = ts_ns;
- rxm->ol_flags |= idpf_timestamp_dynflag;
+ first_seg->ol_flags |= idpf_timestamp_dynflag;
}
- rxm->ol_flags |= pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
- rx_pkts[nb_rx++] = rxm;
+ rx_pkts[nb_rx++] = first_seg;
+
+ first_seg = NULL;
}
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
rxq->bufq1->rx_next_avail = rx_id_bufq1;
if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v5 3/4] net/intel: add config queue support to vCPF
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
@ 2025-09-30 18:27 ` Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 18:27 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
A "configuration queue" is a software term to denote
a hardware mailbox queue dedicated to FXP (Flexible packet processor)
programming.While the hardware does not have a construct of a
"configuration queue", software does to state clearly
the distinction between a queue software dedicates to
regular mailbox processing (e.g. CPChnl or Virtchnl)
and a queue software dedicates for programming the FXP
Pipeline.From the hardware’s viewpoint, both mailbox and
configuration queues are treated as "control" queues,
with the main distinction being the "opcode" in their
descriptors.This patch will requests queues from the
firmware using an add_queue Virtchnl message and sets
them up as config queues.The vCPF driver then uses these
config queues to program the FXP pipeline via rte_flow.
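For illustration, a minimal self-contained sketch of the chunk
bookkeeping this patch performs (mirroring vcpf_save_chunk_in_cfgq()
in spirit; the types below are simplified stand-ins, not the
virtchnl2 structures):
#include <stdint.h>
#include <stdio.h>
/* Illustrative stand-ins for the virtchnl2 queue types in the diff. */
enum { Q_MBX_TX, Q_MBX_RX, Q_CONFIG_TX, Q_CONFIG_RX };
struct chunk {
	uint32_t type;
	uint32_t start_queue_id;
	uint64_t qtail_reg_start;
};
struct cfgq {
	uint32_t qid;
	uint64_t qtail_reg_start;
};
/*
 * The firmware answers VIRTCHNL2_OP_ADD_QUEUES with mailbox-typed
 * chunks; software records their ids/tail registers and retags them
 * as config queues (tx in slot 0, rx in slot 1).
 */
static void
save_cfgq_chunks(struct cfgq cfgq[2], struct chunk *chunks, uint16_t n)
{
	for (uint16_t i = 0; i < n; i++) {
		if (chunks[i].type == Q_MBX_TX) {
			cfgq[0].qid = chunks[i].start_queue_id;
			cfgq[0].qtail_reg_start = chunks[i].qtail_reg_start;
			chunks[i].type = Q_CONFIG_TX;
		} else if (chunks[i].type == Q_MBX_RX) {
			cfgq[1].qid = chunks[i].start_queue_id;
			cfgq[1].qtail_reg_start = chunks[i].qtail_reg_start;
			chunks[i].type = Q_CONFIG_RX;
		}
	}
}
int
main(void)
{
	struct chunk chunks[] = {
		{ Q_MBX_TX, 40, 0x1000 },
		{ Q_MBX_RX, 41, 0x2000 },
	};
	struct cfgq cfgq[2] = { { 0 }, { 0 } };
	save_cfgq_chunks(cfgq, chunks, 2);
	printf("tx cfgq id %u, rx cfgq id %u\n", cfgq[0].qid, cfgq[1].qid);
	return 0;
}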
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Tested-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Tested-by: Atul Patel <atul.patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_ethdev.c | 276 +++++++++++++++---
drivers/net/intel/cpfl/cpfl_ethdev.h | 38 ++-
drivers/net/intel/cpfl/cpfl_vchnl.c | 143 ++++++++-
drivers/net/intel/idpf/base/idpf_osdep.h | 3 +
drivers/net/intel/idpf/base/virtchnl2.h | 3 +-
drivers/net/intel/idpf/idpf_common_device.h | 2 +
drivers/net/intel/idpf/idpf_common_virtchnl.c | 38 +++
drivers/net/intel/idpf/idpf_common_virtchnl.h | 3 +
8 files changed, 451 insertions(+), 55 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 6aa0971941..22f3859dca 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -29,6 +29,9 @@
#define CPFL_FLOW_PARSER "flow_parser"
#endif
+#define VCPF_FID 0
+#define CPFL_FID 6
+
rte_spinlock_t cpfl_adapter_lock;
/* A list for all adapters, one adapter matches one PCI device */
struct cpfl_adapter_list cpfl_adapter_list;
@@ -1699,7 +1702,8 @@ cpfl_handle_vchnl_event_msg(struct cpfl_adapter_ext *adapter, uint8_t *msg, uint
}
/* ignore if it is ctrl vport */
- if (adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF &&
+ adapter->ctrl_vport.base.vport_id == vc_event->vport_id)
return;
vport = cpfl_find_vport(adapter, vc_event->vport_id);
@@ -1903,18 +1907,30 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
int i, ret;
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
return ret;
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
- VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, false);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
+
if (ret) {
PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
return ret;
@@ -1922,6 +1938,7 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
}
return 0;
+
}
static int
@@ -1941,8 +1958,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
- for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+ for (i = 0; i < adapter->num_tx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[0].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
@@ -1950,8 +1972,13 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
}
}
- for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
- ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+ for (i = 0; i < adapter->num_rx_cfgq; i++) {
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ ret = idpf_vc_ena_dis_one_queue_vcpf(&adapter->base,
+ adapter->cfgq_info[1].id,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX, true);
+ else
+ ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
@@ -1971,14 +1998,20 @@ cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter)
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
- if (adapter->ctlqp[i])
+ for (i = 0; i < adapter->num_cfgq; i++) {
+ if (adapter->ctlqp[i]) {
cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
+ adapter->ctlqp[i] = NULL;
+ }
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->ctlqp) {
+ rte_free(adapter->ctlqp);
+ adapter->ctlqp = NULL;
+ }
}
static int
@@ -1988,7 +2021,16 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
int ret = 0;
int i = 0;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ adapter->ctlqp = rte_zmalloc("ctlqp", adapter->num_cfgq *
+ sizeof(struct idpf_ctlq_info *),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->ctlqp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for control queues");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->num_cfgq; i++) {
cfg_cq = NULL;
ret = cpfl_vport_ctlq_add((struct idpf_hw *)(&adapter->base.hw),
&adapter->cfgq_info[i],
@@ -2007,6 +2049,64 @@ cpfl_add_cfgqs(struct cpfl_adapter_ext *adapter)
return ret;
}
+static
+int vcpf_save_chunk_in_cfgq(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)adapter->addq_recv_info;
+ struct vcpf_cfg_queue *cfgq;
+ struct virtchnl2_queue_reg_chunk *q_chnk;
+ u16 rx, tx, num_chunks, num_q, struct_size;
+ u32 q_id, q_type;
+
+ rx = 0; tx = 0;
+
+ cfgq = rte_zmalloc("cfgq", adapter->num_cfgq * sizeof(struct vcpf_cfg_queue),
+ RTE_CACHE_LINE_SIZE);
+ if (!cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq");
+ return -ENOMEM;
+ }
+
+ struct_size = idpf_struct_size(add_q, chunks.chunks, (add_q->chunks.num_chunks - 1));
+ adapter->cfgq_in.cfgq_add = rte_zmalloc("config_queues", struct_size, 0);
+ if (!adapter->cfgq_in.cfgq_add) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for add_q");
+ return -ENOMEM;
+ }
+ rte_memcpy(adapter->cfgq_in.cfgq_add, add_q, struct_size);
+
+ num_chunks = add_q->chunks.num_chunks;
+ for (u16 i = 0; i < num_chunks; i++) {
+ num_q = add_q->chunks.chunks[i].num_queues;
+ q_chnk = &add_q->chunks.chunks[i];
+ for (u16 j = 0; j < num_q; j++) {
+ if (rx > adapter->num_cfgq || tx > adapter->num_cfgq)
+ break;
+ q_id = q_chnk->start_queue_id + j;
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_TX) {
+ cfgq[0].qid = q_id;
+ cfgq[0].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[0].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_TX;
+ tx++;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_MBX_RX) {
+ cfgq[1].qid = q_id;
+ cfgq[1].qtail_reg_start = q_chnk->qtail_reg_start;
+ cfgq[1].qtail_reg_spacing = q_chnk->qtail_reg_spacing;
+ q_chnk->type = VIRTCHNL2_QUEUE_TYPE_CONFIG_RX;
+ rx++;
+ }
+ }
+ }
+
+ adapter->cfgq_in.cfgq = cfgq;
+ adapter->cfgq_in.num_cfgq = adapter->num_cfgq;
+
+ return 0;
+}
+
#define CPFL_CFGQ_RING_LEN 512
#define CPFL_CFGQ_DESCRIPTOR_SIZE 32
#define CPFL_CFGQ_BUFFER_SIZE 256
@@ -2017,32 +2117,71 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
{
struct cpfl_ctlq_create_info *create_cfgq_info;
struct cpfl_vport *vport;
+ struct vcpf_cfgq_info *cfgq_info = &adapter->cfgq_in;
int i, err;
uint32_t ring_size = CPFL_CFGQ_RING_SIZE * sizeof(struct idpf_ctlq_desc);
uint32_t buf_size = CPFL_CFGQ_RING_SIZE * CPFL_CFGQ_BUFFER_SIZE;
+ uint64_t tx_qtail_start;
+ uint64_t rx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint32_t rx_qtail_spacing;
vport = &adapter->ctrl_vport;
+
+ tx_qtail_start = vport->base.chunks_info.tx_qtail_start;
+ tx_qtail_spacing = vport->base.chunks_info.tx_qtail_spacing;
+ rx_qtail_start = vport->base.chunks_info.rx_qtail_start;
+ rx_qtail_spacing = vport->base.chunks_info.rx_qtail_spacing;
+
+ adapter->cfgq_info = rte_zmalloc("cfgq_info", adapter->num_cfgq *
+ sizeof(struct cpfl_ctlq_create_info),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!adapter->cfgq_info) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for cfgq_info");
+ return -ENOMEM;
+ }
+
create_cfgq_info = adapter->cfgq_info;
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (i % 2 == 0) {
- /* Setup Tx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid + i / 2;
+ /* Setup Tx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.tx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_TX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.tx_qtail_start +
- i / 2 * vport->base.chunks_info.tx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = tx_qtail_start +
+ i / 2 * tx_qtail_spacing;
+
} else {
- /* Setup Rx config queue */
- create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid + i / 2;
+ /* Setup Rx config queue */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].id = cfgq_info->cfgq[i].qid;
+ else
+ create_cfgq_info[i].id = vport->base.chunks_info.rx_start_qid +
+ i / 2;
+
create_cfgq_info[i].type = IDPF_CTLQ_TYPE_CONFIG_RX;
create_cfgq_info[i].len = CPFL_CFGQ_RING_SIZE;
create_cfgq_info[i].buf_size = CPFL_CFGQ_BUFFER_SIZE;
memset(&create_cfgq_info[i].reg, 0, sizeof(struct idpf_ctlq_reg));
- create_cfgq_info[i].reg.tail = vport->base.chunks_info.rx_qtail_start +
- i / 2 * vport->base.chunks_info.rx_qtail_spacing;
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ create_cfgq_info[i].reg.tail = cfgq_info->cfgq[i].qtail_reg_start;
+ else
+ create_cfgq_info[i].reg.tail = rx_qtail_start +
+ i / 2 * rx_qtail_spacing;
+
+
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem,
buf_size)) {
err = -ENOMEM;
@@ -2050,19 +2189,24 @@ cpfl_cfgq_setup(struct cpfl_adapter_ext *adapter)
}
}
if (!idpf_alloc_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem,
- ring_size)) {
+ ring_size)) {
err = -ENOMEM;
goto free_mem;
}
}
+
return 0;
free_mem:
- for (i = 0; i < CPFL_CFGQ_NUM; i++) {
+ for (i = 0; i < adapter->num_cfgq; i++) {
if (create_cfgq_info[i].ring_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].ring_mem);
if (create_cfgq_info[i].buf_mem.va)
idpf_free_dma_mem(&adapter->base.hw, &create_cfgq_info[i].buf_mem);
}
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
return err;
}
@@ -2107,7 +2251,10 @@ cpfl_ctrl_path_close(struct cpfl_adapter_ext *adapter)
{
cpfl_stop_cfgqs(adapter);
cpfl_remove_cfgqs(adapter);
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ else
+ vcpf_del_queues(adapter);
}
static int
@@ -2115,22 +2262,39 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
{
int ret;
- ret = cpfl_vc_create_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to create control vport");
- return ret;
- }
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ ret = cpfl_vc_create_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create control vport");
+ return ret;
+ }
- ret = cpfl_init_ctrl_vport(adapter);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to init control vport");
- goto err_init_ctrl_vport;
+ ret = cpfl_init_ctrl_vport(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init control vport");
+ goto err_init_ctrl_vport;
+ }
+ } else {
+ ret = vcpf_add_queues(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to add queues");
+ return ret;
+ }
+
+ ret = vcpf_save_chunk_in_cfgq(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to save config queue chunk");
+ return ret;
+ }
}
ret = cpfl_cfgq_setup(adapter);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to setup control queues");
- goto err_cfgq_setup;
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ goto err_cfgq_setup;
+ else
+ goto err_del_cfg;
}
ret = cpfl_add_cfgqs(adapter);
@@ -2153,9 +2317,13 @@ cpfl_ctrl_path_open(struct cpfl_adapter_ext *adapter)
cpfl_remove_cfgqs(adapter);
err_cfgq_setup:
err_init_ctrl_vport:
- idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+ if (adapter->base.hw.device_id == IDPF_DEV_ID_CPF)
+ idpf_vc_vport_destroy(&adapter->ctrl_vport.base);
+err_del_cfg:
+ vcpf_del_queues(adapter);
return ret;
+
}
static struct virtchnl2_get_capabilities req_caps = {
@@ -2291,12 +2459,29 @@ get_running_host_id(void)
return host_id;
}
+static uint8_t
+set_config_queue_details(struct cpfl_adapter_ext *adapter, struct rte_pci_addr *pci_addr)
+{
+ if (pci_addr->function == CPFL_FID) {
+ adapter->num_cfgq = CPFL_CFGQ_NUM;
+ adapter->num_rx_cfgq = CPFL_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = CPFL_TX_CFGQ_NUM;
+ } else if (pci_addr->function == VCPF_FID) {
+ adapter->num_cfgq = VCPF_CFGQ_NUM;
+ adapter->num_rx_cfgq = VCPF_RX_CFGQ_NUM;
+ adapter->num_tx_cfgq = VCPF_TX_CFGQ_NUM;
+ }
+
+ return 0;
+}
+
static int
cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
struct cpfl_devargs *devargs)
{
struct idpf_adapter *base = &adapter->base;
struct idpf_hw *hw = &base->hw;
+ struct rte_pci_addr *pci_addr = &pci_dev->addr;
int ret = 0;
#ifndef RTE_HAS_JANSSON
@@ -2348,10 +2533,23 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
goto err_vports_alloc;
}
- ret = cpfl_ctrl_path_open(adapter);
+ /* set the number of config queues to be requested */
+ ret = set_config_queue_details(adapter, pci_addr);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to setup control path");
- goto err_create_ctrl_vport;
+ PMD_INIT_LOG(ERR, "Failed to set the config queue details");
+ return -1;
+ }
+
+ if (pci_addr->function == VCPF_FID || pci_addr->function == CPFL_FID) {
+ ret = cpfl_ctrl_path_open(adapter);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup control path");
+ if (pci_addr->function == CPFL_FID)
+ goto err_create_ctrl_vport;
+ else
+ return ret;
+ }
+
}
#ifdef RTE_HAS_JANSSON
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index d4e1176ab1..f550bca754 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -89,6 +89,9 @@
#define CPFL_FPCP_CFGQ_TX 0
#define CPFL_FPCP_CFGQ_RX 1
#define CPFL_CFGQ_NUM 8
+#define VCPF_RX_CFGQ_NUM 1
+#define VCPF_TX_CFGQ_NUM 1
+#define VCPF_CFGQ_NUM 2
/* bit[15:14] type
* bit[13] host/accelerator core
@@ -200,6 +203,30 @@ struct cpfl_metadata {
struct cpfl_metadata_chunk chunks[CPFL_META_LENGTH];
};
+/**
+ * struct vcpf_cfg_queue - config queue information
+ * @qid: rx/tx queue id
+ * @qtail_reg_start: rx/tx tail queue register start
+ * @qtail_reg_spacing: rx/tx tail queue register spacing
+ */
+struct vcpf_cfg_queue {
+ u32 qid;
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+};
+
+/**
+ * struct vcpf_cfgq_info - config queue information
+ * @num_cfgq: number of config queues
+ * @cfgq_add: config queue add information
+ * @cfgq: config queue information
+ */
+struct vcpf_cfgq_info {
+ u16 num_cfgq;
+ struct virtchnl2_add_queues *cfgq_add;
+ struct vcpf_cfg_queue *cfgq;
+};
+
struct cpfl_adapter_ext {
TAILQ_ENTRY(cpfl_adapter_ext) next;
struct idpf_adapter base;
@@ -229,8 +256,13 @@ struct cpfl_adapter_ext {
/* ctrl vport and ctrl queues. */
struct cpfl_vport ctrl_vport;
uint8_t ctrl_vport_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
- struct idpf_ctlq_info *ctlqp[CPFL_CFGQ_NUM];
- struct cpfl_ctlq_create_info cfgq_info[CPFL_CFGQ_NUM];
+ struct idpf_ctlq_info **ctlqp;
+ struct cpfl_ctlq_create_info *cfgq_info;
+ struct vcpf_cfgq_info cfgq_in;
+ uint8_t addq_recv_info[IDPF_DFLT_MBX_BUF_SIZE];
+ uint16_t num_cfgq;
+ uint16_t num_rx_cfgq;
+ uint16_t num_tx_cfgq;
uint8_t host_id;
};
@@ -251,6 +283,8 @@ int cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter);
int cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter);
int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
uint32_t size, int batch_size);
+int vcpf_add_queues(struct cpfl_adapter_ext *adapter);
+int vcpf_del_queues(struct cpfl_adapter_ext *adapter);
#define CPFL_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/intel/cpfl/cpfl_vchnl.c b/drivers/net/intel/cpfl/cpfl_vchnl.c
index 7d277a0e8e..9c842b60df 100644
--- a/drivers/net/intel/cpfl/cpfl_vchnl.c
+++ b/drivers/net/intel/cpfl/cpfl_vchnl.c
@@ -106,6 +106,106 @@ cpfl_vc_create_ctrl_vport(struct cpfl_adapter_ext *adapter)
return err;
}
+#define VCPF_CFQ_MB_INDEX 0xFF
+int
+vcpf_add_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_add_queues add_cfgq;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&add_cfgq, 0, sizeof(struct virtchnl2_add_queues));
+ u16 num_cfgq = 1;
+
+ add_cfgq.num_tx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.num_rx_q = rte_cpu_to_le_16(num_cfgq);
+ add_cfgq.mbx_q_index = VCPF_CFQ_MB_INDEX;
+
+ add_cfgq.vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ add_cfgq.num_tx_complq = 0;
+ add_cfgq.num_rx_bufq = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_ADD_QUEUES;
+ args.in_args = (uint8_t *)&add_cfgq;
+ args.in_args_size = sizeof(add_cfgq);
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_ADD_QUEUES");
+ return err;
+ }
+
+ rte_memcpy(adapter->addq_recv_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+
+ return err;
+}
+
+int
+vcpf_del_queues(struct cpfl_adapter_ext *adapter)
+{
+ struct virtchnl2_del_ena_dis_queues *del_cfgq;
+ u16 num_chunks;
+ struct idpf_cmd_info args;
+ int i, err, size;
+
+ num_chunks = adapter->cfgq_in.cfgq_add->chunks.num_chunks;
+ size = idpf_struct_size(del_cfgq, chunks.chunks, (num_chunks - 1));
+ del_cfgq = rte_zmalloc("del_cfgq", size, 0);
+ if (!del_cfgq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_del_ena_dis_queues");
+ err = -ENOMEM;
+ return err;
+ }
+
+ del_cfgq->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ del_cfgq->chunks.num_chunks = num_chunks;
+
+ /* fill config queue chunk data */
+ for (i = 0; i < num_chunks; i++) {
+ del_cfgq->chunks.chunks[i].type =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].type;
+ del_cfgq->chunks.chunks[i].start_queue_id =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].start_queue_id;
+ del_cfgq->chunks.chunks[i].num_queues =
+ adapter->cfgq_in.cfgq_add->chunks.chunks[i].num_queues;
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DEL_QUEUES;
+ args.in_args = (uint8_t *)del_cfgq;
+ args.in_args_size = idpf_struct_size(del_cfgq, chunks.chunks,
+ (del_cfgq->chunks.num_chunks - 1));
+ args.out_buffer = adapter->base.mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_vc_cmd_execute(&adapter->base, &args);
+ rte_free(del_cfgq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command VIRTCHNL2_OP_DEL_QUEUES");
+ return err;
+ }
+
+ if (adapter->cfgq_info) {
+ rte_free(adapter->cfgq_info);
+ adapter->cfgq_info = NULL;
+ }
+ adapter->cfgq_in.num_cfgq = 0;
+ if (adapter->cfgq_in.cfgq_add) {
+ rte_free(adapter->cfgq_in.cfgq_add);
+ adapter->cfgq_in.cfgq_add = NULL;
+ }
+ if (adapter->cfgq_in.cfgq) {
+ rte_free(adapter->cfgq_in.cfgq);
+ adapter->cfgq_in.cfgq = NULL;
+ }
+ return err;
+}
+
int
cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
{
@@ -116,13 +216,16 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.rxq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This rxq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_RX_CFGQ_NUM;
+ num_qs = adapter->num_rx_cfgq;
+
size = sizeof(*vc_rxqs) + (num_qs - 1) *
sizeof(struct virtchnl2_rxq_info);
vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
@@ -131,7 +234,12 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_rxqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_rxqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_rxqs->vport_id = vport->base.vport_id;
+
vc_rxqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
@@ -141,7 +249,8 @@ cpfl_config_ctlq_rx(struct cpfl_adapter_ext *adapter)
rxq_info->queue_id = adapter->cfgq_info[2 * i + 1].id;
rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
rxq_info->data_buffer_size = adapter->cfgq_info[2 * i + 1].buf_size;
- rxq_info->max_pkt_size = vport->base.max_pkt_len;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF)
+ rxq_info->max_pkt_size = vport->base.max_pkt_len;
rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
rxq_info->ring_len = adapter->cfgq_info[2 * i + 1].len;
@@ -172,13 +281,16 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
uint16_t num_qs;
int size, err, i;
- if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
- PMD_DRV_LOG(ERR, "This txq model isn't supported.");
- err = -EINVAL;
- return err;
+ if (adapter->base.hw.device_id != IXD_DEV_ID_VCPF) {
+ if (vport->base.txq_model != VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ PMD_DRV_LOG(ERR, "This txq model isn't supported.");
+ err = -EINVAL;
+ return err;
+ }
}
- num_qs = CPFL_TX_CFGQ_NUM;
+ num_qs = adapter->num_tx_cfgq;
+
size = sizeof(*vc_txqs) + (num_qs - 1) *
sizeof(struct virtchnl2_txq_info);
vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
@@ -187,7 +299,12 @@ cpfl_config_ctlq_tx(struct cpfl_adapter_ext *adapter)
err = -ENOMEM;
return err;
}
- vc_txqs->vport_id = vport->base.vport_id;
+
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vc_txqs->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+ else
+ vc_txqs->vport_id = vport->base.vport_id;
+
vc_txqs->num_qinfo = num_qs;
for (i = 0; i < num_qs; i++) {
diff --git a/drivers/net/intel/idpf/base/idpf_osdep.h b/drivers/net/intel/idpf/base/idpf_osdep.h
index 7b43df3079..47b95d0da6 100644
--- a/drivers/net/intel/idpf/base/idpf_osdep.h
+++ b/drivers/net/intel/idpf/base/idpf_osdep.h
@@ -361,6 +361,9 @@ idpf_hweight32(u32 num)
#endif
+#define idpf_struct_size(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
enum idpf_mac_type {
IDPF_MAC_UNKNOWN = 0,
IDPF_MAC_PF,
diff --git a/drivers/net/intel/idpf/base/virtchnl2.h b/drivers/net/intel/idpf/base/virtchnl2.h
index cf010c0504..6cfb4f56fa 100644
--- a/drivers/net/intel/idpf/base/virtchnl2.h
+++ b/drivers/net/intel/idpf/base/virtchnl2.h
@@ -1024,7 +1024,8 @@ struct virtchnl2_add_queues {
__le16 num_tx_complq;
__le16 num_rx_q;
__le16 num_rx_bufq;
- u8 pad[4];
+ u8 mbx_q_index;
+ u8 pad[3];
struct virtchnl2_queue_reg_chunks chunks;
};
diff --git a/drivers/net/intel/idpf/idpf_common_device.h b/drivers/net/intel/idpf/idpf_common_device.h
index 4766e5b696..07eab46eb4 100644
--- a/drivers/net/intel/idpf/idpf_common_device.h
+++ b/drivers/net/intel/idpf/idpf_common_device.h
@@ -45,6 +45,8 @@
(sizeof(struct virtchnl2_ptype) + \
(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+#define VCPF_CFGQ_VPORT_ID 0xFFFFFFFF
+
enum idpf_rx_func_type {
IDPF_RX_DEFAULT,
IDPF_RX_SINGLEQ,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..e927d7415a 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -787,6 +787,44 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
return err;
}
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue_vcpf)
+int
+idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = rte_cpu_to_le_32(VCPF_CFGQ_VPORT_ID);
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (uint8_t *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_vc_cmd_execute(adapter, &args);
+ if (err != 0)
+ DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
int
idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.h b/drivers/net/intel/idpf/idpf_common_virtchnl.h
index 68cba9111c..90fce65676 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.h
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.h
@@ -76,6 +76,9 @@ __rte_internal
int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
uint32_t type, bool on);
__rte_internal
+int idpf_vc_ena_dis_one_queue_vcpf(struct idpf_adapter *adapter, uint16_t qid,
+ uint32_t type, bool on);
+__rte_internal
int idpf_vc_queue_grps_del(struct idpf_vport *vport,
uint16_t num_q_grps,
struct virtchnl2_queue_group_id *qg_ids);
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v5 4/4] net/cpfl: add cpchnl get vport info support
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
` (2 preceding siblings ...)
2025-09-30 18:27 ` [PATCH v5 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
@ 2025-09-30 18:27 ` Shetty, Praveen
3 siblings, 0 replies; 35+ messages in thread
From: Shetty, Praveen @ 2025-09-30 18:27 UTC (permalink / raw)
To: bruce.richardson, aman.deep.singh
Cc: dev, Praveen Shetty, Dhananjay Shukla, Atul Patel
From: Praveen Shetty <praveen.shetty@intel.com>
vCPF receives only relative queue ids from the FW.
The CPCHNL2_OP_GET_VPORT_INFO cpchnl message is used
to get the absolute rx/tx queue ids and the VSI of its own vport.
This patch adds support for sending the CPCHNL2_OP_GET_VPORT_INFO
cpchnl message from the vCPF PMD.
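Once that response is cached per vport, translating a relative queue
id into an absolute one is a simple offset. A hedged sketch with
illustrative (non-driver) names:
#include <stdint.h>
#include <stdio.h>
/* Cached per-vport info, as filled from the CPCHNL2 response. */
struct vport_qinfo {
	uint32_t abs_start_txq_id, num_tx_q;
	uint32_t abs_start_rxq_id, num_rx_q;
};
#define INVALID_QID UINT32_MAX
/* Relative-to-absolute queue id translation for a vCPF vport. */
static uint32_t
rel_to_abs_qid(const struct vport_qinfo *vi, uint32_t rel_qid, int is_tx)
{
	uint32_t base = is_tx ? vi->abs_start_txq_id : vi->abs_start_rxq_id;
	uint32_t num  = is_tx ? vi->num_tx_q : vi->num_rx_q;
	if (rel_qid >= num)
		return INVALID_QID; /* outside the vport's queue range */
	return base + rel_qid;
}
int
main(void)
{
	struct vport_qinfo vi = {
		.abs_start_txq_id = 512, .num_tx_q = 4,
		.abs_start_rxq_id = 768, .num_rx_q = 4,
	};
	printf("tx rel 2 -> abs %u\n", rel_to_abs_qid(&vi, 2, 1)); /* 514 */
	printf("rx rel 3 -> abs %u\n", rel_to_abs_qid(&vi, 3, 0)); /* 771 */
	return 0;
}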
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Signed-off-by: Dhananjay Shukla <dhananjay.shukla@intel.com>
Signed-off-by: Atul Patel <Atul.Patel@intel.com>
---
drivers/net/intel/cpfl/cpfl_cpchnl.h | 8 ++++
drivers/net/intel/cpfl/cpfl_ethdev.c | 63 +++++++++++++++++++++++++
drivers/net/intel/cpfl/cpfl_ethdev.h | 70 +++++++++++++++++++++-------
3 files changed, 125 insertions(+), 16 deletions(-)
diff --git a/drivers/net/intel/cpfl/cpfl_cpchnl.h b/drivers/net/intel/cpfl/cpfl_cpchnl.h
index 0c9dfcdbf1..c56d3e6cea 100644
--- a/drivers/net/intel/cpfl/cpfl_cpchnl.h
+++ b/drivers/net/intel/cpfl/cpfl_cpchnl.h
@@ -140,6 +140,14 @@ enum cpchnl2_func_type {
CPCHNL2_FTYPE_LAN_MAX
};
+/**
+ * @brief function types
+ */
+enum vcpf_cpchnl2_func_type {
+ VCPF_CPCHNL2_FTYPE_LAN_PF = 0,
+ VCPF_CPCHNL2_FTYPE_LAN_VF = 1,
+};
+
/**
* @brief containing vport id & type
*/
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.c b/drivers/net/intel/cpfl/cpfl_ethdev.c
index 22f3859dca..110678e312 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.c
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.c
@@ -1902,6 +1902,43 @@ cpfl_dev_alarm_handler(void *param)
rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
}
+static
+int vcpf_save_vport_info_response(struct cpfl_vport *cpfl_vport,
+ struct cpchnl2_get_vport_info_response *response)
+{
+ struct cpchnl2_vport_info *info;
+ struct vcpf_vport_info *vport_info;
+ struct cpchnl2_queue_group_info *qgp;
+ struct cpchnl2_queue_chunk *q_chnk;
+ u16 num_queue_groups;
+ u16 num_chunks;
+ u32 q_type;
+
+ info = &response->info;
+ vport_info = &cpfl_vport->vport_info;
+ vport_info->vport_index = info->vport_index;
+ vport_info->vsi_id = info->vsi_id;
+
+ num_queue_groups = response->queue_groups.num_queue_groups;
+ for (u16 i = 0; i < num_queue_groups; i++) {
+ qgp = &response->queue_groups.groups[i];
+ num_chunks = qgp->chunks.num_chunks;
+ /* rx q and tx q are stored in first 2 chunks */
+ for (u16 j = 0; j < (num_chunks - 2); j++) {
+ q_chnk = &qgp->chunks.chunks[j];
+ q_type = q_chnk->type;
+ if (q_type == VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport_info->abs_start_txq_id = q_chnk->start_queue_id;
+ vport_info->num_tx_q = q_chnk->num_queues;
+ } else if (q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport_info->abs_start_rxq_id = q_chnk->start_queue_id;
+ vport_info->num_rx_q = q_chnk->num_queues;
+ }
+ }
+ }
+ return 0;
+}
+
static int
cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
{
@@ -2722,7 +2759,11 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
/* for sending create vport virtchnl msg prepare */
struct virtchnl2_create_vport create_vport_info;
struct virtchnl2_add_queue_groups p2p_queue_grps_info;
+ struct cpchnl2_get_vport_info_response response;
uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
+ struct cpfl_vport_id vi;
+ struct cpchnl2_vport_id v_id;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
int ret = 0;
dev->dev_ops = &cpfl_eth_dev_ops;
@@ -2792,6 +2833,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
cpfl_p2p_queue_grps_del(vport);
}
}
+ /* get the vport info */
+ if (adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+ vi.func_type = VCPF_CPCHNL2_FTYPE_LAN_VF;
+ vi.pf_id = CPFL_HOST0_CPF_ID;
+ vi.vf_id = pci_dev->addr.function;
+
+ v_id.vport_id = cpfl_vport->base.vport_info.info.vport_id;
+ v_id.vport_type = cpfl_vport->base.vport_info.info.vport_type;
+
+ ret = cpfl_cc_vport_info_get(adapter, &v_id, &vi, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to send vport info cpchnl message.");
+ return -1;
+ }
+
+ ret = vcpf_save_vport_info_response(cpfl_vport, &response);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to save cpchnl response.");
+ return -1;
+ }
+ }
return 0;
diff --git a/drivers/net/intel/cpfl/cpfl_ethdev.h b/drivers/net/intel/cpfl/cpfl_ethdev.h
index f550bca754..be73e05a0e 100644
--- a/drivers/net/intel/cpfl/cpfl_ethdev.h
+++ b/drivers/net/intel/cpfl/cpfl_ethdev.h
@@ -164,10 +164,20 @@ struct cpfl_itf {
void *data;
};
+struct vcpf_vport_info {
+ u16 vport_index;
+ u16 vsi_id;
+ u32 abs_start_txq_id;
+ u32 num_tx_q;
+ u32 abs_start_rxq_id;
+ u32 num_rx_q;
+};
+
struct cpfl_vport {
struct cpfl_itf itf;
struct idpf_vport base;
struct p2p_queue_chunks_info *p2p_q_chunks_info;
+ struct vcpf_vport_info vport_info;
struct rte_mempool *p2p_mp;
@@ -319,6 +329,7 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
uint32_t vport_id;
int ret;
struct cpfl_vport_id vport_identity;
+ u16 vsi_id = 0;
if (!itf)
return CPFL_INVALID_HW_ID;
@@ -328,24 +339,30 @@ cpfl_get_vsi_id(struct cpfl_itf *itf)
return repr->vport_info->vport.info.vsi_id;
} else if (itf->type == CPFL_ITF_TYPE_VPORT) {
- vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
-
- vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
- /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
- vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
- CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
- vport_identity.vf_id = 0;
- vport_identity.vport_id = vport_id;
- ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
- &vport_identity,
- (void **)&info);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "vport id not exist");
- goto err;
+ if (itf->adapter->base.hw.device_id == IDPF_DEV_ID_CPF) {
+ vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
+
+ vport_identity.func_type = CPCHNL2_FTYPE_LAN_PF;
+ /* host: CPFL_HOST0_CPF_ID, acc: CPFL_ACC_CPF_ID */
+ vport_identity.pf_id = (itf->adapter->host_id == CPFL_HOST_ID_ACC) ?
+ CPFL_ACC_CPF_ID : CPFL_HOST0_CPF_ID;
+ vport_identity.vf_id = 0;
+ vport_identity.vport_id = vport_id;
+ ret = rte_hash_lookup_data(itf->adapter->vport_map_hash,
+ &vport_identity,
+ (void **)&info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "vport id not exist");
+ goto err;
+ }
+
+ vsi_id = info->vport.info.vsi_id;
+ } else {
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF)
+ vsi_id = (uint16_t)((struct cpfl_vport *)itf)->vport_info.vsi_id;
}
-
- return info->vport.info.vsi_id;
}
+ return vsi_id;
err:
return CPFL_INVALID_HW_ID;
@@ -374,4 +391,25 @@ cpfl_get_itf_by_port_id(uint16_t port_id)
return CPFL_DEV_TO_ITF(dev);
}
+
+static inline uint32_t
+vcpf_get_abs_qid(uint16_t port_id, uint32_t queue_type)
+{
+ struct cpfl_itf *itf = cpfl_get_itf_by_port_id(port_id);
+ struct cpfl_vport *vport;
+ if (!itf)
+ return CPFL_INVALID_HW_ID;
+ if (itf->type == CPFL_ITF_TYPE_VPORT) {
+ vport = (void *)itf;
+ if (itf->adapter->base.hw.device_id == IXD_DEV_ID_VCPF) {
+ switch (queue_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ return vport->vport_info.abs_start_txq_id;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ return vport->vport_info.abs_start_rxq_id;
+ }
+ }
+ }
+ return 0;
+}
#endif /* _CPFL_ETHDEV_H_ */
--
2.37.3
^ permalink raw reply [flat|nested] 35+ messages in thread
end of thread, other threads: [~2025-09-30 18:27 UTC | newest]
Thread overview: 35+ messages
2025-09-22 9:48 [PATCH 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 9:48 ` [PATCH 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 0/4] add vcpf pmd support Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 0/4] add vcpf pmd support Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-29 12:18 ` Bruce Richardson
2025-09-29 18:55 ` Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
2025-09-29 12:32 ` Bruce Richardson
2025-09-29 14:39 ` Stephen Hemminger
2025-09-29 18:55 ` Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-29 13:40 ` Bruce Richardson
2025-09-29 19:53 ` Shetty, Praveen
2025-09-30 7:50 ` Bruce Richardson
2025-09-30 8:31 ` Shetty, Praveen
2025-09-23 12:54 ` [PATCH v3 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
2025-09-26 8:11 ` Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-22 14:10 ` [PATCH v2 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-30 13:55 ` [PATCH v4 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 0/4] add vcpf pmd support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 1/4] net/intel: add vCPF PMD support Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-30 18:27 ` [PATCH v5 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen
2025-09-22 9:48 ` [PATCH 2/4] net/idpf: add splitq jumbo packet handling Shetty, Praveen
2025-09-22 9:48 ` [PATCH 3/4] net/intel: add config queue support to vCPF Shetty, Praveen
2025-09-22 9:48 ` [PATCH 4/4] net/cpfl: add cpchnl get vport info support Shetty, Praveen