From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mingxia Liu <mingxia.liu@intel.com>
To: dev@dpdk.org, beilei.xing@intel.com, yuying.zhang@intel.com
Cc: Mingxia Liu <mingxia.liu@intel.com>
Subject: [PATCH v8 01/21] net/cpfl: support device initialization
Date: Thu, 2 Mar 2023 10:35:07 +0000
Message-Id: <20230302103527.931071-2-mingxia.liu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230302103527.931071-1-mingxia.liu@intel.com>
References: <20230216003010.3439881-1-mingxia.liu@intel.com>
 <20230302103527.931071-1-mingxia.liu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Support device init and add the following dev ops:
- dev_configure
- dev_close
- dev_infos_get
- link_update
- dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   8 +
 doc/guides/nics/cpfl.rst               |  85 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 772 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  77 +++
 drivers/net/cpfl/cpfl_logs.h           |  29 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 10 files changed, 1005 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index ffbf91296e..878204c93b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,14 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl - EXPERIMENTAL
+M: Yuying Zhang <yuying.zhang@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo
 M: Simei Su
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..253fa3afae
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,85 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+Please refer to
+https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
+for more information.
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creating multiple vports for one PCI device,
+  where each vport corresponds to a single ethdev.
+  The user can specify the IDs of the vports to be created;
+  currently an ID must be in the range 0 to 7, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, vport 0 is created by default.
+
+- ``rx_single`` (default ``0``)
+
+  The Intel\ |reg| IPU Ethernet E2100 Series supports two Rx queue modes,
+  single queue mode and split queue mode.
+
+  In the single queue model, one descriptor queue is used by SW to post buffer
+  descriptors to HW, and it is also used by HW to post completed descriptors to SW.
+
+  In the split queue model, "Rx buffer queues" are used to pass descriptor buffers
+  from SW to HW, while Rx queues are used only to pass the descriptor completions
+  from HW to SW.
+
+  The user can choose the Rx queue mode, for example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure the Rx queues in single queue mode.
+  Otherwise, split queue mode is used by default.
+
+- ``tx_single`` (default ``0``)
+
+  The Intel\ |reg| IPU Ethernet E2100 Series supports two Tx queue modes,
+  single queue mode and split queue mode.
+
+  In the single queue model, one descriptor queue is used by SW to post buffer
+  descriptors to HW, and it is also used by HW to post completed descriptors to SW.
+
+  In the split queue model, "Tx completion queues" are used to pass descriptor buffers
+  from SW to HW, while Tx queues are used only to pass the descriptor completions from
+  HW to SW.
+
+  The user can choose the Tx queue mode, for example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure the Tx queues in single queue mode.
+  Otherwise, split queue mode is used by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature marked with "P" indicates that it is only supported when the
+; non-vector path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index df58a237ca..5c9d1edf5e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -20,6 +20,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cnxk
+    cpfl
     cxgbe
     dpaa
     dpaa2
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 49c18617a5..29690d8813 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -148,6 +148,12 @@ New Features
   * Added support for timesync API.
   * Added support for packet pacing (launch time offloading).
 
+* **Added Intel cpfl driver.**
+
+  * Added the new ``cpfl`` net driver
+    for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+    See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 * **Updated Marvell cnxk ethdev driver.**
 
   * Added support to skip RED using ``RTE_FLOW_ACTION_TYPE_SKIP_CMAN``.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..21c505fda3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,772 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+uint32_t cpfl_supported_speeds[] = {
+	RTE_ETH_SPEED_NUM_NONE,
+	RTE_ETH_SPEED_NUM_10M,
+	RTE_ETH_SPEED_NUM_100M,
+	RTE_ETH_SPEED_NUM_1G,
+	RTE_ETH_SPEED_NUM_2_5G,
+	RTE_ETH_SPEED_NUM_5G,
+	RTE_ETH_SPEED_NUM_10G,
+	RTE_ETH_SPEED_NUM_20G,
+	RTE_ETH_SPEED_NUM_25G,
+	RTE_ETH_SPEED_NUM_40G,
+	RTE_ETH_SPEED_NUM_50G,
+	RTE_ETH_SPEED_NUM_56G,
+	RTE_ETH_SPEED_NUM_100G,
+	RTE_ETH_SPEED_NUM_200G
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+	unsigned int i;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	for (i = 0; i < RTE_DIM(cpfl_supported_speeds); i++) {
+		if (vport->link_speed == cpfl_supported_speeds[i]) {
+			new_link.link_speed = vport->link_speed;
+			break;
+		}
+	}
+
+	if (i == RTE_DIM(cpfl_supported_speeds)) {
+		if (vport->link_up)
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+		else
+			new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+				RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+
+	dev_info->max_rx_queues = base->caps.max_rx_q;
+	dev_info->max_tx_queues = base->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	/* bound check: avoid writing past req_vports[]; the total is validated again later */
+	if (devargs->req_vport_nb >= CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Too many vport devargs, only %d allowed",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			     value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	if (rte_kvargs_count(kvlist, CPFL_VPORT) > 1) {
+		PMD_INIT_LOG(ERR, "devarg vport is duplicated.");
+		/* free the kvlist before returning instead of leaking it */
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_tx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_rx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    adapter->max_vport_nb) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     adapter->max_vport_nb);
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (cpfl_args->req_vports[i] > adapter->max_vport_nb - 1) {
+			PMD_INIT_LOG(ERR, "Invalid vport id %d, it should be 0 ~ %d",
+				     cpfl_args->req_vports[i], adapter->max_vport_nb - 1);
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been requested",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto fail;
+		}
+	}
+
+fail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	/* not found: return NULL rather than the last vport examined */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &base->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+		       IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)base->mbx_resp;
+				vport = cpfl_find_vport(adapter, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, base->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == base->pend_cmd)
+					notify_cmd(base, base->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    base->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports > CPFL_MAX_VPORT_NUM ?
+				CPFL_MAX_VPORT_NUM : adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < adapter->max_vport_nb; i++) {
+		if (adapter->vports[i] == NULL)
+			break;
+	}
+
+	if (i == adapter->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* info used to prepare the CREATE_VPORT virtchnl message */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+	adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb--;
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+				      sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			rte_free(adapter); /* do not leak the adapter on init failure */
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If there is no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close all the ethdevs created for this rte_device;
+	 * they can be found via RTE_ETH_FOREACH_DEV_OF.
+	 */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+	CPFL_TX_SINGLE_Q "=<0|1> "
+	CPFL_RX_SINGLE_Q "=<0|1> "
+	CPFL_VPORT "=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9738e89ca8
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <rte_bus_pci.h>
+#include <ethdev_driver.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+/* Currently, backend supports up to 8 vports */
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF		0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver-specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p) \
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..bdfa5c41a5
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f83a6de117..b1df17ce8c 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.34.1
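
Usage sketch (illustrative only, not part of the patch): the three runtime
options this patch introduces compose into a single devargs string. The PCI
address ca:00.0, the core list, and the testpmd build path below are
placeholder assumptions, matching the style of the examples in cpfl.rst:

    # Create vports 0, 2 and 3 on one device; each vport appears as its own
    # ethdev. rx_single=1/tx_single=1 select single queue mode instead of
    # the default split queue mode.
    ./build/app/dpdk-testpmd -l 0-3 -n 4 \
        -a "ca:00.0,vport=[0,2-3],rx_single=1,tx_single=1" \
        -- -i

Per parse_vport()/parse_range() above, the vport list accepts both single IDs
and lo-hi ranges, ignores duplicates, and rejects any ID outside 0 to
CPFL_MAX_VPORT_NUM - 1 at probe time.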