From: Mingxia Liu <mingxia.liu@intel.com>
To: dev@dpdk.org, qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: wenjun1.wu@intel.com, Mingxia Liu <mingxia.liu@intel.com>
Subject: [PATCH v3 02/21] net/cpfl: add Tx queue setup
Date: Wed, 18 Jan 2023 07:31:38 +0000
Message-Id: <20230118073154.903012-3-mingxia.liu@intel.com>
In-Reply-To: <20230118073154.903012-1-mingxia.liu@intel.com>
References: <20230113081931.221576-1-mingxia.liu@intel.com>
 <20230118073154.903012-1-mingxia.liu@intel.com>

Add support for the tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to post
buffer descriptors to HW and by HW to post completed descriptors to SW.

In the split queue model, "Rx buffer queues" are used to pass descriptor
buffers from SW to HW, while Rx queues are used only to pass descriptor
completions, that is, descriptors that point to completed buffers, from HW
to SW. This is in contrast to the single queue model, in which Rx queues
are used for both purposes.
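For context (not part of this patch), a minimal application-side sketch of how
the new op is exercised through the generic ethdev API once the driver reports
its Tx defaults and descriptor limits; the port id, queue id, ring size and the
helper name are hypothetical placeholders:

  #include <rte_ethdev.h>

  static int
  app_setup_one_tx_queue(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_eth_dev_info dev_info;
          struct rte_eth_txconf txconf;
          uint16_t nb_desc = 512; /* placeholder ring size */
          int ret;

          ret = rte_eth_dev_info_get(port_id, &dev_info);
          if (ret != 0)
                  return ret;

          /* Clamp the requested ring size to the limits the PMD reports. */
          if (nb_desc > dev_info.tx_desc_lim.nb_max)
                  nb_desc = dev_info.tx_desc_lim.nb_max;
          if (nb_desc < dev_info.tx_desc_lim.nb_min)
                  nb_desc = dev_info.tx_desc_lim.nb_min;

          /* Start from the defaults the PMD advertises (rs/free thresholds). */
          txconf = dev_info.default_txconf;

          /* Reaches cpfl_tx_queue_setup() via the new eth_dev_ops entry. */
          return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
                                        rte_eth_dev_socket_id(port_id),
                                        &txconf);
  }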
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2ac53bc5b0..4a569c2f7e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include

 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"

 #define CPFL_TX_SINGLE_Q        "tx_single"
 #define CPFL_RX_SINGLE_Q        "rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
        dev_info->max_mtu = vport->max_mtu;
        dev_info->min_mtu = RTE_ETHER_MIN_MTU;

+       dev_info->default_txconf = (struct rte_eth_txconf) {
+               .tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+               .tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+       };
+
+       dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+               .nb_max = CPFL_MAX_RING_DESC,
+               .nb_min = CPFL_MIN_RING_DESC,
+               .nb_align = CPFL_ALIGN_RING_DESC,
+       };
+
        return 0;
 }

@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
        .dev_configure                  = cpfl_dev_configure,
        .dev_close                      = cpfl_dev_close,
+       .tx_queue_setup                 = cpfl_tx_queue_setup,
        .dev_infos_get                  = cpfl_dev_info_get,
        .link_update                    = cpfl_dev_link_update,
        .dev_supported_ptypes_get       = cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea4a2002bf..a9742379db 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
        cq->tx_ring_phys_addr = mz->iova;
        cq->compl_ring = mz->addr;
        cq->mz = mz;
-       reset_split_tx_complq(cq);
+       idpf_reset_split_tx_complq(cq);

        txq->complq = cq;

@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
                tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
        tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
                tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-       if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+       if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
                return -EINVAL;

        /* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,

        if (!is_splitq) {
                txq->tx_ring = mz->addr;
-               reset_single_tx_queue(txq);
+               idpf_reset_single_tx_queue(txq);
        } else {
                txq->desc_ring = mz->addr;
-               reset_split_tx_descq(txq);
+               idpf_reset_split_tx_descq(txq);

                /* Setup tx completion queue if split model */
                ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 106cc97e60..3ccee15703 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']

 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1