From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Pavan Nikhilesh, Shijith Thotton
CC: dev@dpdk.org
Date: Sat, 6 Mar 2021 21:59:40 +0530
Message-ID: <20210306162942.6845-36-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210306162942.6845-1-pbhagavatula@marvell.com>
References: <20210306162942.6845-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH 35/36] event/cnxk: add Tx adapter support

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add support for event eth Tx adapter.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/eventdevs/cnxk.rst            |   4 +-
 drivers/event/cnxk/cn10k_eventdev.c      |  90 +++++++++++++++++
 drivers/event/cnxk/cn9k_eventdev.c       | 117 +++++++++++++++++++++++
 drivers/event/cnxk/cnxk_eventdev.h       |  22 ++++-
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 106 ++++++++++++++++++++
 5 files changed, 335 insertions(+), 4 deletions(-)
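For reference, a minimal application-side sketch of how this adapter is
consumed (not part of the patch; the adapter id, port ids, port_conf and
the target Tx queue below are illustrative assumptions):

#include <rte_event_eth_tx_adapter.h>

/* Control path: bind every Tx queue of eth_port_id to Tx adapter 0. */
static int
app_tx_adapter_setup(uint8_t evdev_id, uint16_t eth_port_id,
		     struct rte_event_port_conf *port_conf)
{
	uint32_t caps = 0;
	int rc;

	rc = rte_event_eth_tx_adapter_caps_get(evdev_id, eth_port_id, &caps);
	if (rc)
		return rc;
	/* cn9k/cn10k report RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT,
	 * so no service core is needed to drive the adapter.
	 */
	rc = rte_event_eth_tx_adapter_create(0, evdev_id, port_conf);
	if (rc)
		return rc;
	rc = rte_event_eth_tx_adapter_queue_add(0, eth_port_id, -1);
	if (rc)
		return rc;
	return rte_event_eth_tx_adapter_start(0);
}

/* Fast path: with the internal port capability the PMD transmits the
 * mbuf directly; mbuf->port selects the ethdev, txq_set() the queue.
 */
static inline void
app_worker_tx(uint8_t evdev_id, uint8_t ev_port, struct rte_event *ev)
{
	rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
	while (rte_event_eth_tx_adapter_enqueue(evdev_id, ev_port, ev, 1,
						0) != 1)
		;
}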
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index abab7f742..0f916ff5c 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,9 @@ Features of the OCTEON CNXK SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Full Rx offload support defined through ethdev queue configuration.
+- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+  capability while maintaining receive packet order.
+- Full Rx/Tx offload support defined through ethdev queue configuration.
 
 Prerequisites and Compilation procedure
 ---------------------------------------
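On the DEV_TX_OFFLOAD_MT_LOCKFREE bullet above: a hedged sketch of how an
application opts into that offload when the PMD advertises it (port_id and
conf are illustrative; later DPDK releases rename the flag to
RTE_ETH_TX_OFFLOAD_MT_LOCKFREE):

/* Enable MT-lockfree Tx only if the ethdev reports the capability. */
struct rte_eth_dev_info dev_info;
struct rte_eth_conf conf = { 0 };

rte_eth_dev_info_get(port_id, &dev_info);
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE)
	conf.txmode.offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
/* ...then rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf); */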
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 70c6fedae..3662fd720 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -243,6 +243,39 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
 	return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
 }
 
+static int
+cn10k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	int i;
+
+	if (dev->tx_adptr_data == NULL)
+		return 0;
+
+	for (i = 0; i < dev->nb_event_ports; i++) {
+		struct cn10k_sso_hws *ws = event_dev->data->ports[i];
+		void *ws_cookie;
+
+		ws_cookie = cnxk_sso_hws_get_cookie(ws);
+		ws_cookie = rte_realloc_socket(
+			ws_cookie,
+			sizeof(struct cnxk_sso_hws_cookie) +
+				sizeof(struct cn10k_sso_hws) +
+				(sizeof(uint64_t) * (dev->max_port_id + 1) *
+				 RTE_MAX_QUEUES_PER_PORT),
+			RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+		if (ws_cookie == NULL)
+			return -ENOMEM;
+		ws = RTE_PTR_ADD(ws_cookie, sizeof(struct cnxk_sso_hws_cookie));
+		memcpy(&ws->tx_adptr_data, dev->tx_adptr_data,
+		       sizeof(uint64_t) * (dev->max_port_id + 1) *
+			       RTE_MAX_QUEUES_PER_PORT);
+		event_dev->data->ports[i] = ws;
+	}
+
+	return 0;
+}
+
 static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
@@ -482,6 +515,10 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
 {
 	int rc;
 
+	rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+	if (rc < 0)
+		return rc;
+
 	rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
 			    cn10k_sso_hws_flush_events);
 	if (rc < 0)
@@ -580,6 +617,55 @@ cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 	return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
 }
 
+static int
+cn10k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+			      const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+	int ret;
+
+	RTE_SET_USED(dev);
+	ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+	if (ret)
+		*caps = 0;
+	else
+		*caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+	return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
+			       const struct rte_eth_dev *eth_dev,
+			       int32_t tx_queue_id)
+{
+	int rc;
+
+	RTE_SET_USED(id);
+	rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id);
+	if (rc < 0)
+		return rc;
+	rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+	if (rc < 0)
+		return rc;
+	cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+	return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
+			       const struct rte_eth_dev *eth_dev,
+			       int32_t tx_queue_id)
+{
+	int rc;
+
+	RTE_SET_USED(id);
+	rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
+	if (rc < 0)
+		return rc;
+	return cn10k_sso_updt_tx_adptr_data(event_dev);
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
@@ -599,6 +685,10 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
 	.eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
 	.eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
 
+	.eth_tx_adapter_caps_get = cn10k_sso_tx_adapter_caps_get,
+	.eth_tx_adapter_queue_add = cn10k_sso_tx_adapter_queue_add,
+	.eth_tx_adapter_queue_del = cn10k_sso_tx_adapter_queue_del,
+
 	.timer_adapter_caps_get = cnxk_tim_caps_get,
 
 	.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e4383dca1..33b3b6237 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -248,6 +248,66 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
 	return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
 }
 
+static int
+cn9k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	int i;
+
+	if (dev->tx_adptr_data == NULL)
+		return 0;
+
+	for (i = 0; i < dev->nb_event_ports; i++) {
+		if (dev->dual_ws) {
+			struct cn9k_sso_hws_dual *dws =
+				event_dev->data->ports[i];
+			void *ws_cookie;
+
+			ws_cookie = cnxk_sso_hws_get_cookie(dws);
+			ws_cookie = rte_realloc_socket(
+				ws_cookie,
+				sizeof(struct cnxk_sso_hws_cookie) +
+					sizeof(struct cn9k_sso_hws_dual) +
+					(sizeof(uint64_t) *
+					 (dev->max_port_id + 1) *
+					 RTE_MAX_QUEUES_PER_PORT),
+				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+			if (ws_cookie == NULL)
+				return -ENOMEM;
+			dws = RTE_PTR_ADD(ws_cookie,
+					  sizeof(struct cnxk_sso_hws_cookie));
+			memcpy(&dws->tx_adptr_data, dev->tx_adptr_data,
+			       sizeof(uint64_t) * (dev->max_port_id + 1) *
+				       RTE_MAX_QUEUES_PER_PORT);
+			event_dev->data->ports[i] = dws;
+		} else {
+			struct cn9k_sso_hws *ws = event_dev->data->ports[i];
+			void *ws_cookie;
+
+			ws_cookie = cnxk_sso_hws_get_cookie(ws);
+			ws_cookie = rte_realloc_socket(
+				ws_cookie,
+				sizeof(struct cnxk_sso_hws_cookie) +
+					sizeof(struct cn9k_sso_hws) +
+					(sizeof(uint64_t) *
+					 (dev->max_port_id + 1) *
+					 RTE_MAX_QUEUES_PER_PORT),
+				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+			if (ws_cookie == NULL)
+				return -ENOMEM;
+			ws = RTE_PTR_ADD(ws_cookie,
+					 sizeof(struct cnxk_sso_hws_cookie));
+			memcpy(&ws->tx_adptr_data, dev->tx_adptr_data,
+			       sizeof(uint64_t) * (dev->max_port_id + 1) *
+				       RTE_MAX_QUEUES_PER_PORT);
+			event_dev->data->ports[i] = ws;
+		}
+	}
+	rte_mb();
+
+	return 0;
+}
+
 static void
 cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
@@ -683,6 +743,10 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
 {
 	int rc;
 
+	rc = cn9k_sso_updt_tx_adptr_data(event_dev);
+	if (rc < 0)
+		return rc;
+
 	rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
 			    cn9k_sso_hws_flush_events);
 	if (rc < 0)
@@ -787,6 +851,55 @@ cn9k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 	return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
 }
 
+static int
+cn9k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+			     const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+	int ret;
+
+	RTE_SET_USED(dev);
+	ret = strncmp(eth_dev->device->driver->name, "net_cn9k", 8);
+	if (ret)
+		*caps = 0;
+	else
+		*caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+	return 0;
+}
+
+static int
+cn9k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	int rc;
+
+	RTE_SET_USED(id);
+	rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id);
+	if (rc < 0)
+		return rc;
+	rc = cn9k_sso_updt_tx_adptr_data(event_dev);
+	if (rc < 0)
+		return rc;
+	cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+	return 0;
+}
+
+static int
+cn9k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	int rc;
+
+	RTE_SET_USED(id);
+	rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
+	if (rc < 0)
+		return rc;
+	return cn9k_sso_updt_tx_adptr_data(event_dev);
+}
+
 static struct rte_eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
@@ -806,6 +919,10 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
 	.eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
 	.eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
 
+	.eth_tx_adapter_caps_get = cn9k_sso_tx_adapter_caps_get,
+	.eth_tx_adapter_queue_add = cn9k_sso_tx_adapter_queue_add,
+	.eth_tx_adapter_queue_del = cn9k_sso_tx_adapter_queue_del,
+
 	.timer_adapter_caps_get = cnxk_tim_caps_get,
 
 	.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9c3331f7e..59c1af98e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -8,6 +8,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <rte_event_eth_tx_adapter.h>
 #include <...>
 #include <...>
 #include <...>
@@ -84,9 +85,12 @@
 	rte_iova_t fc_iova;
 	struct rte_mempool *xaq_pool;
 	uint64_t rx_offloads;
+	uint64_t tx_offloads;
 	uint64_t adptr_xae_cnt;
 	uint16_t rx_adptr_pool_cnt;
 	uint64_t *rx_adptr_pools;
+	uint64_t *tx_adptr_data;
+	uint16_t max_port_id;
 	uint16_t tim_adptr_ring_cnt;
 	uint16_t *timer_adptr_rings;
 	uint64_t *timer_adptr_sz;
@@ -121,8 +125,10 @@
 	uint64_t xaq_lmt __rte_cache_aligned;
 	uint64_t *fc_mem;
 	uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
-	uint64_t base;
+	/* Tx Fastpath data */
+	uint64_t base __rte_cache_aligned;
 	uintptr_t lmt_base;
+	uint8_t tx_adptr_data[];
 } __rte_cache_aligned;
 
 /* CN9K HWS ops */
@@ -145,7 +151,9 @@
 	uint64_t xaq_lmt __rte_cache_aligned;
 	uint64_t *fc_mem;
 	uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
-	uint64_t base;
+	/* Tx Fastpath data */
+	uint64_t base __rte_cache_aligned;
+	uint8_t tx_adptr_data[];
 } __rte_cache_aligned;
 
 struct cn9k_sso_hws_state {
@@ -163,7 +171,9 @@
 	uint64_t xaq_lmt __rte_cache_aligned;
 	uint64_t *fc_mem;
 	uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
-	uint64_t base[2];
+	/* Tx Fastpath data */
+	uint64_t base[2] __rte_cache_aligned;
+	uint8_t tx_adptr_data[];
 } __rte_cache_aligned;
 
 struct cnxk_sso_hws_cookie {
@@ -255,5 +265,11 @@ int cnxk_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
 			      const struct rte_eth_dev *eth_dev);
 int cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
 			     const struct rte_eth_dev *eth_dev);
+int cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t tx_queue_id);
+int cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t tx_queue_id);
 
 #endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index e06033117..af44f63f9 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -5,6 +5,8 @@
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 
+#define CNXK_SSO_SQB_LIMIT (0x180)
+
 void
 cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
 		      uint32_t event_type)
@@ -222,3 +224,107 @@ cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
 
 	return 0;
 }
+
+static int
+cnxk_sso_sqb_aura_limit_edit(struct roc_nix_sq *sq, uint16_t nb_sqb_bufs)
+{
+	uint16_t sqb_limit;
+
+	sqb_limit = RTE_MIN(nb_sqb_bufs, sq->nb_sqb_bufs);
+	return roc_npa_aura_limit_modify(sq->aura_handle, sqb_limit);
+}
+
+static int
+cnxk_sso_updt_tx_queue_data(const struct rte_eventdev *event_dev,
+			    uint16_t eth_port_id, uint16_t tx_queue_id,
+			    void *txq)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint16_t max_port_id = dev->max_port_id;
+	uint64_t *txq_data = dev->tx_adptr_data;
+
+	if (txq_data == NULL || eth_port_id > max_port_id) {
+		max_port_id = RTE_MAX(max_port_id, eth_port_id);
+		txq_data = rte_realloc_socket(
+			txq_data,
+			(sizeof(uint64_t) * (max_port_id + 1) *
+			 RTE_MAX_QUEUES_PER_PORT),
+			RTE_CACHE_LINE_SIZE, event_dev->data->socket_id);
+		if (txq_data == NULL)
+			return -ENOMEM;
+	}
+
+	((uint64_t(*)[RTE_MAX_QUEUES_PER_PORT])
+		 txq_data)[eth_port_id][tx_queue_id] = (uint64_t)txq;
+	dev->max_port_id = max_port_id;
+	dev->tx_adptr_data = txq_data;
+	return 0;
+}
+
+int
+cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct roc_nix_sq *sq;
+	int i, ret;
+	void *txq;
+
+	if (tx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			txq = eth_dev->data->tx_queues[i];
+			sq = &cnxk_eth_dev->sqs[i];
+			cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT);
+			ret = cnxk_sso_updt_tx_queue_data(
+				event_dev, eth_dev->data->port_id, i, txq);
+			if (ret < 0)
+				return ret;
+		}
+	} else {
+		txq = eth_dev->data->tx_queues[tx_queue_id];
+		sq = &cnxk_eth_dev->sqs[tx_queue_id];
+		cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT);
+		ret = cnxk_sso_updt_tx_queue_data(
+			event_dev, eth_dev->data->port_id, tx_queue_id, txq);
+		if (ret < 0)
+			return ret;
+	}
+
+	dev->tx_offloads |= cnxk_eth_dev->tx_offload_flags;
+
+	return 0;
+}
+
+int
+cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+	struct roc_nix_sq *sq;
+	int i, ret;
+
+	RTE_SET_USED(event_dev);
+	if (tx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			sq = &cnxk_eth_dev->sqs[i];
+			cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
+			ret = cnxk_sso_updt_tx_queue_data(
+				event_dev, eth_dev->data->port_id, i,
+				NULL);
+			if (ret < 0)
+				return ret;
+		}
+	} else {
+		sq = &cnxk_eth_dev->sqs[tx_queue_id];
+		cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
+		ret = cnxk_sso_updt_tx_queue_data(
+			event_dev, eth_dev->data->port_id, tx_queue_id, NULL);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
-- 
2.17.1