Message-ID: <06baf7d9-5bec-aa32-dc2d-9d1b5fe9922e@oktetlabs.ru>
Date: Fri, 5 Aug 2022 15:49:20 +0300
Subject: Re: [PATCH v5 05/12] net/nfp: add flower PF setup and mempool init logic
To: Chaoyong He , dev@dpdk.org
Cc: niklas.soderlund@corigine.com, Stephen Hemminger , Hemant Agrawal , Thomas Monjalon
References: <1659681155-16525-1-git-send-email-chaoyong.he@corigine.com> <1659681155-16525-6-git-send-email-chaoyong.he@corigine.com>
From: Andrew Rybchenko
In-Reply-To: <1659681155-16525-6-git-send-email-chaoyong.he@corigine.com>

@Thomas, @Stephen, @Hemant, please, see lines from OvS below.

On 8/5/22 09:32, Chaoyong He wrote:
> This commit adds the vNIC initialization logic for the flower PF vNIC.

"This commit adds" -> "Add"

> The flower firmware exposes this vNIC for the purposes of fallback
> traffic in the switchdev use-case. The logic of setting up this vNIC is
> similar to the logic seen in nfp_net_init() and nfp_net_start().
>
> This commit also adds minimal dev_ops for this PF device. Because the

same here

> device is being exposed externally to DPDK it should also be configured
> using DPDK helpers like rte_eth_configure(). For these helpers to work
> the flower logic needs to implements a minimal set of dev_ops. The Rx
> and Tx logic for this vNIC will be added in a subsequent commit.
>
> OVS expects incoming packets coming into the OVS datapath to be
> allocated from a mempool that contains objects of type "struct
> dp_packet". For the PF handling the slowpath into OVS it should
> use a mempool that is compatible with OVS. This commit adds the logic
> to create the OVS compatible mempool. It adds certain OVS specific
> structs to be able to instantiate the mempool.
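For the readers added in Cc who do not have OvS internals in mind: the "OVS compatible mempool" below works by sizing the mbuf private area so that every pool object can carry a full struct dp_packet. A rough sketch of the idea, using the names from the hunk further down (which additionally rounds the size up to a cache line):

    /* dp_packet embeds the rte_mbuf, so only the remainder goes to the private area */
    mbuf_priv_data_len = sizeof(struct dp_packet) - sizeof(struct rte_mbuf);
    pktmbuf_pool = rte_pktmbuf_pool_create("flower_pf_mbuf_pool", nb_mbufs,
                                           MEMPOOL_CACHE_SIZE, mbuf_priv_data_len,
                                           RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    rte_mempool_obj_iter(pktmbuf_pool, nfp_flower_pf_mp_init, NULL);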
> > Signed-off-by: Chaoyong He > Reviewed-by: Niklas Söderlund > --- > drivers/net/nfp/flower/nfp_flower.c | 384 ++++++++++++++++++++++++- > drivers/net/nfp/flower/nfp_flower.h | 9 + > drivers/net/nfp/flower/nfp_flower_ovs_compat.h | 145 ++++++++++ > drivers/net/nfp/nfp_common.h | 3 + > 4 files changed, 537 insertions(+), 4 deletions(-) > create mode 100644 drivers/net/nfp/flower/nfp_flower_ovs_compat.h > > diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c > index 1dddced..c05d4ca 100644 > --- a/drivers/net/nfp/flower/nfp_flower.c > +++ b/drivers/net/nfp/flower/nfp_flower.c > @@ -14,7 +14,35 @@ > #include "../nfp_logs.h" > #include "../nfp_ctrl.h" > #include "../nfp_cpp_bridge.h" > +#include "../nfp_rxtx.h" > +#include "../nfpcore/nfp_mip.h" > +#include "../nfpcore/nfp_rtsym.h" > +#include "../nfpcore/nfp_nsp.h" > #include "nfp_flower.h" > +#include "nfp_flower_ovs_compat.h" > + > +#define MAX_PKT_BURST 32 > +#define MEMPOOL_CACHE_SIZE 512 > +#define DEFAULT_FLBUF_SIZE 9216 > + > +/* > + * Simple dev ops functions for the flower PF. Because a rte_device is exposed > + * to DPDK the flower logic also makes use of helper functions like > + * rte_dev_configure() to set up the PF device. Stub functions are needed to > + * use these helper functions > + */ > +static int > +nfp_flower_pf_configure(__rte_unused struct rte_eth_dev *dev) > +{ > + return 0; > +} > + > +static const struct eth_dev_ops nfp_flower_pf_dev_ops = { > + .dev_configure = nfp_flower_pf_configure, > + > + /* Use the normal dev_infos_get functionality in the NFP PMD */ > + .dev_infos_get = nfp_net_infos_get, > +}; > > static struct rte_service_spec flower_services[NFP_FLOWER_SERVICE_MAX] = { > }; > @@ -49,6 +77,304 @@ > return ret; > } > > +static void > +nfp_flower_pf_mp_init(__rte_unused struct rte_mempool *mp, > + __rte_unused void *opaque_arg, > + void *_p, > + __rte_unused unsigned int i) > +{ > + struct dp_packet *pkt = _p; > + pkt->source = DPBUF_DPDK; > + pkt->l2_pad_size = 0; > + pkt->l2_5_ofs = UINT16_MAX; > + pkt->l3_ofs = UINT16_MAX; > + pkt->l4_ofs = UINT16_MAX; > + pkt->packet_type = 0; /* PT_ETH */ > +} > + > +static struct rte_mempool * > +nfp_flower_pf_mp_create(void) > +{ > + uint32_t nb_mbufs; > + uint32_t pkt_size; > + uint32_t n_rxd = 1024; > + uint32_t n_txd = 1024; > + unsigned int numa_node; > + uint32_t aligned_mbuf_size; > + uint32_t mbuf_priv_data_len; > + struct rte_mempool *pktmbuf_pool; > + > + nb_mbufs = RTE_MAX(n_rxd + n_txd + MAX_PKT_BURST + MEMPOOL_CACHE_SIZE, > + 81920U); > + > + /* > + * The size of the mbuf's private area (i.e. area that holds OvS' > + * dp_packet data) > + */ > + mbuf_priv_data_len = sizeof(struct dp_packet) - sizeof(struct rte_mbuf); > + /* The size of the entire dp_packet. */ > + pkt_size = sizeof(struct dp_packet) + RTE_MBUF_DEFAULT_BUF_SIZE; > + /* mbuf size, rounded up to cacheline size. 
*/ > + aligned_mbuf_size = ROUND_UP(pkt_size, RTE_CACHE_LINE_SIZE); > + mbuf_priv_data_len += (aligned_mbuf_size - pkt_size); > + > + numa_node = rte_socket_id(); > + pktmbuf_pool = rte_pktmbuf_pool_create("flower_pf_mbuf_pool", nb_mbufs, > + MEMPOOL_CACHE_SIZE, mbuf_priv_data_len, > + RTE_MBUF_DEFAULT_BUF_SIZE, numa_node); > + if (pktmbuf_pool == NULL) { > + RTE_LOG(ERR, PMD, "Cannot init mbuf pool\n"); > + return NULL; > + } > + > + rte_mempool_obj_iter(pktmbuf_pool, nfp_flower_pf_mp_init, NULL); > + > + return pktmbuf_pool; > +} > + > +static void > +nfp_flower_cleanup_pf_vnic(struct nfp_net_hw *hw) > +{ > + uint16_t i; > + struct rte_eth_dev *eth_dev; > + struct nfp_app_flower *app_flower; > + > + eth_dev = hw->eth_dev; > + app_flower = NFP_APP_PRIV_TO_APP_FLOWER(hw->pf_dev->app_priv); > + > + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) > + nfp_net_tx_queue_release(eth_dev, i); > + > + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) > + nfp_net_rx_queue_release(eth_dev, i); > + > + rte_free(eth_dev->data->mac_addrs); > + rte_mempool_free(app_flower->pf_pktmbuf_pool); > + rte_free(eth_dev->data->dev_private); > + rte_eth_dev_release_port(hw->eth_dev); > +} > + > +static int > +nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type) > +{ > + uint32_t start_q; > + uint64_t rx_bar_off; > + uint64_t tx_bar_off; > + const int stride = 4; > + struct nfp_pf_dev *pf_dev; > + struct rte_pci_device *pci_dev; > + > + pf_dev = hw->pf_dev; > + pci_dev = hw->pf_dev->pci_dev; > + > + /* NFP can not handle DMA addresses requiring more than 40 bits */ > + if (rte_mem_check_dma_mask(40)) { > + RTE_LOG(ERR, PMD, > + "device %s can not be used: restricted dma mask to 40 bits!\n", > + pci_dev->device.name); > + return -ENODEV; > + }; > + > + hw->device_id = pci_dev->id.device_id; > + hw->vendor_id = pci_dev->id.vendor_id; > + hw->subsystem_device_id = pci_dev->id.subsystem_device_id; > + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; > + > + PMD_INIT_LOG(DEBUG, "%s vNIC ctrl bar: %p", vnic_type, hw->ctrl_bar); > + > + /* Read the number of available rx/tx queues from hardware */ > + hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS); > + hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS); > + > + /* Work out where in the BAR the queues start */ > + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); > + tx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; > + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); > + rx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ; > + > + hw->tx_bar = pf_dev->hw_queues + tx_bar_off; > + hw->rx_bar = pf_dev->hw_queues + rx_bar_off; > + > + /* Get some of the read-only fields from the config BAR */ > + hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION); > + hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP); > + hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU); > + /* Set the current MTU to the maximum supported */ > + hw->mtu = hw->max_mtu; > + hw->flbufsz = DEFAULT_FLBUF_SIZE; > + > + /* read the Rx offset configured from firmware */ > + if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) > + hw->rx_offset = NFP_NET_RX_OFFSET; > + else > + hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR); > + > + hw->ctrl = 0; > + hw->stride_rx = stride; > + hw->stride_tx = stride; > + > + /* Reuse cfg queue setup function */ > + nfp_net_cfg_queue_setup(hw); > + > + PMD_INIT_LOG(INFO, "%s vNIC max_rx_queues: %u, max_tx_queues: %u", > + vnic_type, hw->max_rx_queues, hw->max_tx_queues); > + > + /* Initializing spinlock for 
reconfigs */ > + rte_spinlock_init(&hw->reconfig_lock); > + > + return 0; > +} > + > +static int > +nfp_flower_init_pf_vnic(struct nfp_net_hw *hw) > +{ > + int ret; > + uint16_t i; > + uint16_t n_txq; > + uint16_t n_rxq; > + uint16_t port_id; > + unsigned int numa_node; > + struct rte_mempool *mp; > + struct nfp_pf_dev *pf_dev; > + struct rte_eth_dev *eth_dev; > + struct nfp_app_flower *app_flower; > + > + const struct rte_eth_rxconf rx_conf = { static const ? > + .rx_free_thresh = DEFAULT_RX_FREE_THRESH, > + .rx_drop_en = 1, > + }; > + > + const struct rte_eth_txconf tx_conf = { static const ? > + .tx_thresh = { > + .pthresh = DEFAULT_TX_PTHRESH, > + .hthresh = DEFAULT_TX_HTHRESH, > + .wthresh = DEFAULT_TX_WTHRESH, > + }, > + .tx_free_thresh = DEFAULT_TX_FREE_THRESH, > + }; > + > + static struct rte_eth_conf port_conf = { I think it should be const as well > + .rxmode = { > + .mq_mode = RTE_ETH_MQ_RX_RSS, > + .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM, > + }, > + .txmode = { > + .mq_mode = RTE_ETH_MQ_TX_NONE, > + }, > + }; > + > + /* Set up some pointers here for ease of use */ > + pf_dev = hw->pf_dev; > + app_flower = NFP_APP_PRIV_TO_APP_FLOWER(pf_dev->app_priv); > + > + /* > + * Perform the "common" part of setting up a flower vNIC. > + * Mostly reading configuration from hardware. > + */ > + ret = nfp_flower_init_vnic_common(hw, "pf_vnic"); > + if (ret) Compare vs 0 > + goto done; > + > + hw->eth_dev = rte_eth_dev_allocate("pf_vnic_eth_dev"); Shoulnd't name mention 'nfp' ? > + if (hw->eth_dev == NULL) { > + ret = -ENOMEM; > + goto done; > + } > + > + /* Grab the pointer to the newly created rte_eth_dev here */ > + eth_dev = hw->eth_dev; > + > + numa_node = rte_socket_id(); > + eth_dev->data->dev_private = > + rte_zmalloc_socket("pf_vnic_eth_dev", sizeof(struct nfp_net_hw), > + RTE_CACHE_LINE_SIZE, numa_node); > + if (eth_dev->data->dev_private == NULL) { > + ret = -ENOMEM; > + goto port_release; > + } > + > + /* Fill in some of the eth_dev fields */ > + eth_dev->device = &pf_dev->pci_dev->device; > + eth_dev->data->nb_tx_queues = hw->max_tx_queues; > + eth_dev->data->nb_rx_queues = hw->max_rx_queues; Above two assignments look strange. It is rte_eth_dev_configure() job to do it. I think that these max values should be simply passed on configure. 
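Something along these lines is what I mean (untested sketch reusing the names already present in this function); rte_eth_dev_configure() then fills in data->nb_rx_queues/nb_tx_queues itself:

    ret = rte_eth_dev_configure(eth_dev->data->port_id,
                                hw->max_rx_queues, hw->max_tx_queues,
                                &port_conf);
    if (ret != 0) {
        PMD_INIT_LOG(ERR, "Could not configure PF device %d", ret);
        goto mac_cleanup;
    }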
> + eth_dev->data->dev_private = hw; > + > + /* Create a mbuf pool for the PF */ > + app_flower->pf_pktmbuf_pool = nfp_flower_pf_mp_create(); > + if (app_flower->pf_pktmbuf_pool == NULL) { > + ret = -ENOMEM; > + goto private_cleanup; > + } > + > + mp = app_flower->pf_pktmbuf_pool; > + > + /* Add Rx/Tx functions */ > + eth_dev->dev_ops = &nfp_flower_pf_dev_ops; > + > + /* PF vNIC gets a random MAC */ > + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", > + RTE_ETHER_ADDR_LEN, 0); > + if (eth_dev->data->mac_addrs == NULL) { > + ret = -ENOMEM; > + goto mempool_cleanup; > + } > + > + rte_eth_random_addr(eth_dev->data->mac_addrs->addr_bytes); > + rte_eth_dev_probing_finish(eth_dev); > + > + /* Configure the PF device now */ > + n_rxq = hw->eth_dev->data->nb_rx_queues; > + n_txq = hw->eth_dev->data->nb_tx_queues; > + port_id = hw->eth_dev->data->port_id; > + > + ret = rte_eth_dev_configure(port_id, n_rxq, n_txq, &port_conf); > + if (ret) { Compare vs 0 > + PMD_INIT_LOG(ERR, "Could not configure PF device %d", ret); > + goto mac_cleanup; > + } > + > + /* Set up the Rx queues */ > + for (i = 0; i < n_rxq; i++) { > + /* Hardcoded number of desc to 1024 */ > + ret = nfp_net_rx_queue_setup(eth_dev, i, 1024, numa_node, > + &rx_conf, mp); > + if (ret) { > + PMD_INIT_LOG(ERR, "Configure flower PF vNIC Rx queue %d failed", i); > + goto rx_queue_cleanup; > + } > + } > + > + /* Set up the Tx queues */ > + for (i = 0; i < n_txq; i++) { > + /* Hardcoded number of desc to 1024 */ > + ret = nfp_net_nfd3_tx_queue_setup(eth_dev, i, 1024, numa_node, > + &tx_conf); > + if (ret) { > + PMD_INIT_LOG(ERR, "Configure flower PF vNIC Tx queue %d failed", i); > + goto tx_queue_cleanup; > + } > + } > + > + return 0; > + > +tx_queue_cleanup: > + for (i = 0; i < n_txq; i++) > + nfp_net_tx_queue_release(eth_dev, i); > +rx_queue_cleanup: > + for (i = 0; i < n_rxq; i++) > + nfp_net_rx_queue_release(eth_dev, i); > +mac_cleanup: > + rte_free(eth_dev->data->mac_addrs); > +mempool_cleanup: > + rte_mempool_free(mp); > +private_cleanup: > + rte_free(eth_dev->data->dev_private); > +port_release: > + rte_eth_dev_release_port(hw->eth_dev); > +done: > + return ret; > +} > + > int > nfp_init_app_flower(struct nfp_pf_dev *pf_dev) > { > @@ -77,14 +403,49 @@ > goto app_cleanup; > } > > + /* Grab the number of physical ports present on hardware */ > + app_flower->nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp); > + if (app_flower->nfp_eth_table == NULL) { > + PMD_INIT_LOG(ERR, "error reading nfp ethernet table"); > + ret = -EIO; > + goto vnic_cleanup; > + } > + > + /* Map the PF ctrl bar */ > + pf_dev->ctrl_bar = nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_bar0", > + 32768, &pf_dev->ctrl_area); > + if (pf_dev->ctrl_bar == NULL) { > + PMD_INIT_LOG(ERR, "Cloud not map the PF vNIC ctrl bar"); > + ret = -ENODEV; > + goto eth_tbl_cleanup; > + } > + > + /* Fill in the PF vNIC and populate app struct */ > + app_flower->pf_hw = pf_hw; > + pf_hw->ctrl_bar = pf_dev->ctrl_bar; > + pf_hw->pf_dev = pf_dev; > + pf_hw->cpp = pf_dev->cpp; > + > + ret = nfp_flower_init_pf_vnic(app_flower->pf_hw); > + if (ret) { Compare vs 0 > + PMD_INIT_LOG(ERR, "Could not initialize flower PF vNIC"); > + goto pf_cpp_area_cleanup; > + } > + > /* Start up flower services */ > if (nfp_flower_enable_services(app_flower)) { > ret = -ESRCH; > - goto vnic_cleanup; > + goto pf_vnic_cleanup; > } > > return 0; > > +pf_vnic_cleanup: > + nfp_flower_cleanup_pf_vnic(app_flower->pf_hw); > +pf_cpp_area_cleanup: > + nfp_cpp_area_free(pf_dev->ctrl_area); > +eth_tbl_cleanup: > + 
free(app_flower->nfp_eth_table); > vnic_cleanup: > rte_free(pf_hw); > app_cleanup: > @@ -94,8 +455,23 @@ > } > > int > -nfp_secondary_init_app_flower(__rte_unused struct nfp_cpp *cpp) > +nfp_secondary_init_app_flower(struct nfp_cpp *cpp) > { > - PMD_INIT_LOG(ERR, "Flower firmware not supported"); > - return -ENOTSUP; > + struct rte_eth_dev *eth_dev; > + const char *port_name = "pf_vnic_eth_dev"; > + > + PMD_DRV_LOG(DEBUG, "Secondary attaching to port %s", port_name); > + > + eth_dev = rte_eth_dev_attach_secondary(port_name); > + if (eth_dev == NULL) { > + RTE_LOG(ERR, EAL, "secondary process attach failed, " > + "ethdev doesn't exist"); > + return -ENODEV; > + } > + > + eth_dev->process_private = cpp; > + eth_dev->dev_ops = &nfp_flower_pf_dev_ops; > + rte_eth_dev_probing_finish(eth_dev); > + > + return 0; > } > diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h > index 4a9b302..f6fd4eb 100644 > --- a/drivers/net/nfp/flower/nfp_flower.h > +++ b/drivers/net/nfp/flower/nfp_flower.h > @@ -14,6 +14,15 @@ enum nfp_flower_service { > struct nfp_app_flower { > /* List of rte_service ID's for the flower app */ > uint32_t flower_services_ids[NFP_FLOWER_SERVICE_MAX]; > + > + /* Pointer to a mempool for the PF vNIC */ > + struct rte_mempool *pf_pktmbuf_pool; > + > + /* Pointer to the PF vNIC */ > + struct nfp_net_hw *pf_hw; > + > + /* the eth table as reported by firmware */ > + struct nfp_eth_table *nfp_eth_table; > }; > > int nfp_init_app_flower(struct nfp_pf_dev *pf_dev); > diff --git a/drivers/net/nfp/flower/nfp_flower_ovs_compat.h b/drivers/net/nfp/flower/nfp_flower_ovs_compat.h > new file mode 100644 > index 0000000..f0fcbf2 > --- /dev/null > +++ b/drivers/net/nfp/flower/nfp_flower_ovs_compat.h > @@ -0,0 +1,145 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright (c) 2022 Corigine, Inc. > + * All rights reserved. > + */ > + > +#ifndef _NFP_FLOWER_OVS_COMPAT_H_ > +#define _NFP_FLOWER_OVS_COMPAT_H_ > + Below lines come from OvS and correspdonging file is licenced under Apache 2.0 licence. It is just few lines, but still. I'm not sure that it is OK to change the license and drop copyright. I'm adding more people in Cc to tell me if my concerns are wrong. @Thomas, @Stephen, @Hemant ? > +/* From ovs */ > +#define PAD_PASTE2(x, y) x##y > +#define PAD_PASTE(x, y) PAD_PASTE2(x, y) > +#define PAD_ID PAD_PASTE(pad, __COUNTER__) > + > +/* Returns X rounded up to the nearest multiple of Y. */ > +#define ROUND_UP(X, Y) (DIV_ROUND_UP(X, Y) * (Y)) > + > +typedef uint8_t OVS_CACHE_LINE_MARKER[1]; > + > +#ifndef __cplusplus > +#define PADDED_MEMBERS_CACHELINE_MARKER(UNIT, CACHELINE, MEMBERS) \ > + union { \ > + OVS_CACHE_LINE_MARKER CACHELINE; \ > + struct { MEMBERS }; \ > + uint8_t PAD_ID[ROUND_UP(sizeof(struct { MEMBERS }), UNIT)]; \ > + } > +#else > +#define PADDED_MEMBERS_CACHELINE_MARKER(UNIT, CACHELINE, MEMBERS) \ > + struct struct_##CACHELINE { MEMBERS }; \ Confused to see duplicate 'struct struct' above. 
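For reference, with UNIT == RTE_CACHE_LINE_SIZE and CACHELINE == cacheline0 this branch expands to roughly the following (padN stands for the __COUNTER__-generated name from PAD_ID):

    struct struct_cacheline0 { /* MEMBERS */ };
    union {
        OVS_CACHE_LINE_MARKER cacheline0;
        struct { /* MEMBERS */ };
        uint8_t padN[ROUND_UP(sizeof(struct struct_cacheline0), RTE_CACHE_LINE_SIZE)];
    };

i.e. the extra "struct_" is just the tag prefix, which is what produces the "struct struct" reading.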
> + union { \ > + OVS_CACHE_LINE_MARKER CACHELINE; \ > + struct { MEMBERS }; \ > + uint8_t PAD_ID[ROUND_UP(sizeof(struct struct_##CACHELINE), UNIT)]; \ > + } > +#endif > + > +struct ovs_key_ct_tuple_ipv4 { > + rte_be32_t ipv4_src; > + rte_be32_t ipv4_dst; > + rte_be16_t src_port; > + rte_be16_t dst_port; > + uint8_t ipv4_proto; > +}; > + > +struct ovs_key_ct_tuple_ipv6 { > + rte_be32_t ipv6_src[4]; > + rte_be32_t ipv6_dst[4]; > + rte_be16_t src_port; > + rte_be16_t dst_port; > + uint8_t ipv6_proto; > +}; > + > +/* Tunnel information used in flow key and metadata. */ > +struct flow_tnl { > + uint32_t ip_dst; > + struct in6_addr ipv6_dst; > + uint32_t ip_src; > + struct in6_addr ipv6_src; > + uint64_t tun_id; > + uint16_t flags; > + uint8_t ip_tos; > + uint8_t ip_ttl; > + uint16_t tp_src; > + uint16_t tp_dst; > + uint16_t gbp_id; > + uint8_t gbp_flags; > + uint8_t erspan_ver; > + uint32_t erspan_idx; > + uint8_t erspan_dir; > + uint8_t erspan_hwid; > + uint8_t gtpu_flags; > + uint8_t gtpu_msgtype; > + uint8_t pad1[4]; /* Pad to 64 bits. */ > +}; > + > +enum dp_packet_source { > + DPBUF_MALLOC, /* Obtained via malloc(). */ > + DPBUF_STACK, /* Un-movable stack space or static buffer. */ > + DPBUF_STUB, /* Starts on stack, may expand into heap. */ > + DPBUF_DPDK, /* buffer data is from DPDK allocated memory. */ > + DPBUF_AFXDP, /* Buffer data from XDP frame. */ > +}; > + > +/* Datapath packet metadata */ > +struct pkt_metadata { > +PADDED_MEMBERS_CACHELINE_MARKER(RTE_CACHE_LINE_SIZE, cacheline0, If it is DPDK-specific code why do you prefer to use such macros intead of approach used for rte_mbuf RTE_MARKER cacheline0; > + /* Recirculation id carried with the recirculating packets. */ > + uint32_t recirc_id; /* 0 for packets received from the wire. */ > + uint32_t dp_hash; /* hash value computed by the recirculation action. */ > + uint32_t skb_priority; /* Packet priority for QoS. */ > + uint32_t pkt_mark; /* Packet mark. */ > + uint8_t ct_state; /* Connection state. */ > + bool ct_orig_tuple_ipv6; > + uint16_t ct_zone; /* Connection zone. */ > + uint32_t ct_mark; /* Connection mark. */ > + uint32_t ct_label[4]; /* Connection label. */ > + uint32_t in_port; /* Input port. */ > + uint32_t orig_in_port; /* Originating in_port for tunneled packets */ > + void *conn; /* Cached conntrack connection. */ > + bool reply; /* True if reply direction. */ > + bool icmp_related; /* True if ICMP related. */ > +); > + > +PADDED_MEMBERS_CACHELINE_MARKER(RTE_CACHE_LINE_SIZE, cacheline1, > + union { /* Populated only for non-zero 'ct_state'. */ > + struct ovs_key_ct_tuple_ipv4 ipv4; > + struct ovs_key_ct_tuple_ipv6 ipv6; /* Used only if */ > + } ct_orig_tuple; /* 'ct_orig_tuple_ipv6' is set */ > +); > + > +/* > + * Encapsulating tunnel parameters. Note that if 'ip_dst' == 0, > + * the rest of the fields may be uninitialized. > + */ > +PADDED_MEMBERS_CACHELINE_MARKER(RTE_CACHE_LINE_SIZE, cacheline2, > + struct flow_tnl tunnel;); > +}; > + > +#define DP_PACKET_CONTEXT_SIZE 64 > + > +/* > + * Buffer for holding packet data. A dp_packet is automatically reallocated > + * as necessary if it grows too large for the available memory. > + * By default the packet type is set to Ethernet (PT_ETH). > + */ > +struct dp_packet { > + struct rte_mbuf mbuf; /* DPDK mbuf */ > + enum dp_packet_source source; /* Source of memory allocated as 'base'. */ > + > + /* > + * All the following elements of this struct are copied in a single call > + * of memcpy in dp_packet_clone_with_headroom. 
> + */ > + uint16_t l2_pad_size; /* Detected l2 padding size. Padding is non-pullable. */ > + uint16_t l2_5_ofs; /* MPLS label stack offset, or UINT16_MAX */ > + uint16_t l3_ofs; /* Network-level header offset, or UINT16_MAX. */ > + uint16_t l4_ofs; /* Transport-level header offset, or UINT16_MAX. */ > + uint32_t cutlen; /* length in bytes to cut from the end. */ > + uint32_t packet_type; /* Packet type as defined in OpenFlow */ > + union { > + struct pkt_metadata md; > + uint64_t data[DP_PACKET_CONTEXT_SIZE / 8]; > + }; > +}; > + > +#endif /* _NFP_FLOWER_OVS_COMPAT_ */ > diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h > index b28ebc9..ab2e5c2 100644 > --- a/drivers/net/nfp/nfp_common.h > +++ b/drivers/net/nfp/nfp_common.h > @@ -448,6 +448,9 @@ int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, > #define NFP_APP_PRIV_TO_APP_NIC(app_priv)\ > ((struct nfp_app_nic *)app_priv) > > +#define NFP_APP_PRIV_TO_APP_FLOWER(app_priv)\ > + ((struct nfp_app_flower *)app_priv) > + Same as NFP_APP_PRIV_TO_APP_NIC it is better to make it a tiny function, use struct nfp_pf_dev pointer as input and validate app_id before type cast. > #endif /* _NFP_COMMON_H_ */ > /* > * Local variables: