From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Sachin Saxena (OSS)"
To: Apeksha Gupta, ferruh.yigit@intel.com, dev@dpdk.org
Cc: hemant.agrawal@nxp.com
Date: Mon, 5 Jul 2021 14:18:29 +0530
Message-ID: <07bd79f5-e2c0-3c8a-6570-bf6fbe5db54d@oss.nxp.com>
In-Reply-To: <20210430043424.19752-5-apeksha.gupta@nxp.com>
References: <20210430043424.19752-1-apeksha.gupta@nxp.com> <20210430043424.19752-5-apeksha.gupta@nxp.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support
List-Id: DPDK patches and discussions

On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> This patch supported checksum offloads and add burst enqueue and
> dequeue operations to the enetfec PMD.
>
> Loopback mode is added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
>
> Signed-off-by: Sachin Saxena
> Signed-off-by: Apeksha Gupta
> ---
>  doc/guides/nics/enetfec.rst          |   4 +
>  doc/guides/nics/features/enetfec.ini |   5 +
>  drivers/net/enetfec/enet_ethdev.c    | 212 +++++++++++-
>  drivers/net/enetfec/enet_rxtx.c      | 499 +++++++++++++++++++++++++++
>  4 files changed, 719 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/enetfec/enet_rxtx.c
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> index 10f495fb9..adbb52392 100644
> --- a/doc/guides/nics/enetfec.rst
> +++ b/doc/guides/nics/enetfec.rst
> @@ -75,6 +75,10 @@ ENETFEC driver.
> ENETFEC Features
> ~~~~~~~~~~~~~~~~~
>
> +- Basic stats
> +- Promiscuous
> +- VLAN offload
> +- L3/L4 checksum offload

As suggested by Andrew, we should add features in separate patch/es.

> - ARMv8
>
> Supported ENETFEC SoCs
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> index 570069798..fcc217773 100644
> --- a/doc/guides/nics/features/enetfec.ini
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -4,5 +4,10 @@
> ; Refer to default.ini for the full list of available PMD features.
> ;
> [Features]
> +Basic stats = Y
> +Promiscuous mode = Y
> +VLAN offload = Y
> +L3 checksum offload = Y
> +L4 checksum offload = Y
> ARMv8 = Y
> Usage doc = Y
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index b4816179a..ca2cf929f 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -46,6 +46,9 @@
> #define ENET_ENET_OPD_V 0xFFF0
> #define ENET_MDIO_PM_TIMEOUT 100 /* ms */
>
> +/* Extended buffer descriptor */
> +#define ENETFEC_EXTENDED_BD 0
> +
> int enetfec_logtype_pmd;
>
> /* Supported Rx offloads */
> @@ -61,6 +64,50 @@ static uint64_t dev_tx_offloads_sup =
> DEV_TX_OFFLOAD_UDP_CKSUM |
> DEV_TX_OFFLOAD_TCP_CKSUM;
>
> +static void enet_free_buffers(struct rte_eth_dev *dev)

This should be part of previous patch 3/4.
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i, q;
> + struct rte_mbuf *mbuf;
> + struct bufdesc *bdp;
> + struct enetfec_priv_rx_q *rxq;
> + struct enetfec_priv_tx_q *txq;
> +
> + for (q = 0; q < dev->data->nb_rx_queues; q++) {
> + rxq = fep->rx_queues[q];
> + bdp = rxq->bd.base;
> + for (i = 0; i < rxq->bd.ring_size; i++) {
> + mbuf = rxq->rx_mbuf[i];
> + rxq->rx_mbuf[i] = NULL;
> + if (mbuf)

Compare vs NULL

> + rte_pktmbuf_free(mbuf);
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> + }
> +
> + for (q = 0; q < dev->data->nb_tx_queues; q++) {
> + txq = fep->tx_queues[q];
> + bdp = txq->bd.base;
> + for (i = 0; i < txq->bd.ring_size; i++) {
> + mbuf = txq->tx_mbuf[i];
> + txq->tx_mbuf[i] = NULL;
> + if (mbuf)

Compare vs NULL

> + rte_pktmbuf_free(mbuf);
> + }
> + }
> +}
> +
> +static void enet_free_queue(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++)
> + rte_free(fep->rx_queues[i]);
> + for (i = 0; i < dev->data->nb_tx_queues; i++)
> + rte_free(fep->rx_queues[i]);
> +}
> +
> /*
> * This function is called to start or restart the FEC during a link
> * change, transmit timeout or to reconfigure the FEC.
> The network
> @@ -189,7 +236,6 @@ enetfec_eth_open(struct rte_eth_dev *dev)
> return 0;
> }
>
> -
> static int
> enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
> {
> @@ -395,12 +441,137 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> return -1;
> }
>
> +static int
> +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t tmp;
> +
> + tmp = rte_read32(fep->hw_baseaddr_v + ENET_RCR);
> + tmp |= 0x8;
> + tmp &= ~0x2;
> + rte_write32(tmp, fep->hw_baseaddr_v + ENET_RCR);
> +
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_link_update(__rte_unused struct rte_eth_dev *dev,
> + __rte_unused int wait_to_complete)
> +{
> + return 0;
> +}
> +

Remove unimplemented functions.

> +static int
> +enetfec_stats_get(struct rte_eth_dev *dev,
> + struct rte_eth_stats *stats)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_stats *eth_stats = &fep->stats;
> +
> + if (stats == NULL)
> + return -1;
> +
> + memset(stats, 0, sizeof(struct rte_eth_stats));
> +
> + stats->ipackets = eth_stats->ipackets;
> + stats->ibytes = eth_stats->ibytes;
> + stats->ierrors = eth_stats->ierrors;
> + stats->opackets = eth_stats->opackets;
> + stats->obytes = eth_stats->obytes;
> + stats->oerrors = eth_stats->oerrors;
> +
> + return 0;
> +}
> +
> +static void
> +enetfec_stop(__rte_unused struct rte_eth_dev *dev)
> +{
> +/*TODO*/
> +}

Remove unimplemented functions.

> +
> +static int
> +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
> +{
> + /* phy_stop(ndev->phydev); */
> + enetfec_stop(dev);

Not supported as of now.

> + /* phy_disconnect(ndev->phydev); */
> +
> + enet_free_buffers(dev);

This should be part of previous patch 3/4.
> + return 0;
> +}
> +
> +static uint16_t
> +enetfec_dummy_xmit_pkts(__rte_unused void *tx_queue,
> + __rte_unused struct rte_mbuf **tx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + return 0;
> +}
> +
> +static uint16_t
> +enetfec_dummy_recv_pkts(__rte_unused void *rxq,
> + __rte_unused struct rte_mbuf **rx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
> +{
> + dev->rx_pkt_burst = &enetfec_dummy_recv_pkts;
> + dev->tx_pkt_burst = &enetfec_dummy_xmit_pkts;
> +
> + return 0;
> +}
> +
> +static int
> +enetfec_multicast_enable(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> +
> + rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GAUR);
> + rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GALR);
> + dev->data->all_multicast = 1;
> +
> + rte_write32(0x04400002, fep->hw_baseaddr_v + ENET_GAUR);
> + rte_write32(0x10800049, fep->hw_baseaddr_v + ENET_GALR);
> +
> + return 0;
> +}

Should be part of features addition patch.

> +
> +/* Set a MAC change in hardware.
> + */
> +static int
> +enetfec_set_mac_address(struct rte_eth_dev *dev,
> + struct rte_ether_addr *addr)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> +
> + writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
> + (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
> + fep->hw_baseaddr_v + ENET_PALR);
> + writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
> + fep->hw_baseaddr_v + ENET_PAUR);
> +
> + rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
> +
> + return 0;
> +}
> +
> static const struct eth_dev_ops ops = {
> .dev_start = enetfec_eth_open,
> + .dev_stop = enetfec_eth_stop,
> + .dev_close = enetfec_eth_close,
> .dev_configure = enetfec_eth_configure,
> .dev_infos_get = enetfec_eth_info,
> .rx_queue_setup = enetfec_rx_queue_setup,
> .tx_queue_setup = enetfec_tx_queue_setup,
> + .link_update = enetfec_eth_link_update,

Not supported as of now.

> + .mac_addr_set = enetfec_set_mac_address,
> + .stats_get = enetfec_stats_get,
> + .promiscuous_enable = enetfec_promiscuous_enable,

Please implement enetfec_promiscuous_disable() as well.

> + .allmulticast_enable = enetfec_multicast_enable
> };
>
> static int
> @@ -434,6 +605,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> struct enetfec_private *fep;
> const char *name;
> int rc = -1;
> + struct rte_ether_addr macaddr;
> int i;
> unsigned int bdsize;
>
> @@ -474,12 +646,37 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> fep->bd_addr_p = fep->bd_addr_p + bdsize;
> }
>
> + /* Copy the station address into the dev structure, */
> + dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
> + if (dev->data->mac_addrs == NULL) {
> + ENET_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
> + ETHER_ADDR_LEN);
> + rc = -ENOMEM;
> + goto err;
> + }
> +
> + /* TODO get mac address from device tree or get random addr.
> + * Currently setting default as 1,1,1,1,1,1
> + */
> + macaddr.addr_bytes[0] = 1;
> + macaddr.addr_bytes[1] = 1;
> + macaddr.addr_bytes[2] = 1;
> + macaddr.addr_bytes[3] = 1;
> + macaddr.addr_bytes[4] = 1;
> + macaddr.addr_bytes[5] = 1;
> +
> + enetfec_set_mac_address(dev, &macaddr);
> + /* enable the extended buffer mode */
> + fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
> +
> rc = enetfec_eth_init(dev);
> if (rc)
> goto failed_init;
> return 0;
> failed_init:
> ENET_PMD_ERR("Failed to init");
> +err:
> + rte_eth_dev_release_port(dev);
> return rc;
> }
>
> @@ -487,15 +684,28 @@ static int
> pmd_enetfec_remove(struct rte_vdev_device *vdev)
> {
> struct rte_eth_dev *eth_dev = NULL;
> + struct enetfec_private *fep;
> + struct enetfec_priv_rx_q *rxq;
>
> /* find the ethdev entry */
> eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> if (!eth_dev)
> return -ENODEV;
>
> + fep = eth_dev->data->dev_private;
> + /* Free descriptor base of first RX queue as it was configured
> + * first in enetfec_eth_init().
> + */
> + rxq = fep->rx_queues[0];

Although we support only one queue as of now, the code should be generic enough to handle multiple queues in the future.

> + rte_free(rxq->bd.base);

Do we need similar handling for TX queues?

> + enet_free_queue(eth_dev);
> +
> + enetfec_eth_stop(eth_dev);
> rte_eth_dev_release_port(eth_dev);
>
> ENET_PMD_INFO("Closing sw device\n");
> + munmap(fep->hw_baseaddr_v, fep->cbus_size);
> +
> + return 0;
> }
>
> diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
> new file mode 100644
> index 000000000..1b9b86c35
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_rxtx.c
> @@ -0,0 +1,499 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020 NXP
> + */

Please update Copyright syntax to current year.
> +
> +#include
> +#include
> +#include
> +#include "enet_regs.h"
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_LOOPBACK 0
> +#define ENETFEC_DUMP 0
> +
> +static volatile bool lb_quit;
> +
> +#if ENETFEC_DUMP
> +static void
> +enet_dump(struct enetfec_priv_tx_q *txq)

Dump functions defined but never used. Either please try to use them at appropriate places or remove.

> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENET_PMD_DEBUG("TX ring dump\n");
> + ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = txq->bd.base;
> + do {
> + ENET_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == txq->bd.cur ? 'S' : ' ',
> + bdp == txq->dirty_tx ? 'H' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + txq->tx_mbuf[index]);
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + index++;
> + } while (bdp != txq->bd.base);
> +}
> +
> +static void
> +enet_dump_rx(struct enetfec_priv_rx_q *rxq)

Unused.

> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENET_PMD_DEBUG("RX ring dump\n");
> + ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = rxq->bd.base;
> + do {
> + ENET_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == rxq->bd.cur ?
> + 'S' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + rxq->rx_mbuf[index]);
> + rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
> + rxq->rx_mbuf[index]->pkt_len);
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + index++;
> + } while (bdp != rxq->bd.base);
> +}
> +#endif
> +
> +#if ENETFEC_LOOPBACK
> +static void fec_signal_handler(int signum)
> +{
> + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> + printf("\n\n %s: Signal %d received, preparing to exit...\n",
> + __func__, signum);
> + lb_quit = true;
> + }
> +}
> +
> +static void
> +enetfec_lb_rxtx(void *rxq1)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
> + struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len = 0;
> + int index_r = 0, index_t = 0;
> + u8 *data;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + unsigned int i;
> + struct enetfec_private *fep;
> + struct enetfec_priv_tx_q *txq;
> + fep = rxq->fep->dev->data->dev_private;
> + txq = fep->tx_queues[0];
> +
> + pool = rxq->pool;
> + rx_bdp = rxq->bd.cur;
> + tx_bdp = txq->bd.cur;
> +
> + signal(SIGTSTP, fec_signal_handler);
> + while (!lb_quit) {

Compare vs NULL

> +chk_again:
> + status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
> + if (status & RX_BD_EMPTY) {
> + if (!lb_quit)
> + goto chk_again;
> + rxq->bd.cur = rx_bdp;
> + txq->bd.cur = tx_bdp;
> + return;
> + }
> +
> + /* Check for errors.
> + */
> + status ^= RX_BD_LAST;
> + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> + RX_BD_TR)) {
> + stats->ierrors++;
> + if (status & RX_BD_OV) {
> + /* FIFO overrun */
> + ENET_PMD_ERR("rx_fifo_error\n");
> + goto rx_processing_done;
> + }
> + if (status & (RX_BD_LG | RX_BD_SH
> + | RX_BD_LAST)) {
> + /* Frame too long or too short. */
> + ENET_PMD_ERR("rx_length_error\n");
> + if (status & RX_BD_LAST)
> + ENET_PMD_ERR("rcv is not +last\n");
> + }
> + /* CRC Error */
> + if (status & RX_BD_CR)
> + ENET_PMD_ERR("rx_crc_errors\n");
> +
> + /* Report late collisions as a frame error. */
> + if (status & (RX_BD_NO | RX_BD_TR))
> + ENET_PMD_ERR("rx_frame_error\n");
> + mbuf = NULL;
> + goto rx_processing_done;
> + }
> +
> + new_mbuf = rte_pktmbuf_alloc(pool);
> + if (unlikely(!new_mbuf)) {

Compare vs NULL

> + stats->ierrors++;
> + break;
> + }
> + /* Process the incoming frame. */
> + pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
> +
> + /* shows data with respect to the data_off field. */
> + index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
> + mbuf = rxq->rx_mbuf[index_r];
> +
> + /* adjust pkt_len */
> + rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
> + if (rxq->fep->quirks & QUIRK_RACC)
> + rte_pktmbuf_adj(mbuf, 2);
> +
> + /* Replace Buffer in BD */
> + rxq->rx_mbuf[index_r] = new_mbuf;
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
> + &rx_bdp->bd_bufaddr);
> +
> +rx_processing_done:
> + /* when rx_processing_done clear the status flags
> + * for this buffer
> + */
> + status &= ~RX_BD_STATS;
> +
> + /* Mark the buffer empty */
> + status |= RX_BD_EMPTY;
> +
> + /* Make sure the updates to rest of the descriptor are
> + * performed before transferring ownership.
> + */
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
> +
> + /* Update BD pointer to next entry */
> + rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
> +
> + /* Doing this here will keep the FEC running while we process
> + * incoming frames.
> + */
> + rte_write32(0, rxq->bd.active_reg_desc);
> +
> + /* TX begins: First clean the ring then process packet */
> + index_t = enet_get_bd_index(tx_bdp, &txq->bd);
> + status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
> + if (status & TX_BD_READY)
> + stats->oerrors++;
> + break;
> + if (txq->tx_mbuf[index_t]) {
> + rte_pktmbuf_free(txq->tx_mbuf[index_t]);
> + txq->tx_mbuf[index_t] = NULL;
> + }
> +
> + if (mbuf == NULL)
> + continue;
> +
> + /* Fill in a Tx ring entry */
> + status &= ~TX_BD_STATS;
> +
> + /* Set buffer length and buffer pointer */
> + pkt_len = rte_pktmbuf_pkt_len(mbuf);
> + status |= (TX_BD_LAST);
> + data = rte_pktmbuf_mtod(mbuf, void *);
> +
> + for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
> + dcbf(data + i);
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &tx_bdp->bd_bufaddr);
> + rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
> +
> + /* Make sure the updates to rest of the descriptor are performed
> + * before transferring ownership.
> + */
> + status |= (TX_BD_READY | TX_BD_TC);
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
> +
> + /* Trigger transmission start */
> + rte_write32(0, txq->bd.active_reg_desc);
> +
> + /* Save mbuf pointer to clean later */
> + txq->tx_mbuf[index_t] = mbuf;
> +
> + /* If this was the last BD in the ring, start at the
> + * beginning again.
> + */
> + tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
> + }
> +}
> +#endif
> +
> +/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
> + * When update through the ring, just set the empty indicator.
> + */
> +uint16_t
> +enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *bdp;
> + struct rte_mbuf *mbuf, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len;
> + int pkt_received = 0, index = 0;
> + void *data, *mbuf_data;
> + uint16_t vlan_tag;
> + struct bufdesc_ex *ebdp = NULL;
> + bool vlan_packet_rcvd = false;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
> + uint64_t rx_offloads = eth_conf->rxmode.offloads;
> + pool = rxq->pool;
> + bdp = rxq->bd.cur;
> +#if ENETFEC_LOOPBACK
> + enetfec_lb_rxtx(rxq1);
> +#endif
> + /* Process the incoming packet */
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> + while (!(status & RX_BD_EMPTY)) {
> + if (pkt_received >= nb_pkts)
> + break;
> +
> + new_mbuf = rte_pktmbuf_alloc(pool);
> + if (unlikely(!new_mbuf)) {

Compare vs NULL. Please handle similar issues at all places.

> + stats->ierrors++;
> + break;
> + }
> + /* Check for errors. */
> + status ^= RX_BD_LAST;
> + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> + RX_BD_TR)) {
> + stats->ierrors++;
> + if (status & RX_BD_OV) {
> + /* FIFO overrun */
> + /* enet_dump_rx(rxq); */
> + ENET_PMD_ERR("rx_fifo_error\n");
> + goto rx_processing_done;
> + }
> + if (status & (RX_BD_LG | RX_BD_SH
> + | RX_BD_LAST)) {
> + /* Frame too long or too short. */
> + ENET_PMD_ERR("rx_length_error\n");
> + if (status & RX_BD_LAST)
> + ENET_PMD_ERR("rcv is not +last\n");
> + }
> + if (status & RX_BD_CR) { /* CRC Error */
> + ENET_PMD_ERR("rx_crc_errors\n");
> + }
> + /* Report late collisions as a frame error. */
> + if (status & (RX_BD_NO | RX_BD_TR))
> + ENET_PMD_ERR("rx_frame_error\n");
> + goto rx_processing_done;
> + }
> +
> + /* Process the incoming frame.
> + */
> + stats->ipackets++;
> + pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
> + stats->ibytes += pkt_len;
> +
> + /* shows data with respect to the data_off field. */
> + index = enet_get_bd_index(bdp, &rxq->bd);
> + mbuf = rxq->rx_mbuf[index];
> +
> + data = rte_pktmbuf_mtod(mbuf, uint8_t *);
> + mbuf_data = data;
> + rte_prefetch0(data);
> + rte_pktmbuf_append((struct rte_mbuf *)mbuf,
> + pkt_len - 4);
> +
> + if (rxq->fep->quirks & QUIRK_RACC)
> + data = rte_pktmbuf_adj(mbuf, 2);
> +
> + rx_pkts[pkt_received] = mbuf;
> + pkt_received++;
> +
> + /* Extract the enhanced buffer descriptor */
> + ebdp = NULL;
> + if (rxq->fep->bufdesc_ex)
> + ebdp = (struct bufdesc_ex *)bdp;
> +
> + /* If this is a VLAN packet remove the VLAN Tag */
> + vlan_packet_rcvd = false;
> + if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
> + rxq->fep->bufdesc_ex &&
> + (rte_read32(&ebdp->bd_esc) &
> + rte_cpu_to_le_32(BD_ENET_RX_VLAN))) {
> + /* Push and remove the vlan tag */
> + struct rte_vlan_hdr *vlan_header =
> + (struct rte_vlan_hdr *)(data + ETH_HLEN);
> + vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
> +
> + vlan_packet_rcvd = true;
> + memmove(mbuf_data + VLAN_HLEN, data, ETH_ALEN * 2);
> + rte_pktmbuf_adj(mbuf, VLAN_HLEN);
> + }
> +
> + /* Get receive timestamp from the mbuf */
> + if (rxq->fep->hw_ts_rx_en && rxq->fep->bufdesc_ex)
> + mbuf->timestamp =
> + rte_le_to_cpu_32(rte_read32(&ebdp->ts));
> +
> + if (rxq->fep->bufdesc_ex &&
> + (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
> + if (!(rte_read32(&ebdp->bd_esc) &
> + rte_cpu_to_le_32(RX_FLAG_CSUM_ERR))) {
> + /* don't check it */
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
> + } else {
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
> + }
> + }
> +
> + /* Handle received VLAN packets */
> + if (vlan_packet_rcvd) {
> + mbuf->vlan_tci = vlan_tag;
> + mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
> + }
> +
> + rxq->rx_mbuf[index] = new_mbuf;
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
> + &bdp->bd_bufaddr);
> +rx_processing_done:
> + /* when rx_processing_done clear the status flags
> + * for this buffer
> + */
> + status &= ~RX_BD_STATS;
> +
> + /* Mark the buffer empty */
> + status |= RX_BD_EMPTY;
> +
> + if (rxq->fep->bufdesc_ex) {
> + struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
> + rte_write32(rte_cpu_to_le_32(RX_BD_INT),
> + &ebdp->bd_esc);
> + rte_write32(0, &ebdp->bd_prot);
> + rte_write32(0, &ebdp->bd_bdu);
> + }
> +
> + /* Make sure the updates to rest of the descriptor are
> + * performed before transferring ownership.
> + */
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
> +
> + /* Update BD pointer to next entry */
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> +
> + /* Doing this here will keep the FEC running while we process
> + * incoming frames.
> + */
> + rte_write32(0, rxq->bd.active_reg_desc);
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> + }
> + rxq->bd.cur = bdp;
> + return pkt_received;
> +}
> +
> +uint16_t
> +enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> +{
> + struct enetfec_priv_tx_q *txq =
> + (struct enetfec_priv_tx_q *)tx_queue;
> + struct rte_eth_stats *stats = &txq->fep->stats;
> + struct bufdesc *bdp, *last_bdp;
> + struct rte_mbuf *mbuf;
> + unsigned short status;
> + unsigned short buflen;
> + unsigned int index, estatus = 0;
> + unsigned int i, pkt_transmitted = 0;
> + u8 *data;
> + int tx_st = 1;
> +
> + while (tx_st) {
> + if (pkt_transmitted >= nb_pkts) {
> + tx_st = 0;
> + break;
> + }
> + bdp = txq->bd.cur;
> + /* First clean the ring */
> + index = enet_get_bd_index(bdp, &txq->bd);
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> +
> + if (status & TX_BD_READY) {
> + stats->oerrors++;
> + break;
> + }
> + if (txq->tx_mbuf[index]) {

Compare vs NULL

> + rte_pktmbuf_free(txq->tx_mbuf[index]);
> + txq->tx_mbuf[index] = NULL;
> + }
> +
> + mbuf = *(tx_pkts);
> + tx_pkts++;
> +
> + /* Fill in a Tx ring entry */
> + last_bdp =
> + bdp;
> + status &= ~TX_BD_STATS;
> +
> + /* Set buffer length and buffer pointer */
> + buflen = rte_pktmbuf_pkt_len(mbuf);
> + stats->opackets++;
> + stats->obytes += buflen;
> +
> + if (mbuf->nb_segs > 1) {
> + ENET_PMD_DEBUG("SG not supported");
> + return -1;
> + }
> + status |= (TX_BD_LAST);
> + data = rte_pktmbuf_mtod(mbuf, void *);
> + for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
> + dcbf(data + i);
> +
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &bdp->bd_bufaddr);
> + rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
> +
> + if (txq->fep->bufdesc_ex) {
> + struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
> +
> + if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
> + estatus |= TX_BD_PINS | TX_BD_IINS;
> +
> + rte_write32(0, &ebdp->bd_bdu);
> + rte_write32(rte_cpu_to_le_32(estatus),
> + &ebdp->bd_esc);
> + }
> +
> + index = enet_get_bd_index(last_bdp, &txq->bd);
> + /* Save mbuf pointer */
> + txq->tx_mbuf[index] = mbuf;
> +
> + /* Make sure the updates to rest of the descriptor are performed
> + * before transferring ownership.
> + */
> + status |= (TX_BD_READY | TX_BD_TC);
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
> +
> + /* Trigger transmission start */
> + rte_write32(0, txq->bd.active_reg_desc);
> + pkt_transmitted++;
> +
> + /* If this was the last BD in the ring, start at the
> + * beginning again.
> + */
> + bdp = enet_get_nextdesc(last_bdp, &txq->bd);
> +
> + /* Make sure the update to bdp and tx_skbuff are performed
> + * before txq->bd.cur.
> + */
> + txq->bd.cur = bdp;
> + }
> + return nb_pkts;
> +}