From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Sachin Saxena (OSS)"
To: Apeksha Gupta, ferruh.yigit@intel.com
Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com
Subject: Re: [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration
Date: Sun, 4 Jul 2021 12:16:55 +0530
References: <20210430043424.19752-1-apeksha.gupta@nxp.com> <20210430043424.19752-4-apeksha.gupta@nxp.com>
In-Reply-To: <20210430043424.19752-4-apeksha.gupta@nxp.com>

On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> This patch added RX/TX queue configuration setup operations.

added -> adds

> On packet Rx the respective BD Ring status bit is set which is then

Suggestion: Rx -> reception

> used for packet processing.
>
> Signed-off-by: Sachin Saxena
> Signed-off-by: Apeksha Gupta
> ---
>  drivers/net/enetfec/enet_ethdev.c | 223 ++++++++++++++++++++++++++++++
>  1 file changed, 223 insertions(+)
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 5f4f2cf9e..b4816179a 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -48,6 +48,19 @@
>
>  int enetfec_logtype_pmd;
>
> +/* Supported Rx offloads */
> +static uint64_t dev_rx_offloads_sup =
> +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_UDP_CKSUM |
> +		DEV_RX_OFFLOAD_TCP_CKSUM |
> +		DEV_RX_OFFLOAD_VLAN_STRIP |
> +		DEV_RX_OFFLOAD_CHECKSUM;
> +
> +static uint64_t dev_tx_offloads_sup =
> +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> +		DEV_TX_OFFLOAD_UDP_CKSUM |
> +		DEV_TX_OFFLOAD_TCP_CKSUM;
> +
>  /*
>   * This function is called to start or restart the FEC during a link
>   * change, transmit timeout or to reconfigure the FEC. The network
> @@ -176,8 +189,218 @@ enetfec_eth_open(struct rte_eth_dev *dev)
>  	return 0;
>  }
>
> +
> +static int
> +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
> +{
> +	ENET_PMD_INFO("%s: returning zero ", __func__);
> +	return 0;
> +}
> +

Remove this if not required.
> +static int
> +enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
> +	struct rte_eth_dev_info *dev_info)
> +{
> +	dev_info->max_rx_queues = ENET_MAX_Q;
> +	dev_info->max_tx_queues = ENET_MAX_Q;
> +	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> +	dev_info->rx_offload_capa = dev_rx_offloads_sup;
> +	dev_info->tx_offload_capa = dev_tx_offloads_sup;
> +
> +	return 0;
> +}
> +
> +static const unsigned short offset_des_active_rxq[] = {
> +	ENET_RDAR_0, ENET_RDAR_1, ENET_RDAR_2
> +};
> +
> +static const unsigned short offset_des_active_txq[] = {
> +	ENET_TDAR_0, ENET_TDAR_1, ENET_TDAR_2
> +};
> +
> +static int
> +enetfec_tx_queue_setup(struct rte_eth_dev *dev,
> +			uint16_t queue_idx,
> +			uint16_t nb_desc,
> +			__rte_unused unsigned int socket_id,
> +			__rte_unused const struct rte_eth_txconf *tx_conf)
> +{
> +	struct enetfec_private *fep = dev->data->dev_private;
> +	unsigned int i;
> +	struct bufdesc *bdp, *bd_base;
> +	struct enetfec_priv_tx_q *txq;
> +	unsigned int size;
> +	unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> +			sizeof(struct bufdesc);
> +	unsigned int dsize_log2 = fls64(dsize);
> +
> +	/* allocate transmit queue */
> +	txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
> +	if (!txq) {
> +		ENET_PMD_ERR("transmit queue allocation failed");
> +		return -ENOMEM;
> +	}
> +
> +	if (nb_desc > MAX_TX_BD_RING_SIZE) {
> +		nb_desc = MAX_TX_BD_RING_SIZE;
> +		ENET_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
> +	}
> +	txq->bd.ring_size = nb_desc;
> +	fep->total_tx_ring_size += txq->bd.ring_size;
> +	fep->tx_queues[queue_idx] = txq;
> +
> +	rte_write32(fep->bd_addr_p_t[queue_idx],
> +		fep->hw_baseaddr_v + ENET_TD_START(queue_idx));

Do we need rte_cpu_to_le_* ?

> +
> +	/* Set transmit descriptor base. */
> +	txq = fep->tx_queues[queue_idx];
> +	txq->fep = fep;
> +	size = dsize * txq->bd.ring_size;
> +	bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
> +	txq->bd.que_id = queue_idx;
> +	txq->bd.base = bd_base;
> +	txq->bd.cur = bd_base;
> +	txq->bd.d_size = dsize;
> +	txq->bd.d_size_log2 = dsize_log2;
> +	txq->bd.active_reg_desc =
> +		fep->hw_baseaddr_v + offset_des_active_txq[queue_idx];
> +	bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> +	txq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> +	bdp = txq->bd.base;
> +	bdp = txq->bd.cur;
> +
> +	for (i = 0; i < txq->bd.ring_size; i++) {
> +		/* Initialize the BD for every fragment in the page. */
> +		rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);

rte_cpu_to_le_16(0) has no effect on '0'.

> +		if (txq->tx_mbuf[i]) {

Compare vs NULL.

> +			rte_pktmbuf_free(txq->tx_mbuf[i]);
> +			txq->tx_mbuf[i] = NULL;
> +		}
> +		rte_write32(rte_cpu_to_le_32(0), &bdp->bd_bufaddr);

rte_cpu_to_le_32 has no effect.
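Something like the following is what the three comments above amount to; a rough sketch only, the helper name enetfec_clear_tx_bd is made up and it relies on the driver's own struct definitions from this patch:

static void
enetfec_clear_tx_bd(struct enetfec_priv_tx_q *txq, struct bufdesc *bdp,
		    unsigned int i)
{
	/* Zero needs no rte_cpu_to_le_*() conversion. */
	rte_write16(0, &bdp->bd_sc);
	/* Compare the mbuf pointer against NULL explicitly. */
	if (txq->tx_mbuf[i] != NULL) {
		rte_pktmbuf_free(txq->tx_mbuf[i]);
		txq->tx_mbuf[i] = NULL;
	}
	rte_write32(0, &bdp->bd_bufaddr);
}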
> +		bdp = enet_get_nextdesc(bdp, &txq->bd);
> +	}
> +
> +	/* Set the last buffer to wrap */
> +	bdp = enet_get_prevdesc(bdp, &txq->bd);
> +	rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
> +		rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> +	txq->dirty_tx = bdp;
> +	dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
> +	return 0;
> +}
> +
> +static int
> +enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> +			uint16_t queue_idx,
> +			uint16_t nb_rx_desc,
> +			__rte_unused unsigned int socket_id,
> +			__rte_unused const struct rte_eth_rxconf *rx_conf,
> +			struct rte_mempool *mb_pool)
> +{
> +	struct enetfec_private *fep = dev->data->dev_private;
> +	unsigned int i;
> +	struct bufdesc *bd_base;
> +	struct bufdesc *bdp;
> +	struct enetfec_priv_rx_q *rxq;
> +	unsigned int size;
> +	unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> +			sizeof(struct bufdesc);
> +	unsigned int dsize_log2 = fls64(dsize);
> +
> +	/* allocate receive queue */
> +	rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
> +	if (!rxq) {
> +		ENET_PMD_ERR("receive queue allocation failed");
> +		return -ENOMEM;
> +	}
> +
> +	if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
> +		nb_rx_desc = MAX_RX_BD_RING_SIZE;
> +		ENET_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
> +	}
> +
> +	rxq->bd.ring_size = nb_rx_desc;
> +	fep->total_rx_ring_size += rxq->bd.ring_size;
> +	fep->rx_queues[queue_idx] = rxq;
> +
> +	rte_write32(fep->bd_addr_p_r[queue_idx],
> +		fep->hw_baseaddr_v + ENET_RD_START(queue_idx));
> +	rte_write32(PKT_MAX_BUF_SIZE,
> +		fep->hw_baseaddr_v + ENET_MRB_SIZE(queue_idx));

Please check if we need rte_cpu_to_le_* .

> +
> +	/* Set receive descriptor base. */
> +	rxq = fep->rx_queues[queue_idx];
> +	rxq->pool = mb_pool;
> +	size = dsize * rxq->bd.ring_size;
> +	bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
> +	rxq->bd.que_id = queue_idx;
> +	rxq->bd.base = bd_base;
> +	rxq->bd.cur = bd_base;
> +	rxq->bd.d_size = dsize;
> +	rxq->bd.d_size_log2 = dsize_log2;
> +	rxq->bd.active_reg_desc =
> +		fep->hw_baseaddr_v + offset_des_active_rxq[queue_idx];
> +	bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> +	rxq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> +
> +	rxq->fep = fep;
> +	bdp = rxq->bd.base;
> +	rxq->bd.cur = bdp;
> +
> +	for (i = 0; i < nb_rx_desc; i++) {
> +		/* Initialize Rx buffers from pktmbuf pool */
> +		struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
> +		if (mbuf == NULL) {
> +			ENET_PMD_ERR("mbuf failed\n");
> +			goto err_alloc;
> +		}
> +
> +		/* Get the virtual address & physical address */
> +		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> +			&bdp->bd_bufaddr);
> +
> +		rxq->rx_mbuf[i] = mbuf;
> +		rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
> +
> +		bdp = enet_get_nextdesc(bdp, &rxq->bd);
> +	}
> +
> +	/* Initialize the receive buffer descriptors. */
> +	bdp = rxq->bd.cur;
> +	for (i = 0; i < rxq->bd.ring_size; i++) {
> +		/* Initialize the BD for every fragment in the page. */
> +		if (rte_read32(&bdp->bd_bufaddr))

Compare vs 0

> +			rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
> +				&bdp->bd_sc);
> +		else
> +			rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);

No effect of rte_cpu_to_le_16.
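A related note on the rte_cpu_to_le_*() usage in general: rte_write16()/rte_write32() do not byte-swap on their own, so the conversion only matters for non-zero values, and only if the ENETFEC descriptor/register layout is little-endian. If that is confirmed, a small wrapper would keep it consistent across the driver; this is just a sketch with a made-up name (enetfec_reg_write32), not something already in the patch:

#include <rte_byteorder.h>
#include <rte_io.h>

/* Convert from CPU order before the MMIO write so big-endian hosts also
 * program the expected value; on little-endian hosts the conversion is a
 * no-op. */
static inline void
enetfec_reg_write32(uint32_t val, volatile void *addr)
{
	rte_write32(rte_cpu_to_le_32(val), addr);
}

Then e.g. the ENET_MRB_SIZE programming above would read
enetfec_reg_write32(PKT_MAX_BUF_SIZE, fep->hw_baseaddr_v + ENET_MRB_SIZE(queue_idx));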
> +
> +		bdp = enet_get_nextdesc(bdp, &rxq->bd);
> +	}
> +
> +	/* Set the last buffer to wrap */
> +	bdp = enet_get_prevdesc(bdp, &rxq->bd);
> +	rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
> +		rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> +	dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
> +	rte_write32(0x0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
> +	return 0;
> +
> +err_alloc:
> +	for (i = 0; i < nb_rx_desc; i++) {
> +		rte_pktmbuf_free(rxq->rx_mbuf[i]);

Only free if the mbuf was allocated earlier (rxq->rx_mbuf[i] != NULL); see the sketch after the quoted patch.

> +		rxq->rx_mbuf[i] = NULL;
> +	}
> +	rte_free(rxq);
> +	return -1;
> +}
> +
>  static const struct eth_dev_ops ops = {
>  	.dev_start = enetfec_eth_open,
> +	.dev_configure = enetfec_eth_configure,

Better to introduce function pointers later when they will be implemented.

> +	.dev_infos_get = enetfec_eth_info,
> +	.rx_queue_setup = enetfec_rx_queue_setup,
> +	.tx_queue_setup = enetfec_tx_queue_setup,
> };
>
>  static int
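For the err_alloc unwind mentioned above, what I have in mind is roughly this (sketch only, same variables as in the patch; since rxq comes from rte_zmalloc() the slots that were never filled are already NULL):

err_alloc:
	for (i = 0; i < nb_rx_desc; i++) {
		/* Free only the mbufs that were actually allocated before
		 * the failure. */
		if (rxq->rx_mbuf[i] != NULL) {
			rte_pktmbuf_free(rxq->rx_mbuf[i]);
			rxq->rx_mbuf[i] = NULL;
		}
	}
	rte_free(rxq);
	return -1;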