From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <927b094f-2d84-1dbb-0ad5-37dcf1e1c98e@intel.com>
Date: Fri, 4 Feb 2022 14:18:46 +0000
From: Ferruh Yigit
To: Ciara Loftus
Subject: Re: [PATCH v2] net/af_xdp: re-enable secondary process support
In-Reply-To: <20220204125436.30397-1-ciara.loftus@intel.com>
References: <20220112075406.54121-1-ciara.loftus@intel.com> <20220204125436.30397-1-ciara.loftus@intel.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
MIME-Version: 1.0
List-Id: DPDK patches and discussions

On 2/4/2022 12:54 PM, Ciara Loftus wrote:
> Secondary process support had been disabled for the AF_XDP PMD
> because there was no logic in place to share the AF_XDP socket
> file descriptors between the processes. This commit introduces
> this logic using the IPC APIs.
>
> Since AF_XDP rings are single-producer single-consumer, rx/tx
> in the secondary process is disabled. However other operations
> including retrieval of stats are permitted.
>
> Signed-off-by: Ciara Loftus
>
> ---
> v1 -> v2:
> * Rebase to next-net
>
> RFC -> v1:
> * Added newline to af_xdp.rst
> * Fixed spelling errors
> * Fixed potential NULL dereference in init_internals
> * Fixed potential free of address-of expression in afxdp_mp_request_fds
> ---
>  doc/guides/nics/af_xdp.rst | 9 ++
>  doc/guides/nics/features/af_xdp.ini | 1 +
>  doc/guides/rel_notes/release_22_03.rst | 1 +
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 210 +++++++++++++++++++++++--
>  4 files changed, 207 insertions(+), 14 deletions(-)
>
> diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
> index db02ea1984..eb4eab28a8 100644
> --- a/doc/guides/nics/af_xdp.rst
> +++ b/doc/guides/nics/af_xdp.rst
> @@ -141,4 +141,13 @@ Limitations
>    NAPI context from a watchdog timer instead of from softirqs. More information
>    on this feature can be found at [1].
>
> +- **Secondary Processes**
> +
> +  Rx and Tx are not supported for secondary processes due to the single-producer
> +  single-consumer nature of the AF_XDP rings. However other operations including
> +  statistics retrieval are permitted.

Hi Ciara,

Isn't this limitation the same for all PMDs, in that the primary and the secondary cannot both Rx/Tx from the same queue at the same time? But the primary can initialize the PMD and the secondary can do the datapath. Or doesn't af_xdp support multiple queues? If so, some queues can be used by the primary and some by the secondary for the datapath. Is there anything special about af_xdp that prevents this?

> +  The maximum number of queues permitted for PMDs operating in this model is 8
> +  as this is the maximum number of fds that can be sent through the IPC APIs as
> +  defined by RTE_MP_MAX_FD_NUM.
> +
>  [1] https://lwn.net/Articles/837010/
> diff --git a/doc/guides/nics/features/af_xdp.ini b/doc/guides/nics/features/af_xdp.ini
> index 54b738e616..8e7e075aaf 100644
> --- a/doc/guides/nics/features/af_xdp.ini
> +++ b/doc/guides/nics/features/af_xdp.ini
> @@ -9,4 +9,5 @@ Power mgmt address monitor = Y
>  MTU update = Y
>  Promiscuous mode = Y
>  Stats per queue = Y
> +Multiprocess aware = Y
>  x86-64 = Y
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index bf2e3f78a9..dfd2cbbccf 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -58,6 +58,7 @@ New Features
>    * **Updated AF_XDP PMD**
>
>      * Added support for libxdp >=v1.2.2.
> +    * Re-enabled secondary process support. RX/TX is not supported.
>
>    * **Updated Cisco enic driver.**
>
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 1b6192fa44..407f6d8dbe 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -80,6 +80,18 @@ RTE_LOG_REGISTER_DEFAULT(af_xdp_logtype, NOTICE);
>
>  #define ETH_AF_XDP_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
>
> +#define ETH_AF_XDP_MP_KEY "afxdp_mp_send_fds"
> +
> +static int afxdp_dev_count;
> +
> +/* Message header to synchronize fds via IPC */
> +struct ipc_hdr {
> +        char port_name[RTE_DEV_NAME_MAX_LEN];
> +        /* The file descriptors are in the dedicated part
> +         * of the Unix message to be translated by the kernel.
> +         */
> +};
> +
>  struct xsk_umem_info {
>          struct xsk_umem *umem;
>          struct rte_ring *buf_ring;
> @@ -147,6 +159,10 @@ struct pmd_internals {
>          struct pkt_tx_queue *tx_queues;
>  };
>
> +struct pmd_process_private {
> +        int rxq_xsk_fds[RTE_MAX_QUEUES_PER_PORT];
> +};
> +
>  #define ETH_AF_XDP_IFACE_ARG "iface"
>  #define ETH_AF_XDP_START_QUEUE_ARG "start_queue"
>  #define ETH_AF_XDP_QUEUE_COUNT_ARG "queue_count"
> @@ -795,11 +811,12 @@ static int
>  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>  {
>          struct pmd_internals *internals = dev->data->dev_private;
> +        struct pmd_process_private *process_private = dev->process_private;
>          struct xdp_statistics xdp_stats;
>          struct pkt_rx_queue *rxq;
>          struct pkt_tx_queue *txq;
>          socklen_t optlen;
> -        int i, ret;
> +        int i, ret, fd;
>
>          for (i = 0; i < dev->data->nb_rx_queues; i++) {
>                  optlen = sizeof(struct xdp_statistics);
> @@ -815,8 +832,9 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>                  stats->ibytes += stats->q_ibytes[i];
>                  stats->imissed += rxq->stats.rx_dropped;
>                  stats->oerrors += txq->stats.tx_dropped;
> -                ret = getsockopt(xsk_socket__fd(rxq->xsk), SOL_XDP,
> -                                XDP_STATISTICS, &xdp_stats, &optlen);
> +                fd = process_private->rxq_xsk_fds[i];
> +                ret = fd >= 0 ? getsockopt(fd, SOL_XDP, XDP_STATISTICS,
> +                                &xdp_stats, &optlen) : -1;
>                  if (ret != 0) {
>                          AF_XDP_LOG(ERR, "getsockopt() failed for XDP_STATISTICS.\n");
>                          return -1;
>                  }
> @@ -883,8 +901,10 @@ eth_dev_close(struct rte_eth_dev *dev)
>          struct pkt_rx_queue *rxq;
>          int i;
>
> -        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +        if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> +                rte_free(dev->process_private);
>                  return 0;
> +        }
>
>          AF_XDP_LOG(INFO, "Closing AF_XDP ethdev on numa socket %u\n",
>                  rte_socket_id());
> @@ -1349,6 +1369,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>                  struct rte_mempool *mb_pool)
>  {
>          struct pmd_internals *internals = dev->data->dev_private;
> +        struct pmd_process_private *process_private = dev->process_private;
>          struct pkt_rx_queue *rxq;
>          int ret;
>
> @@ -1387,6 +1408,8 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>          rxq->fds[0].fd = xsk_socket__fd(rxq->xsk);
>          rxq->fds[0].events = POLLIN;
>
> +        process_private->rxq_xsk_fds[rx_queue_id] = rxq->fds[0].fd;
> +
>          dev->data->rx_queues[rx_queue_id] = rxq;
>          return 0;
>
> @@ -1688,6 +1711,7 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>  {
>          const char *name = rte_vdev_device_name(dev);
>          const unsigned int numa_node = dev->device.numa_node;
> +        struct pmd_process_private *process_private;
>          struct pmd_internals *internals;
>          struct rte_eth_dev *eth_dev;
>          int ret;
> @@ -1753,9 +1777,17 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>          if (ret)
>                  goto err_free_tx;
>
> +        process_private = (struct pmd_process_private *)
> +                rte_zmalloc_socket(name, sizeof(struct pmd_process_private),
> +                        RTE_CACHE_LINE_SIZE, numa_node);
> +        if (process_private == NULL) {
> +                AF_XDP_LOG(ERR, "Failed to alloc memory for process private\n");
> +                goto err_free_tx;
> +        }

Need to free 'process_private' in the PMD, in both the 'close()' and 'remove()' paths.
> +
>          eth_dev = rte_eth_vdev_allocate(dev, 0);
>          if (eth_dev == NULL)
> -                goto err_free_tx;
> +                goto err_free_pp;
>
>          eth_dev->data->dev_private = internals;
>          eth_dev->data->dev_link = pmd_link;
> @@ -1764,6 +1796,10 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>          eth_dev->dev_ops = &ops;
>          eth_dev->rx_pkt_burst = eth_af_xdp_rx;
>          eth_dev->tx_pkt_burst = eth_af_xdp_tx;
> +        eth_dev->process_private = process_private;
> +
> +        for (i = 0; i < queue_cnt; i++)
> +                process_private->rxq_xsk_fds[i] = -1;
>
>  #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
>          AF_XDP_LOG(INFO, "Zero copy between umem and mbuf enabled.\n");
> @@ -1771,6 +1807,8 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>
>          return eth_dev;
>
> +err_free_pp:
> +        rte_free(process_private);
>  err_free_tx:
>          rte_free(internals->tx_queues);
>  err_free_rx:
> @@ -1780,6 +1818,115 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>          return NULL;
>  }
>
> +/* Secondary process requests rxq fds from primary. */
> +static int
> +afxdp_mp_request_fds(const char *name, struct rte_eth_dev *dev)
> +{
> +        struct pmd_process_private *process_private = dev->process_private;
> +        struct timespec timeout = {.tv_sec = 1, .tv_nsec = 0};
> +        struct rte_mp_msg request, *reply;
> +        struct rte_mp_reply replies;
> +        struct ipc_hdr *request_param = (struct ipc_hdr *)request.param;
> +        int i, ret;
> +
> +        /* Prepare the request */
> +        memset(&request, 0, sizeof(request));
> +        strlcpy(request.name, ETH_AF_XDP_MP_KEY, sizeof(request.name));
> +        strlcpy(request_param->port_name, name,
> +                sizeof(request_param->port_name));
> +        request.len_param = sizeof(*request_param);
> +
> +        /* Send the request and receive the reply */
> +        AF_XDP_LOG(DEBUG, "Sending IPC request for %s\n", name);
> +        ret = rte_mp_request_sync(&request, &replies, &timeout);
> +        if (ret < 0 || replies.nb_received != 1) {
> +                AF_XDP_LOG(ERR, "Failed to request fds from primary: %d",
> +                        rte_errno);
> +                return -1;
> +        }
> +        reply = replies.msgs;
> +        AF_XDP_LOG(DEBUG, "Received IPC reply for %s\n", name);

I think the message can mention "multi-process IPC" for clarification.

> +        if (dev->data->nb_rx_queues != reply->num_fds) {
> +                AF_XDP_LOG(ERR, "Incorrect number of fds received: %d != %d\n",
> +                        reply->num_fds, dev->data->nb_rx_queues);
> +                return -EINVAL;
> +        }
> +
> +        for (i = 0; i < reply->num_fds; i++)
> +                process_private->rxq_xsk_fds[i] = reply->fds[i];
> +
> +        free(reply);
> +        return 0;
> +}
> +
> +/* Primary process sends rxq fds to secondary. */
> +static int
> +afxdp_mp_send_fds(const struct rte_mp_msg *request, const void *peer)
> +{
> +        struct rte_eth_dev *dev;
> +        struct pmd_process_private *process_private;
> +        struct rte_mp_msg reply;
> +        const struct ipc_hdr *request_param =
> +                (const struct ipc_hdr *)request->param;
> +        struct ipc_hdr *reply_param =
> +                (struct ipc_hdr *)reply.param;
> +        const char *request_name = request_param->port_name;
> +        uint16_t port_id;
> +        int i, ret;
> +
> +        AF_XDP_LOG(DEBUG, "Received IPC request for %s\n", request_name);
> +
> +        /* Find the requested port */
> +        ret = rte_eth_dev_get_port_by_name(request_name, &port_id);
> +        if (ret) {
> +                AF_XDP_LOG(ERR, "Failed to get port id for %s\n", request_name);
> +                return -1;
> +        }
> +        dev = &rte_eth_devices[port_id];

Better not to access the global array; there is a new API, and a cleanup has already been done [1] in other PMDs. Can you please apply the same here?

[1] https://patches.dpdk.org/project/dpdk/patch/20220203082412.79028-1-kumaraparamesh92@gmail.com/

> +        process_private = dev->process_private;
> +
> +        /* Populate the reply with the xsk fd for each queue */
> +        reply.num_fds = 0;
> +        if (dev->data->nb_rx_queues > RTE_MP_MAX_FD_NUM) {
> +                AF_XDP_LOG(ERR, "Number of rx queues (%d) exceeds max number of fds (%d)\n",
> +                        dev->data->nb_rx_queues, RTE_MP_MAX_FD_NUM);
> +                return -EINVAL;
> +        }
> +
> +        for (i = 0; i < dev->data->nb_rx_queues; i++)
> +                reply.fds[reply.num_fds++] = process_private->rxq_xsk_fds[i];
> +
> +        /* Send the reply */
> +        strlcpy(reply.name, request->name, sizeof(reply.name));
> +        strlcpy(reply_param->port_name, request_name,
> +                sizeof(reply_param->port_name));
> +        reply.len_param = sizeof(*reply_param);
> +        AF_XDP_LOG(DEBUG, "Sending IPC reply for %s\n", reply_param->port_name);
> +        if (rte_mp_reply(&reply, peer) < 0) {
> +                AF_XDP_LOG(ERR, "Failed to reply to IPC request\n");
> +                return -1;
> +        }
> +        return 0;
> +}
> +
> +/* Secondary process rx function. RX is disabled because rings are SPSC */
> +static uint16_t
> +eth_af_xdp_rx_noop(void *queue __rte_unused,
> +                struct rte_mbuf **bufs __rte_unused,
> +                uint16_t nb_pkts __rte_unused)
> +{
> +        return 0;
> +}
> +
> +/* Secondary process tx function. TX is disabled because rings are SPSC */
> +static uint16_t
> +eth_af_xdp_tx_noop(void *queue __rte_unused,
> +                struct rte_mbuf **bufs __rte_unused,
> +                uint16_t nb_pkts __rte_unused)
> +{
> +        return 0;
> +}
> +

Now that there are multiple PMDs using noop/dummy Rx/Tx burst functions, what do you think about adding static inline functions to 'ethdev_driver.h' (and cleaning the existing drivers to use them) in a separate patch, and then using those functions in this patch?

>  static int
>  rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
>  {
> @@ -1789,19 +1936,39 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
>          int xsk_queue_cnt = ETH_AF_XDP_DFLT_QUEUE_COUNT;
>          int shared_umem = 0;
>          char prog_path[PATH_MAX] = {'\0'};
> -        int busy_budget = -1;
> +        int busy_budget = -1, ret;
>          struct rte_eth_dev *eth_dev = NULL;
> -        const char *name;
> +        const char *name = rte_vdev_device_name(dev);
>
> -        AF_XDP_LOG(INFO, "Initializing pmd_af_xdp for %s\n",
> -                rte_vdev_device_name(dev));
> +        AF_XDP_LOG(INFO, "Initializing pmd_af_xdp for %s\n", name);
>
> -        name = rte_vdev_device_name(dev);
>          if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
> -                AF_XDP_LOG(ERR, "Failed to probe %s. "
> -                        "AF_XDP PMD does not support secondary processes.\n",
> -                        name);
> -                return -ENOTSUP;
> +                eth_dev = rte_eth_dev_attach_secondary(name);
> +                if (eth_dev == NULL) {
> +                        AF_XDP_LOG(ERR, "Failed to probe %s\n", name);
> +                        return -EINVAL;
> +                }
> +                eth_dev->dev_ops = &ops;
> +                eth_dev->device = &dev->device;
> +                eth_dev->rx_pkt_burst = eth_af_xdp_rx_noop;
> +                eth_dev->tx_pkt_burst = eth_af_xdp_tx_noop;
> +                eth_dev->process_private = (struct pmd_process_private *)
> +                        rte_zmalloc_socket(name,
> +                                sizeof(struct pmd_process_private),
> +                                RTE_CACHE_LINE_SIZE,
> +                                eth_dev->device->numa_node);
> +                if (eth_dev->process_private == NULL) {
> +                        AF_XDP_LOG(ERR,
> +                                "Failed to alloc memory for process private\n");
> +                        return -ENOMEM;
> +                }
> +
> +                /* Obtain the xsk fds from the primary process. */
> +                if (afxdp_mp_request_fds(name, eth_dev))
> +                        return -1;
> +
> +                rte_eth_dev_probing_finish(eth_dev);
> +                return 0;
>          }
>
>          kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_arguments);
> @@ -1836,6 +2003,17 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
>                  return -1;
>          }
>
> +        /* Register IPC callback which shares xsk fds from primary to secondary */
> +        if (!afxdp_dev_count) {
> +                ret = rte_mp_action_register(ETH_AF_XDP_MP_KEY, afxdp_mp_send_fds);
> +                if (ret < 0) {
> +                        AF_XDP_LOG(ERR, "%s: Failed to register IPC callback: %s",
> +                                name, strerror(rte_errno));
> +                        return -1;
> +                }
> +        }
> +        afxdp_dev_count++;
> +
>          rte_eth_dev_probing_finish(eth_dev);
>
>          return 0;
> @@ -1858,6 +2036,10 @@ rte_pmd_af_xdp_remove(struct rte_vdev_device *dev)
>                  return 0;
>
>          eth_dev_close(eth_dev);
> +        rte_free(eth_dev->process_private);
> +        if (afxdp_dev_count == 1)
> +                rte_mp_action_unregister(ETH_AF_XDP_MP_KEY);
> +        afxdp_dev_count--;
>          rte_eth_dev_release_port(eth_dev);
>
>