From: "Koikkara Reeny, Shibin"
To: "Tahhan, Maryam", "ferruh.yigit@amd.com", "stephen@networkplumber.org",
 "lihuisong@huawei.com", "fengchengwen@huawei.com", "liuyonglong@huawei.com",
 "Loftus, Ciara"
Cc: "dev@dpdk.org", "Tahhan, Maryam"
Subject: RE: [v3] net/af_xdp: enable uds-path instead of use_cni
Date: Tue, 12 Dec 2023 14:25:13 +0000
References: <20231211143926.3502839-1-mtahhan@redhat.com>
In-Reply-To: <20231211143926.3502839-1-mtahhan@redhat.com>
List-Id: DPDK patches and discussions

Thank you Maryam for updating the document. I have added some comments below.

Also, what do you think about changing the name of the document file
"af_xdp_cni.rst" to "af_xdp_dp.rst"?

Regards,
Shibin

> -----Original Message-----
> From: Maryam Tahhan
> Sent: Monday, December 11, 2023 2:39 PM
> To: ferruh.yigit@amd.com; stephen@networkplumber.org;
> lihuisong@huawei.com; fengchengwen@huawei.com;
> liuyonglong@huawei.com; Koikkara Reeny, Shibin; Loftus, Ciara
> Cc: dev@dpdk.org; Tahhan, Maryam
> Subject: [v3] net/af_xdp: enable uds-path instead of use_cni
>
> With the original 'use_cni' implementation (using a hardcoded socket rather
> than a configurable one), if a single pod requests multiple net devices
> and these devices are from different pools, then the container attempts to
> mount all the netdev UDSes in the pod as /tmp/afxdp.sock, which means
> that at best only one netdev will handshake correctly with the AF_XDP DP.
> This patch addresses this by making the socket parameter configurable via a
> new vdev param called 'uds_path' and removing the previous 'use_cni'
> param.
> Tested with the AF_XDP DP CNI PR 81, single and multiple interfaces.
>
> v3:
> * Remove the `use_cni` vdev argument as it's no longer needed.
> * Update incorrect CNI references for the AF_XDP DP in the
>   documentation.
> * Update the documentation to run a simple example with the
>   AF_XDP DP plugin in K8s.
>
> v2:
> * Rename sock_path to uds_path.
> * Update documentation to reflect when CAP_BPF is needed.
> * Fix testpmd arguments in the provided example for Pods.
> * Use the AF_XDP API to update the xskmap entry.
>
> Signed-off-by: Maryam Tahhan
> ---
>  doc/guides/howto/af_xdp_cni.rst     | 334 +++++++++++++++-------------
>  drivers/net/af_xdp/rte_eth_af_xdp.c |  76 +++----
>  2 files changed, 216 insertions(+), 194 deletions(-)
>
> diff --git a/doc/guides/howto/af_xdp_cni.rst
> b/doc/guides/howto/af_xdp_cni.rst
> index a1a6d5b99c..b71fef61c7 100644
> --- a/doc/guides/howto/af_xdp_cni.rst
> +++ b/doc/guides/howto/af_xdp_cni.rst
> @@ -1,71 +1,65 @@
>  .. SPDX-License-Identifier: BSD-3-Clause
>     Copyright(c) 2023 Intel Corporation.
>
> -Using a CNI with the AF_XDP driver
> -==================================
> +Using the AF_XDP Device Plugin with the AF_XDP driver
> +=====================================================
>
>  Introduction
>  ------------
>
> -CNI, the Container Network Interface, is a technology for configuring
> -container network interfaces
> -and which can be used to setup Kubernetes networking.
> +The `AF_XDP Device Plugin for Kubernetes`_ is a project that provisions
> +and advertises interfaces (that can be used with AF_XDP) to Kubernetes.
> +The project also includes a `CNI`_.
> +
>  AF_XDP is a Linux socket Address Family that enables an XDP program to
>  redirect packets to a memory buffer in userspace.
>
> -This document explains how to enable the `AF_XDP Plugin for Kubernetes`_
> -within a DPDK application using the :doc:`../nics/af_xdp` to connect and
> -use these technologies.
> -
> -.. _AF_XDP Plugin for Kubernetes: https://github.com/intel/afxdp-plugins-for-kubernetes
> +This document explains how to use the `AF_XDP Device Plugin for
> +Kubernetes`_ with a DPDK :doc:`../nics/af_xdp` based application running in a Pod.
>
> +.. _AF_XDP Device Plugin for Kubernetes: https://github.com/intel/afxdp-plugins-for-kubernetes
> +.. _CNI: https://github.com/containernetworking/cni
>
>  Background
>  ----------
>
> -The standard :doc:`../nics/af_xdp` initialization process involves loading an
> -eBPF program onto the kernel netdev to be used by the PMD.
> -This operation requires root or escalated Linux privileges
> -and thus prevents the PMD from working in an unprivileged container.
> -The AF_XDP CNI plugin handles this situation
> -by providing a device plugin that performs the program loading.
> -
> -At a technical level the CNI opens a Unix Domain Socket and listens for a client
> -to make requests over that socket.
> -A DPDK application acting as a client connects and initiates a configuration "handshake".
> -The client then receives a file descriptor which points to the XSKMAP
> -associated with the loaded eBPF program.
> -The XSKMAP is a BPF map of AF_XDP sockets (XSK).
> -The client can then proceed with creating an AF_XDP socket
> -and inserting that socket into the XSKMAP pointed to by the descriptor.
> -
> -The EAL vdev argument ``use_cni`` is used to indicate that the user wishes
> -to run the PMD in unprivileged mode and to receive the XSKMAP file descriptor
> -from the CNI.
> -When this flag is set,
> -the ``XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD`` libbpf flag
> -should be used when creating the socket
> -to instruct libbpf not to load the default libbpf program on the netdev.
> -Instead the loading is handled by the CNI.
> +The standard :doc:`../nics/af_xdp` initialization process involves
> +loading an eBPF program onto the kernel netdev to be used by the PMD.
> +This operation requires root or escalated Linux privileges and prevents
> +the PMD from working in an unprivileged container. The AF_XDP Device
> +plugin addresses this situation by providing an entity that manages
> +eBPF program lifecycle for Pod interfaces that wish to use AF_XDP, this
> +in turn allows the pod to be used without privilege escalation.
> +
> +In order for the pod to run without privilege escalation, the AF_XDP DP

It would be good to add that DP is an abbreviation for Device Plugin.

> +creates a Unix Domain Socket (UDS) and listens for Pods to make
> +requests for XSKMAP(s) File Descriptors (FDs) for interfaces in their
> +network namespace.
> +In other words, the DPDK application running in the Pod connects to
> +this UDS and initiates a "handshake" to retrieve the XSKMAP(s) FD(s).
> +Upon a successful "handshake", the DPDK application receives the FD(s)
> +for the XSKMAP(s) associated with the relevant netdevs. The DPDK
> +application can then create the AF_XDP socket(s), and attach the socket(s)
> +to the netdev queue(s) by inserting the socket(s) into the XSKMAP(s).
> +
> +The EAL vdev argument ``uds_path`` is used to indicate that the user
> +wishes to run the AF_XDP PMD in unprivileged mode and to receive the
> +XSKMAP FD from the AF_XDP DP. When this param is used, the
> +``XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD`` libbpf flag is used when
> +creating the AF_XDP socket to instruct libbpf/libxdp not to load the
> +default eBPF redirect program for AF_XDP on the netdev. Instead the
> +lifecycle management of the eBPF program is handled by the AF_XDP DP.
>
>  .. note::
>
> -   The Unix Domain Socket file path appear in the end user is "/tmp/afxdp.sock".
> -
> +   The UDS file path inside the pod appears at "/tmp/afxdp_dp//afxdp.sock".
>

Initially the 'Note' was created since it was not explicitly known to the user
where the sock was created inside the Pod. Now, since we are passing it as an
argument, you can remove it if you want.
>  Prerequisites
>  -------------
>
> -Docker and container prerequisites:
> -
> -* Set up the device plugin
> -  as described in the instructions for `AF_XDP Plugin for Kubernetes`_.
> -
> -* The Docker image should contain the libbpf and libxdp libraries,
> -  which are dependencies for AF_XDP,
> -  and should include support for the ``ethtool`` command.
> +Device Plugin and DPDK container prerequisites:
> +* Create a DPDK container image.

Formatting is needed here. It gets displayed as:
"Device Plugin and DPDK container prerequisites: * Create a DPDK container image."

>
> -* The Pod should have enabled the capabilities ``CAP_NET_RAW`` and ``CAP_BPF``
> -  for AF_XDP along with support for hugepages.
> +* Set up the device plugin and prepare the Pod Spec as described in
> +  the instructions for `AF_XDP Device Plugin for Kubernetes`_.
>
>  * Increase locked memory limit so containers have enough memory for packet buffers.
>    For example:
> @@ -85,115 +79,142 @@ Docker and container prerequisites:
>  Example
>  -------
>
> -Howto run dpdk-testpmd with CNI plugin:
> +How to run dpdk-testpmd with AF_XDP Device plugin:
>
> -* Clone the CNI plugin
> +* Clone the AF_XDP Device plugin
>
>    .. code-block:: console
>
>       # git clone https://github.com/intel/afxdp-plugins-for-kubernetes.git
>
> -* Build the CNI plugin
> +* Build the AF_XDP Device plugin and the CNI
>
>    .. code-block:: console
>
>       # cd afxdp-plugins-for-kubernetes/
> -     # make build
> +     # make image
>
> -  .. note::
> +* Make sure to modify the image used by the `daemonset.yml`_ file in
> +  the deployments directory with the following configuration:
>
> -    CNI plugin has a dependence on the config.json.
> +  .. _daemonset.yml : https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/deployments/daemonset.yml
>
> -    Sample Config.json
> +  .. code-block:: yaml
>
> -    .. code-block:: json
> +     image: afxdp-device-plugin:latest
>
> -      {
> -         "logLevel":"debug",
> -         "logFile":"afxdp-dp-e2e.log",
> -         "pools":[
> -            {
> -               "name":"e2e",
> -               "mode":"primary",
> -               "timeout":30,
> -               "ethtoolCmds" : ["-L -device- combined 1"],
> -               "devices":[
> -                  {
> -                     "name":"ens785f0"
> -                  }
> -               ]
> -            }
> -         ]
> -      }
> +  .. note::

"Config.json" is removed. Is it because "ethtoolCmds" is moved to the
"nad.yaml"? What about the "drivers or devices"?

>
> -    For further reference please use the `config.json`_
> +     This will select the AF_XDP DP image that was built locally. Detailed configuration
> +     options can be found in the AF_XDP Device Plugin `readme`_ .
>
> -  .. _config.json: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/v0.0.2/test/e2e/config.json
> +  .. _readme: https://github.com/intel/afxdp-plugins-for-kubernetes#readme
>
> -* Create the Network Attachment definition
> +* Deploy the AF_XDP Device Plugin and CNI
>
>    .. code-block:: console
>
> -    # kubectl create -f nad.yaml
> +    # kubectl create -f deployments/daemonset.yml
> +
> +* Create a Network Attachment Definition (NAD)
> +
> +  .. code-block:: console
> +
> +    # kubectl create -f nad.yaml
>
>    Sample nad.yml
>
>    .. code-block:: yaml
>
> -    apiVersion: "k8s.cni.cncf.io/v1"
> -    kind: NetworkAttachmentDefinition
> -    metadata:
> -      name: afxdp-e2e-test
> -      annotations:
> -        k8s.v1.cni.cncf.io/resourceName: afxdp/e2e
> -    spec:
> -      config: '{
> -          "cniVersion": "0.3.0",
> -          "type": "afxdp",
> -          "mode": "cdq",
> -          "logFile": "afxdp-cni-e2e.log",
> -          "logLevel": "debug",
> -          "ipam": {
> -            "type": "host-local",
> -            "subnet": "192.168.1.0/24",
> -            "rangeStart": "192.168.1.200",
> -            "rangeEnd": "192.168.1.216",
> -            "routes": [
> -              { "dst": "0.0.0.0/0" }
> -            ],
> -            "gateway": "192.168.1.1"
> -          }
> -        }'
> -
> -  For further reference please use the `nad.yaml`_
> -
> -  .. _nad.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/v0.0.2/test/e2e/nad.yaml
> -
> -* Build the Docker image
> +    apiVersion: "k8s.cni.cncf.io/v1"
> +    kind: NetworkAttachmentDefinition
> +    metadata:
> +      name: afxdp-network
> +      annotations:
> +        k8s.v1.cni.cncf.io/resourceName: afxdp/myPool
> +    spec:
> +      config: '{
> +          "cniVersion": "0.3.0",
> +          "type": "afxdp",
> +          "mode": "primary",
> +          "logFile": "afxdp-cni.log",
> +          "logLevel": "debug",
> +          "ethtoolCmds" : ["-N -device- rx-flow-hash udp4 fn",
> +                           "-N -device- flow-type udp4 dst-port 2152 action 22"
> +                          ],
> +          "ipam": {
> +            "type": "host-local",
> +            "subnet": "192.168.1.0/24",
> +            "rangeStart": "192.168.1.200",
> +            "rangeEnd": "192.168.1.220",
> +            "routes": [
> +              { "dst": "0.0.0.0/0" }
> +            ],
> +            "gateway": "192.168.1.1"
> +          }
> +        }'
> +
> +  For further reference please use the example provided by the AF_XDP DP `nad.yaml`_
> +
> +  .. _nad.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/examples/network-attachment-definition.yaml
> +
> +* Build a DPDK container image (using Docker)
>
>    .. code-block:: console
>
> -    # docker build -t afxdp-e2e-test -f Dockerfile .
> +    # docker build -t dpdk -f Dockerfile .
>
> -  Sample Dockerfile:
> +  Sample Dockerfile (should be placed in top level DPDK directory):
>
>    .. code-block:: console
>
> -    FROM ubuntu:20.04
> -    RUN apt-get update -y
> -    RUN apt install build-essential libelf-dev -y
> -    RUN apt-get install iproute2 acl -y
> -    RUN apt install python3-pyelftools ethtool -y
> -    RUN apt install libnuma-dev libjansson-dev libpcap-dev net-tools -y
> -    RUN apt-get install clang llvm -y
> -    COPY ./libbpf.tar.gz /tmp
> -    RUN cd /tmp && tar -xvmf libbpf.tar.gz && cd libbpf/src && make install
> -    COPY ./libxdp.tar.gz /tmp
> -    RUN cd /tmp && tar -xvmf libxdp.tar.gz && cd libxdp && make install
> +    FROM fedora:38
> +
> +    # Setup container to build DPDK applications
> +    RUN dnf -y upgrade && dnf -y install \
> +        libbsd-devel \
> +        numactl-libs \
> +        libbpf-devel \
> +        libbpf \
> +        meson \
> +        ninja-build \
> +        libxdp-devel \
> +        libxdp \
> +        numactl-devel \
> +        python3-pyelftools \
> +        python38 \
> +        iproute
> +    RUN dnf groupinstall -y 'Development Tools'
> +
> +    # Create DPDK dir and copy over sources
> +    WORKDIR /dpdk
> +    COPY app app
> +    COPY builddir builddir
> +    COPY buildtools buildtools
> +    COPY config config
> +    COPY devtools devtools
> +    COPY drivers drivers
> +    COPY dts dts
> +    COPY examples examples
> +    COPY kernel kernel
> +    COPY lib lib
> +    COPY license license
> +    COPY MAINTAINERS MAINTAINERS
> +    COPY Makefile Makefile
> +    COPY meson.build meson.build
> +    COPY meson_options.txt meson_options.txt
> +    COPY usertools usertools
> +    COPY VERSION VERSION
> +    COPY ABI_VERSION ABI_VERSION
> +    COPY doc doc
> +
> +    # Build DPDK
> +    RUN meson setup build
> +    RUN ninja -C build
>
>    .. note::
>
> -    All the files that need to COPY-ed should be in the same directory as the Dockerfile
> +    Ensure the Dockerfile is placed in the top level DPDK directory.

Do you mean the Dockerfile should be in the same directory where the "DPDK
directory" is?

>
>  * Run the Pod
>
> @@ -205,49 +226,52 @@ Howto run dpdk-testpmd with CNI plugin:
>
>    .. code-block:: yaml
>
> -    apiVersion: v1
> -    kind: Pod
> -    metadata:
> -      name: afxdp-e2e-test
> -      annotations:
> -        k8s.v1.cni.cncf.io/networks: afxdp-e2e-test
> -    spec:
> -      containers:
> -      - name: afxdp
> -        image: afxdp-e2e-test:latest
> -        imagePullPolicy: Never
> -        env:
> -        - name: LD_LIBRARY_PATH
> -          value: /usr/lib64/:/usr/local/lib/
> -        command: ["tail", "-f", "/dev/null"]
> -        securityContext:
> +    apiVersion: v1
> +    kind: Pod
> +    metadata:
> +      name: dpdk
> +      annotations:
> +        k8s.v1.cni.cncf.io/networks: afxdp-network
> +    spec:
> +      containers:
> +      - name: testpmd
> +        image: dpdk:latest
> +        command: ["tail", "-f", "/dev/null"]
> +        securityContext:
>            capabilities:
> -            add:
> -              - CAP_NET_RAW
> -              - CAP_BPF
> -        resources:
> -          requests:
> -            hugepages-2Mi: 2Gi
> -            memory: 2Gi
> -            afxdp/e2e: '1'
> -          limits:
> -            hugepages-2Mi: 2Gi
> -            memory: 2Gi
> -            afxdp/e2e: '1'
> +            add:
> +              - NET_RAW
> +              - IPC_LOCK

Should we add both NET_RAW and IPC_LOCK to the Prerequisites?

> +        resources:
> +          requests:
> +            afxdp/myPool: '1'
> +          limits:
> +            hugepages-1Gi: 2Gi
> +            cpu: 2
> +            memory: 256Mi
> +            afxdp/myPool: '1'
> +        volumeMounts:
> +        - name: hugepages
> +          mountPath: /dev/hugepages
> +      volumes:
> +      - name: hugepages
> +        emptyDir:
> +          medium: HugePages
>
>    For further reference please use the `pod.yaml`_
>
> -  .. _pod.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/v0.0.2/test/e2e/pod-1c1d.yaml
> +  .. _pod.yaml: https://github.com/intel/afxdp-plugins-for-kubernetes/blob/main/examples/pod-spec.yaml
>
> -* Run DPDK with a command like the following:
> +.. note::
>
> -  .. code-block:: console
> +   For Kernel versions older than 5.19 `CAP_BPF` is also required in
> +   the container capabilities stanza.
>
> -    kubectl exec -i --container -- \
> -      //dpdk-testpmd -l 0,1 --no-pci \
> -      --vdev=net_af_xdp0,use_cni=1,iface= \
> -      -- --no-mlockall --in-memory
> +* Run DPDK with a command like the following:
>
> -For further reference please use the `e2e`_ test case in `AF_XDP Plugin for Kubernetes`_
> +  .. code-block:: console
>
> -  .. _e2e: https://github.com/intel/afxdp-plugins-for-kubernetes/tree/v0.0.2/test/e2e
> +     kubectl exec -i dpdk --container testpmd -- \
> +       ./build/app/dpdk-testpmd -l 0-2 --no-pci --main-lcore=2 \
> +       --vdev net_af_xdp,iface=<interface name>,start_queue=22,queue_count=1,uds_path=/tmp/afxdp_dp/<interface name>/afxdp.sock \
> +       -- -i --a --nb-cores=2 --rxq=1 --txq=1
> +       --forward-mode=macswap;

Do you think we should add "uds_path=" in the command? And after that, add a
note or an example that uds_path is generally of the format
"/tmp/afxdp_dp/<interface name>/afxdp.sock"?

QQ: regarding the uds_path argument name, do you think we should add something
to show that the UDS passed here is for AF_XDP, e.g. "cni_uds_path"?
In future, other features will also use UDS and want to pass the socket path.

> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c
> b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 353c8688ec..c13b8038f8 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -88,7 +88,6 @@ RTE_LOG_REGISTER_DEFAULT(af_xdp_logtype, NOTICE);
>  #define UDS_MAX_CMD_LEN 64
>  #define UDS_MAX_CMD_RESP 128
>  #define UDS_XSK_MAP_FD_MSG "/xsk_map_fd"
> -#define UDS_SOCK "/tmp/afxdp.sock"
>  #define UDS_CONNECT_MSG "/connect"
>  #define UDS_HOST_OK_MSG "/host_ok"
>  #define UDS_HOST_NAK_MSG "/host_nak"
> @@ -170,7 +169,7 @@ struct pmd_internals {
>  	char prog_path[PATH_MAX];
>  	bool custom_prog_configured;
>  	bool force_copy;
> -	bool use_cni;
> +	char uds_path[PATH_MAX];
>  	struct bpf_map *map;
>
>  	struct rte_ether_addr eth_addr;
> @@ -190,7 +189,7 @@ struct pmd_process_private {
>  #define ETH_AF_XDP_PROG_ARG		"xdp_prog"
>  #define ETH_AF_XDP_BUDGET_ARG		"busy_budget"
>  #define ETH_AF_XDP_FORCE_COPY_ARG	"force_copy"
> -#define ETH_AF_XDP_USE_CNI_ARG		"use_cni"
> +#define ETH_AF_XDP_USE_CNI_UDS_PATH_ARG	"uds_path"
>
>  static const char * const valid_arguments[] = {
>  	ETH_AF_XDP_IFACE_ARG,
> @@ -200,7 +199,7 @@ static const char * const valid_arguments[] = {
>  	ETH_AF_XDP_PROG_ARG,
>  	ETH_AF_XDP_BUDGET_ARG,
>  	ETH_AF_XDP_FORCE_COPY_ARG,
> -	ETH_AF_XDP_USE_CNI_ARG,
> +	ETH_AF_XDP_USE_CNI_UDS_PATH_ARG,
>  	NULL
>  };
>
> @@ -1351,7 +1350,7 @@ configure_preferred_busy_poll(struct pkt_rx_queue *rxq)
>  }
>
>  static int
> -init_uds_sock(struct sockaddr_un *server)
> +init_uds_sock(struct sockaddr_un *server, const char *uds_path)
>  {
>  	int sock;
>
> @@ -1362,7 +1361,7 @@ init_uds_sock(struct sockaddr_un *server)
>  	}
>
>  	server->sun_family = AF_UNIX;
> -	strlcpy(server->sun_path, UDS_SOCK, sizeof(server->sun_path));
> +	strlcpy(server->sun_path, uds_path, sizeof(server->sun_path));
>
>  	if (connect(sock, (struct sockaddr *)server, sizeof(struct sockaddr_un)) < 0) {
>  		close(sock);
> @@ -1382,7 +1381,7 @@ struct msg_internal {
>  };
>
>  static int
> -send_msg(int sock, char *request, int *fd)
> +send_msg(int sock, char *request, int *fd, const char *uds_path)
>  {
>  	int snd;
>  	struct iovec iov;
> @@ -1393,7 +1392,7 @@ send_msg(int sock, char *request, int *fd)
>
>  	memset(&dst, 0, sizeof(dst));
>  	dst.sun_family = AF_UNIX;
> -	strlcpy(dst.sun_path, UDS_SOCK, sizeof(dst.sun_path));
> +	strlcpy(dst.sun_path, uds_path, sizeof(dst.sun_path));
>
>  	/* Initialize message header structure */
>  	memset(&msgh, 0, sizeof(msgh));
> @@ -1471,7 +1470,7 @@ read_msg(int sock, char *response, struct sockaddr_un *s, int *fd)
>
>  static int
>  make_request_cni(int sock, struct sockaddr_un *server, char *request,
> -		 int *req_fd, char *response, int *out_fd)
> +		 int *req_fd, char *response, int *out_fd, const char *uds_path)
>  {
>  	int rval;
>
> @@ -1483,7 +1482,7 @@ make_request_cni(int sock, struct sockaddr_un *server, char *request,
>  	if (req_fd == NULL)
>  		rval = write(sock, request, strlen(request));
>  	else
> -		rval = send_msg(sock, request, req_fd);
> +		rval = send_msg(sock, request, req_fd, uds_path);
>
>  	if (rval < 0) {
>  		AF_XDP_LOG(ERR, "Write error %s\n", strerror(errno));
> @@ -1507,7 +1506,7 @@ check_response(char *response, char *exp_resp, long size)
>  }
>
>  static int
> -get_cni_fd(char *if_name)
> +get_cni_fd(char *if_name, const char *uds_path)
>  {
>  	char request[UDS_MAX_CMD_LEN], response[UDS_MAX_CMD_RESP];
>  	char hostname[MAX_LONG_OPT_SZ], exp_resp[UDS_MAX_CMD_RESP];
> @@ -1520,14 +1519,14 @@ get_cni_fd(char *if_name)
>  		return -1;
>
>  	memset(&server, 0, sizeof(server));
> -	sock = init_uds_sock(&server);
> +	sock = init_uds_sock(&server, uds_path);
>  	if (sock < 0)
>  		return -1;
>
>  	/* Initiates handshake to CNI send: /connect,hostname */
>  	snprintf(request, sizeof(request), "%s,%s", UDS_CONNECT_MSG, hostname);
>  	memset(response, 0, sizeof(response));
> -	if (make_request_cni(sock, &server, request, NULL, response, &out_fd) < 0) {
> +	if (make_request_cni(sock, &server, request, NULL, response, &out_fd, uds_path) < 0) {
>  		AF_XDP_LOG(ERR, "Error in processing cmd [%s]\n", request);
>  		goto err_close;
>  	}
> @@ -1541,7 +1540,7 @@ get_cni_fd(char *if_name)
>  	/* Request for "/version" */
>  	strlcpy(request, UDS_VERSION_MSG, UDS_MAX_CMD_LEN);
>  	memset(response, 0, sizeof(response));
> -	if (make_request_cni(sock, &server, request, NULL, response, &out_fd) < 0) {
> +	if (make_request_cni(sock, &server, request, NULL, response, &out_fd, uds_path) < 0) {
>  		AF_XDP_LOG(ERR, "Error in processing cmd [%s]\n", request);
>  		goto err_close;
>  	}
> @@ -1549,7 +1548,7 @@ get_cni_fd(char *if_name)
>  	/* Request for file descriptor for netdev name*/
>  	snprintf(request, sizeof(request), "%s,%s", UDS_XSK_MAP_FD_MSG, if_name);
>  	memset(response, 0, sizeof(response));
> -	if (make_request_cni(sock, &server, request, NULL, response, &out_fd) < 0) {
> +	if (make_request_cni(sock, &server, request, NULL, response, &out_fd, uds_path) < 0) {
>  		AF_XDP_LOG(ERR, "Error in processing cmd [%s]\n", request);
>  		goto err_close;
>  	}
> @@ -1571,7 +1570,7 @@ get_cni_fd(char *if_name)
>  	/* Initiate close connection */
>  	strlcpy(request, UDS_FIN_MSG, UDS_MAX_CMD_LEN);
>  	memset(response, 0, sizeof(response));
> -	if (make_request_cni(sock, &server, request, NULL, response, &out_fd) < 0) {
> +	if (make_request_cni(sock, &server, request, NULL, response, &out_fd, uds_path) < 0) {
>  		AF_XDP_LOG(ERR, "Error in processing cmd [%s]\n", request);
>  		goto err_close;
>  	}
> @@ -1640,7 +1639,7 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
>  #endif
>
>  	/* Disable libbpf from loading XDP program */
> -	if (internals->use_cni)
> +	if (strnlen(internals->uds_path, PATH_MAX))
>  		cfg.libbpf_flags |= XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
>
>  	if (strnlen(internals->prog_path, PATH_MAX)) {
> @@ -1694,18 +1693,17 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
>  		}
>  	}
>
> -	if (internals->use_cni) {
> -		int err, fd, map_fd;
> +	if (strnlen(internals->uds_path, PATH_MAX)) {
> +		int err, map_fd;
>
>  		/* get socket fd from CNI plugin */
> -		map_fd = get_cni_fd(internals->if_name);
> +		map_fd = get_cni_fd(internals->if_name, internals->uds_path);
>  		if (map_fd < 0) {
>  			AF_XDP_LOG(ERR, "Failed to receive CNI plugin fd\n");
>  			goto out_xsk;
>  		}
> -		/* get socket fd */
> -		fd = xsk_socket__fd(rxq->xsk);
> -		err = bpf_map_update_elem(map_fd, &rxq->xsk_queue_idx, &fd, 0);
> +
> +		err = xsk_socket__update_xskmap(rxq->xsk, map_fd);
>  		if (err) {
>  			AF_XDP_LOG(ERR, "Failed to insert unprivileged xsk in map.\n");
>  			goto out_xsk;
> @@ -1957,7 +1955,7 @@ parse_name_arg(const char *key __rte_unused,
>
>  /** parse xdp prog argument */
>  static int
> -parse_prog_arg(const char *key __rte_unused,
> +parse_path_arg(const char *key __rte_unused,
>  	       const char *value, void *extra_args)
>  {
>  	char *path = extra_args;
> @@ -2023,7 +2021,7 @@ xdp_get_channels_info(const char *if_name, int *max_queues,
>  static int
>  parse_parameters(struct rte_kvargs *kvlist, char *if_name, int *start_queue,
>  		 int *queue_cnt, int *shared_umem, char *prog_path,
> -		 int *busy_budget, int *force_copy, int *use_cni)
> +		 int *busy_budget, int *force_copy, char *uds_path)
>  {
>  	int ret;
>
> @@ -2050,7 +2048,7 @@ parse_parameters(struct rte_kvargs *kvlist, char *if_name, int *start_queue,
>  		goto free_kvlist;
>
>  	ret = rte_kvargs_process(kvlist, ETH_AF_XDP_PROG_ARG,
> -				 &parse_prog_arg, prog_path);
> +				 &parse_path_arg, prog_path);
>  	if (ret < 0)
>  		goto free_kvlist;
>
> @@ -2064,8 +2062,8 @@ parse_parameters(struct rte_kvargs *kvlist, char *if_name, int *start_queue,
>  	if (ret < 0)
>  		goto free_kvlist;
>
> -	ret = rte_kvargs_process(kvlist, ETH_AF_XDP_USE_CNI_ARG,
> -				 &parse_integer_arg, use_cni);
> +	ret = rte_kvargs_process(kvlist, ETH_AF_XDP_USE_CNI_UDS_PATH_ARG,
> +				 &parse_path_arg, uds_path);
>  	if (ret < 0)
>  		goto free_kvlist;
>
> @@ -2108,7 +2106,7 @@ static struct rte_eth_dev *
>  init_internals(struct rte_vdev_device *dev, const char *if_name,
>  	       int start_queue_idx, int queue_cnt, int shared_umem,
>  	       const char *prog_path, int busy_budget, int force_copy,
> -	       int use_cni)
> +	       const char *uds_path)
>  {
>  	const char *name = rte_vdev_device_name(dev);
>  	const unsigned int numa_node = dev->device.numa_node;
> @@ -2137,7 +2135,7 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>  #endif
>  	internals->shared_umem = shared_umem;
>  	internals->force_copy = force_copy;
> -	internals->use_cni = use_cni;
> +	strlcpy(internals->uds_path, uds_path, PATH_MAX);
>
>  	if (xdp_get_channels_info(if_name, &internals->max_queue_cnt,
>  				  &internals->combined_queue_cnt)) {
> @@ -2196,7 +2194,7 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
>  	eth_dev->data->dev_link = pmd_link;
>  	eth_dev->data->mac_addrs = &internals->eth_addr;
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> -	if (!internals->use_cni)
> +	if (!strnlen(internals->uds_path, PATH_MAX))
>  		eth_dev->dev_ops = &ops;
>  	else
>  		eth_dev->dev_ops = &ops_cni;
> @@ -2327,7 +2325,7 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
>  	char prog_path[PATH_MAX] = {'\0'};
>  	int busy_budget = -1, ret;
>  	int force_copy = 0;
> -	int use_cni = 0;
> +	char uds_path[PATH_MAX] = {'\0'};
>  	struct rte_eth_dev *eth_dev = NULL;
>  	const char *name = rte_vdev_device_name(dev);
>
> @@ -2370,20 +2368,20 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
>
>  	if (parse_parameters(kvlist, if_name, &xsk_start_queue_idx,
>  			     &xsk_queue_cnt, &shared_umem, prog_path,
> -			     &busy_budget, &force_copy, &use_cni) < 0) {
> +			     &busy_budget, &force_copy, uds_path) < 0) {
>  		AF_XDP_LOG(ERR, "Invalid kvargs value\n");
>  		return -EINVAL;
>  	}
>
> -	if (use_cni && busy_budget > 0) {
> +
if (strnlen(uds_path, PATH_MAX) && busy_budget > 0) { > AF_XDP_LOG(ERR, "When '%s' parameter is used, '%s' > parameter is not valid\n", > - ETH_AF_XDP_USE_CNI_ARG, > ETH_AF_XDP_BUDGET_ARG); > + ETH_AF_XDP_USE_CNI_UDS_PATH_ARG, > ETH_AF_XDP_BUDGET_ARG); > return -EINVAL; > } >=20 > - if (use_cni && strnlen(prog_path, PATH_MAX)) { > + if (strnlen(uds_path, PATH_MAX) && strnlen(prog_path, > PATH_MAX)) { > AF_XDP_LOG(ERR, "When '%s' parameter is used, '%s' > parameter is not valid\n", > - ETH_AF_XDP_USE_CNI_ARG, > ETH_AF_XDP_PROG_ARG); > + ETH_AF_XDP_USE_CNI_UDS_PATH_ARG, > ETH_AF_XDP_PROG_ARG); > return -EINVAL; > } >=20 > @@ -2410,7 +2408,7 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device > *dev) >=20 > eth_dev =3D init_internals(dev, if_name, xsk_start_queue_idx, > xsk_queue_cnt, shared_umem, prog_path, > - busy_budget, force_copy, use_cni); > + busy_budget, force_copy, uds_path); > if (eth_dev =3D=3D NULL) { > AF_XDP_LOG(ERR, "Failed to init internals\n"); > return -1; > @@ -2471,4 +2469,4 @@ > RTE_PMD_REGISTER_PARAM_STRING(net_af_xdp, > "xdp_prog=3D " > "busy_budget=3D " > "force_copy=3D " > - "use_cni=3D "); > + "uds_path=3D "); > -- > 2.41.0