From: "Hyong Youb Kim (hyonkim)" <hyonkim@cisco.com>
To: Jerin Jacob Kollanukkaran, David Marchand, Thomas Monjalon, "Ferruh Yigit", Alejandro Lucero, Anatoly Burakov
CC: "dev@dpdk.org", "John Daley (johndale)", Shahed Shaikh, "Nithin Kumar Dabilpuram"
Thread-Topic: [RFC PATCH] vfio: avoid re-installing irq handler
Date: Tue, 16 Jul 2019 07:49:10 +0000
Subject: Re: [dpdk-dev] [RFC PATCH] vfio: avoid re-installing irq handler
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Jerin Jacob Kollanukkaran
[...]
> > > > A rough patch for the approach mentioned earlier. It is only for
> > > > discussion.
> > > >
> > > > http://mails.dpdk.org/archives/dev/2019-July/138113.html
> > > >
> > > > To try this out, first revert the following then apply.
> > > > commit 89aac60e0be9 ("vfio: fix interrupts race condition")
> > >
> > > Yes. This patch has to be reverted. It changes the existing
> > > interrupt behavior and does not address the MSI-X case either.
> > >
> > > I think the clean fix would be to introduce rte_intr_mask() and
> > > rte_intr_unmask() by abstracting the INTx and MSI-X differences, and
> > > let the qede driver call them as needed.
> > >
> > > Thoughts?
> >
> > Hi,
>
> Hi Hyong,
>
> > You are proposing these?
> > - Add rte_intr_mask_intx, rte_intr_unmask_intx.
> >   No APIs for masking MSI/MSI-X, as vfio-pci does not support that.
> > - Modify PMD irq handlers to use rte_intr_unmask_intx as necessary.
>
> No, introduce rte_intr_mask() and rte_intr_unmask().
> For MSI-X + Linux VFIO, the API can return -ENOTSUP, since Linux
> VFIO+MSI-X does not support it. Another platform/EAL may support it.

These generic names would invite people to use the API, only to see it
fail, since it only works with INTx..

> Mask and unmask are operations known to all IRQ controllers.
> So, IMO, as far as abstraction is concerned, they are a good fit.
>
> > That might be too intrusive. And too much work for the sake of INTx..
> > Anyone really using/needing INTx these days? :-)
>
> Yup. Mask needs to be called only for qede INTx.
> Looks like qede has separate MSI-X and INTx handlers, so this mask can
> go into the qede INTx handler.
>
> > The following drivers call rte_intr_enable from their irq handlers. So
> > with an explicit rte_intr_unmask_intx, all of these would need to do
> > "if using INTx, unmask"?
> >
> > atlantic, avp, axgbe, bnx2x, e1000, fm10k, ice, ixgbe, nfp, qede, sfc,
> > vmxnet3
>
> No change on these PMDs.

Why is that? These drivers potentially have the same "lost" interrupt
issue mentioned in the original redhat bz (qede + MSI). I *think* this
observation led David to address them all through vfio changes, rather
than fixing qede alone. You want to introduce the unmask API and use it
only for qede in this cycle, and ask the respective maintainers to fix
their drivers in 19.11?

> > And nfp seems to rely on rte_intr_enable to re-install the irq handler
> > to unmask a vector in the MSI-X table?
> >
> > if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
> >         /* If MSI-X auto-masking is used, clear the entry */
> >         rte_wmb();
> >         rte_intr_enable(&pci_dev->intr_handle);
> > }
> >
> > With David's patch and mine, this handler would have to first
> > rte_intr_disable() and then enable, if such unmasking is really
> > necessary..
> >
> > As for the semantics of rte_intr_enable/disable, I am ok as is.
> > - "enable": put things in a state where the NIC can send an interrupt,
> >   and the PMD/app gets a callback.
> >   Whether this involves unmasking for INTx is hidden.
> > - "disable": put things in a state where the NIC cannot send an
> >   interrupt.
>
> It looks OK to me. My only thought was: since mask and unmask is a
> common IRQ controller operation, we may not need to add a lot of common
> code (introducing a state) to hide the INTx unmask.
> Moreover, as you said, only a handful of devices use INTx.
>
> IMO, a mask and unmask API is a good fit as an EAL abstraction.
> But whether to use a separate API, or to hide it inside the EAL, to
> solve this problem is a good question.
> Maybe more thoughts from other guys would be good.
>
> We will try to send a version with the mask/unmask API to see the
> changes required.
>
> > Regardless of vfio changes, we should probably remove rte_intr_enable
> > from qede_interrupt_handler (the MSI/MSI-X interrupt handler), to make
> > the usage/intention clear..
>
> Yes. Anyway this change is required.

That change fixes the immediate problem (redhat bz) that started all
this discussion. And it allows us to kick the can down the road on the
potential issues in other PMDs :-) Not suggesting we do, but it becomes
an option..

Thanks.
-Hyong