From mboxrd@z Thu Jan 1 00:00:00 1970
From: Akhil Goyal <akhil.goyal@nxp.com>
To: Anoob Joseph, "Ananyev, Konstantin", "Nicolau, Radu"
CC: Jerin Jacob Kollanukkaran, Lukas Bartosik, Narayana Prasad Raju Athreya, dev@dpdk.org
Thread-Topic: [PATCH] examples/ipsec-secgw: increase number of qps to lcore_params
Date: Mon, 3 Feb 2020 09:15:42 +0000
References: <1578667598-18942-1-git-send-email-anoobj@marvell.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH] examples/ipsec-secgw: increase number of qps to lcore_params
List-Id: DPDK patches and discussions

> > > > > > > Currently only one qp will be used for one core. The number of
> > > > > > > qps can be increased to match the number of lcore params.
> > > > > >
> > > > > > I don't really understand the purpose of that patch....
> > > > > > As I understand, all it does is unconditionally increase the number
> > > > > > of crypto-queues mapped to the same lcore.
> > > > >
> > > > > [Anoob] With the current code, we will have only one crypto qp mapped
> > > > > to one lcore. So even if you have a large number of crypto qps, you
> > > > > would only be using as many as the number of lcores.
> > > > >
> > > > > This is an inefficient model, as a fat flow (say, with large packet
> > > > > sizes) on one eth queue hitting one core can starve another flow
> > > > > which happens to hit the same core, because both flows would get
> > > > > queued to the same qp.
> > > > > And we cannot just randomly submit to multiple qps from the same
> > > > > core, as then the ordering would be messed up.
> > > >
> > > > No one is suggesting that, I believe.
> > > >
> > > > > So the best possible usage model would be to map one eth queue to
> > > > > one crypto qp. That way, the core wouldn't unnecessarily pipeline
> > > > > the crypto processing.
> > > >
> > > > I doubt it is really the 'best possible usage model'.
> > > > There could be cases when forcing an lcore to use/manage more
> > > > crypto-queues will lead to negative effects: perf drop, not enough
> > > > crypto queues for all eth queues, etc.
> > > > If your HW/SW requires each eth queue to be matched with a separate
> > > > crypto-queue, then I think it should be an optional feature, while
> > > > keeping the default behavior intact.
> > >
> > > [Anoob] I think the question here is more to do with s/w crypto PMDs
> > > vs h/w crypto PMDs. For s/w PMDs, having more queues doesn't really
> > > make sense, and for h/w PMDs it's better.
> >
> > Not always.
> > If these queues belong to the same device, sometimes it is faster to use
> > just one queue per device per core.
> > HW descriptor status polling, etc. is not free.
> >
> > > I'll see how we can make this an optional feature. Would you be okay
> > > with allowing such behavior if the underlying PMD can support as many
> > > queues as lcore_params?
> > >
> > > As in, if the PMD doesn't support enough queues, we do 1 qp per core.
> > > Would that work for you?
> >
> > I am not fond of the idea of changing the default mapping method silently.
> > My preference would be a new command-line option (--cdev-mapping or so).
> > Another thought: make it more fine-grained and user-controlled by
> > extending the eth-port-queue-lcore mapping option,
> > from the current <port>,<queue>,<lcore>
> > to <port>,<queue>,<lcore>,<cdev-queue>.
> >
> > So let's say with 2 cores and 2 eth ports / 2 queues per port, for the
> > current mapping the user would do:
> > # use cdev queue 0 on all cdevs for lcore 6
> > # use cdev queue 1 on all cdevs for lcore 7
> > --lcores="6,7" ... -- --config="(0,0,6,0),(1,0,6,0),(0,1,7,1),(1,1,7,1)"
> >
> > For the mapping you'd like to have:
> > --lcores="6,7" ... -- --config="(0,0,6,0),(1,0,6,1),(0,1,7,2),(1,1,7,3)"
>
> [Anoob] I like this idea. This would work for the inline case as well.
>
> @Akhil, do you have any comments?
>
> Also, I think we should make it <port>,<queue>,<lcore>,<cdev>,<cdev-queue>.
>

Looks good to me, but I believe this would need more changes and testing in
the event patches. It also does not have any changes for the lookaside cases.

Can we move this to the next release and add the lookaside case as well in a
single go?
We need to close RC2 on 5th Feb, and I don't want to push this series for
RC3, as it is a massive change in ipsec-secgw.

-Akhil