From: "jiangheng (G)"
To: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru
CC: users@dpdk.org
Subject: rte_mempool_ops support "ring_sp_sc" in single thread mode
Date: Thu, 13 Jul 2023 04:12:12 +0000

Hi,

Does rte_mempool_ops need to support the scenario where the producer and the consumer run in the same thread?

Currently, “ring_sp_sc= 221; can be used in this scenario, but its dequeue and enqueue functions ha= ve memory barriers, which are not required in the same single thread.

In addition, if the mempool cache is not enabled, r.prod and r.cons keep increasing and the ring behaves as a FIFO, so a recently released mbuf is not reused until every other mbuf has cycled through, which also hurts performance.
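
(For completeness, the existing way to get that reuse back is to enable the
per-lcore cache when creating the pool, roughly as in the sketch below; the
sizes are only example values. The question here is specifically about the
case where no cache is configured.)

#include <rte_mbuf.h>
#include <rte_lcore.h>

static struct rte_mempool *
create_cached_pool(void)
{
	/* cache_size = 256 (example value): freed mbufs stay in the
	 * per-lcore cache and are handed back LIFO. */
	return rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}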

 

We could refer to the mempool cache and implement a "ring_single_thread_sp_sc" ops: a plain LIFO with no atomics and no barriers. In this simple scenario, performance should improve.
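
To make that concrete, below is a rough sketch of what such an ops could look
like: a plain array used as a LIFO, so the most recently freed object is
handed out first. All names here ("ring_single_thread_sp_sc", st_*) are made
up for illustration, it is not an existing driver, and error handling is kept
minimal:

#include <errno.h>
#include <rte_mempool.h>
#include <rte_malloc.h>

struct st_stack {
	uint32_t len;	/* number of objects currently stored */
	void *objs[];	/* flexible array, sized to mp->size at alloc time */
};

static int
st_alloc(struct rte_mempool *mp)
{
	struct st_stack *s;

	s = rte_zmalloc_socket(NULL, sizeof(*s) + sizeof(void *) * mp->size,
			RTE_CACHE_LINE_SIZE, mp->socket_id);
	if (s == NULL)
		return -ENOMEM;
	mp->pool_data = s;
	return 0;
}

static void
st_free(struct rte_mempool *mp)
{
	rte_free(mp->pool_data);
}

static int
st_enqueue(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
{
	struct st_stack *s = mp->pool_data;
	unsigned int i;

	/* At most mp->size objects exist, so the array cannot overflow. */
	for (i = 0; i < n; i++)
		s->objs[s->len + i] = obj_table[i];
	s->len += n;
	return 0;
}

static int
st_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
{
	struct st_stack *s = mp->pool_data;
	unsigned int i;

	if (s->len < n)
		return -ENOBUFS;
	/* LIFO pop: hand back the most recently freed objects first. */
	for (i = 0; i < n; i++)
		obj_table[i] = s->objs[s->len - n + i];
	s->len -= n;
	return 0;
}

static unsigned int
st_get_count(const struct rte_mempool *mp)
{
	const struct st_stack *s = mp->pool_data;

	return s->len;
}

static const struct rte_mempool_ops st_ops = {
	.name = "ring_single_thread_sp_sc",
	.alloc = st_alloc,
	.free = st_free,
	.enqueue = st_enqueue,
	.dequeue = st_dequeue,
	.get_count = st_get_count,
};

RTE_MEMPOOL_REGISTER_OPS(st_ops);

If I remember correctly, the in-tree "stack" mempool driver is close in
spirit but takes a spinlock around each push/pop, which a single-thread
variant would not need.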

 

This scenario is common:

Take an mbuf mempool: a DPDK NIC receives a packet, and the protocol stack processes and then releases it, so the mbuf is allocated and freed in the same thread.
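
A minimal sketch of that run-to-completion pattern (port/queue setup omitted;
port_id and queue_id are placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Receive, process and free on the same lcore: the mbufs only ever move
 * between this thread and its mempool. */
static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb, i;

	for (;;) {
		nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
		for (i = 0; i < nb; i++) {
			/* ... protocol stack processing ... */
			rte_pktmbuf_free(pkts[i]);	/* back to the same pool */
		}
	}
}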

 

If this makes sense, I can offer a patch.

 

Thanks
