From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <huawei.xie@intel.com>
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by dpdk.org (Postfix) with ESMTP id 4BF5C2BD0
 for <dev@dpdk.org>; Thu, 25 Feb 2016 06:12:40 +0100 (CET)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga101.fm.intel.com with ESMTP; 24 Feb 2016 21:12:40 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.22,496,1449561600"; d="scan'208";a="752863894"
Received: from fmsmsx108.amr.corp.intel.com ([10.18.124.206])
 by orsmga003.jf.intel.com with ESMTP; 24 Feb 2016 21:12:38 -0800
Received: from shsmsx104.ccr.corp.intel.com (10.239.110.15) by
 FMSMSX108.amr.corp.intel.com (10.18.124.206) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Wed, 24 Feb 2016 21:12:37 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.136]) by
 SHSMSX104.ccr.corp.intel.com ([169.254.5.132]) with mapi id 14.03.0248.002;
 Thu, 25 Feb 2016 13:12:35 +0800
From: "Xie, Huawei" <huawei.xie@intel.com>
To: Ilya Maximets <i.maximets@samsung.com>, Thomas Monjalon
 <thomas.monjalon@6wind.com>
Thread-Topic: [PATCH RFC 4/4] doc: add note about rte_vhost_enqueue_burst
 thread safety.
Thread-Index: AdFq8Jaz1/1ngahpRXGW43kFOae+SQ==
Date: Thu, 25 Feb 2016 05:12:35 +0000
Message-ID: <C37D651A908B024F974696C65296B57B4C608145@SHSMSX101.ccr.corp.intel.com>
References: <1455863563-15751-1-git-send-email-i.maximets@samsung.com>
 <56C6DACA.7040109@samsung.com>
 <C37D651A908B024F974696C65296B57B4C5EC0DB@SHSMSX101.ccr.corp.intel.com>
 <3343596.CjhAKlsm75@xps13>
 <C37D651A908B024F974696C65296B57B4C5F068D@SHSMSX101.ccr.corp.intel.com>
 <56CD3A66.5020804@samsung.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: Dyasly Sergey <s.dyasly@samsung.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH RFC 4/4] doc: add note about
 rte_vhost_enqueue_burst thread safety.
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Thu, 25 Feb 2016 05:12:41 -0000

On 2/24/2016 1:07 PM, Ilya Maximets wrote:
> On 23.02.2016 08:56, Xie, Huawei wrote:
>> On 2/22/2016 6:16 PM, Thomas Monjalon wrote:
>>> 2016-02-22 02:07, Xie, Huawei:
>>>> On 2/19/2016 5:05 PM, Ilya Maximets wrote:
>>>>> On 19.02.2016 11:36, Xie, Huawei wrote:
>>>>>> On 2/19/2016 3:10 PM, Yuanhan Liu wrote:
>>>>>>> On Fri, Feb 19, 2016 at 09:32:43AM +0300, Ilya Maximets wrote:
>>>>>>>> Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
>>>>>>>> ---
>>>>>>>>  doc/guides/prog_guide/thread_safety_dpdk_functions.rst | 1 +
>>>>>>>>  1 file changed, 1 insertion(+)
>>>>>>>>
>>>>>>>> diff --git a/doc/guides/prog_guide/thread_safety_dpdk_functions.rst b/doc/guides/prog_guide/thread_safety_dpdk_functions.rst
>>>>>>>> index 403e5fc..13a6c89 100644
>>>>>>>> --- a/doc/guides/prog_guide/thread_safety_dpdk_functions.rst
>>>>>>>> +++ b/doc/guides/prog_guide/thread_safety_dpdk_functions.rst
>>>>>>>>  The mempool library is based on the DPDK lockless ring library and therefore is also multi-thread safe.
>>>>>>>> +rte_vhost_enqueue_burst() is also thread safe because based on lockless ring-buffer algorithm like the ring library.
>>>>>>> FYI, Huawei meant to make rte_vhost_enqueue_burst() not be thread-safe,
>>>>>>> to align with the usage of rte_eth_tx_burst().
>>>>>>>
>>>>>>> 	--yliu
>>>>>> I have a patch to remove the lockless enqueue. Unless there is a strong
>>>>>> reason, I prefer the vhost PMD to behave like other PMDs, with no internal
>>>>>> lockless algorithm. In the future, for people who really need it, we could
>>>>>> have a dynamic/static switch to enable it.
>>>> Thomas, what is your opinion on this and my patch removing the lockless enqueue?
>>> The thread safety behaviour is part of the API specification.
>>> If we want to enable/disable such behaviour, it must be done with an API
>>> function. But it would introduce a conditional statement in the fast path.
>>> That's why the priority must be to keep a simple and consistent behaviour
>>> and try to build around it. An API complexity may be considered only if there
>>> is a real (measured) gain.
>> Let us put the gain aside temporarily; I will do the measurement.
>> Vhost is wrapped as a PMD in Tetsuya's patch, and in DPDK OVS's
>> case it is wrapped as a vport like all other physical ports. The DPDK
>> app/OVS will treat all ports equally.
> That is not true. Currently vhost in Open vSwitch is implemented as a separate
> netdev class, so to use the concurrency of vhost we just need to remove
> two lines (rte_spinlock_lock and rte_spinlock_unlock) from the function
> __netdev_dpdk_vhost_send(). This will not change the behaviour of other types
> of ports.

I checked the OVS implementation, and it raised several concerns.
For physical ports, OVS uses multiple queues to handle concurrent tx.
For vhost ports: a) the thread-safe behaviour of vhost isn't actually used,
since an rte_spinlock is taken around it from outside; yes, that lock could
be removed. b) When a packet is sent to vhost, it is enqueued directly to
the guest without buffering. We could instead queue packets into a
thread-safe ring first and enqueue them to the guest at an appropriate time
later; then vhost's internal lockless enqueue isn't needed.
Besides, IMO the thread-safe implementation adds complexity to the vhost
implementation.

>> It will add complexity if the app
>> needs to know that some ports support concurrency while others do not.
>> Since all other PMDs don't support thread safety, it doesn't make sense
>> for the vhost PMD to support it. I believe the app will not use that
>> behaviour.
>> From the API's point of view, if we previously implemented it wrongly,
>> we need to fix it as early as possible.
