From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Hu, Jiayu", "Ma, WenwuX", dev@dpdk.org
Cc: "Xia, Chenbo", "Jiang, Cheng1", "Wang, YuanX"
Date: Fri, 16 Jul 2021 11:02:55 +0200
Subject: Re: [dpdk-dev] [PATCH v5 3/4] vhost: support async dequeue for split ring

On 7/16/21 9:55 AM, Hu, Jiayu wrote:
>
>
>> -----Original Message-----
>> From: Maxime Coquelin
>> Sent: Friday, July 16, 2021 3:46 PM
>> To: Hu, Jiayu; Ma, WenwuX; dev@dpdk.org
>> Cc: Xia, Chenbo; Jiang, Cheng1; Wang, YuanX
>> Subject: Re: [PATCH v5 3/4] vhost: support async dequeue for split ring
>>
>> Hi,
>>
>> On 7/16/21 3:10 AM, Hu, Jiayu wrote:
>>> Hi, Maxime,
>>>
>>>> -----Original Message-----
>>>> From: Maxime Coquelin
>>>> Sent: Thursday, July 15, 2021 9:18 PM
>>>> To: Hu, Jiayu; Ma, WenwuX; dev@dpdk.org
>>>> Cc: Xia, Chenbo; Jiang, Cheng1; Wang, YuanX
>>>> Subject: Re: [PATCH v5 3/4]
>>>> vhost: support async dequeue for split ring
>>>>
>>>>
>>>> On 7/14/21 8:50 AM, Hu, Jiayu wrote:
>>>>> Hi Maxime,
>>>>>
>>>>> Thanks for your comments. Replies are inline.
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Maxime Coquelin
>>>>>> Sent: Tuesday, July 13, 2021 10:30 PM
>>>>>> To: Ma, WenwuX; dev@dpdk.org
>>>>>> Cc: Xia, Chenbo; Jiang, Cheng1; Hu, Jiayu; Wang, YuanX
>>>>>> Subject: Re: [PATCH v5 3/4] vhost: support async dequeue for split
>>>>>> ring
>>>>>>
>>>>>>>  struct async_inflight_info {
>>>>>>>  	struct rte_mbuf *mbuf;
>>>>>>> -	uint16_t descs; /* num of descs inflight */
>>>>>>> +	union {
>>>>>>> +		uint16_t descs; /* num of descs in-flight */
>>>>>>> +		struct async_nethdr nethdr;
>>>>>>> +	};
>>>>>>>  	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
>>>>>>> -};
>>>>>>> +} __rte_cache_aligned;
>>>>>>
>>>>>> Does it really need to be cache aligned?
>>>>>
>>>>> How about changing it to 32-byte alignment? Then a cache line can hold two objects.
>>>>
>>>> Or not forcing any alignment at all? Would there really be a
>>>> performance regression?
>>>>
>>>>>>
>>>>>>>
>>>>>>>  /**
>>>>>>>   * dma channel feature bit definition
>>>>>>> @@ -193,4 +201,34 @@ __rte_experimental
>>>>>>>  uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>>>>>>>  	struct rte_mbuf **pkts, uint16_t count);
>>>>>>>
>>>>>>> +/**
>>>>>>> + * This function tries to receive packets from the guest with offloading
>>>>>>> + * large copies to the DMA engine. Successfully dequeued packets are
>>>>>>> + * transfer completed, either by the CPU or the DMA engine, and they are
>>>>>>> + * returned in "pkts". There may be other packets that are sent from
>>>>>>> + * the guest but being transferred by the DMA engine, called in-flight
>>>>>>> + * packets. The amount of in-flight packets by now is returned in
>>>>>>> + * "nr_inflight". This function will return in-flight packets only after
>>>>>>> + * the DMA engine finishes transferring.
>>>>>>
>>>>>> I am not sure I understand that comment. Is it still "in-flight"
>>>>>> if the DMA transfer is completed?
>>>>>
>>>>> "In-flight" means packet copies have been submitted to the DMA, but the
>>>>> DMA hasn't completed the copies yet.
>>>>>
>>>>>>
>>>>>> Are we ensuring packets are not reordered with this way of working?
>>>>>
>>>>> There is a threshold that can be set by users. If it is set to 0, which
>>>>> means all packet copies are assigned to the DMA, the packets sent
>>>>> from the guest will not be reordered.
>>>>
>>>> Reordering packets is bad in my opinion. We cannot expect the user to
>>>> know that he should set the threshold to zero to have packets ordered.
>>>>
>>>> Maybe we should consider not having a threshold, and so have every
>>>> descriptor handled either by the CPU (sync datapath) or by the DMA
>>>> (async datapath). Doing so would simplify the code a lot, and would
>>>> make performance/latency more predictable.
>>>>
>>>> I understand that we might not get the best performance for every
>>>> packet size doing that, but that may be a tradeoff we would make to
>>>> have the feature maintainable and easily usable by the user.
>>>
>>> I understand and agree in some way. But before changing the existing
>>> design in async enqueue and dequeue, we need more careful tests, as the
>>> current design is well validated and performance looks good. So I suggest
>>> doing it in 21.11.
>>
>> My understanding was that for the enqueue path packets were not reordered,
>> thinking the used ring was written in order, but it seems I was wrong.
>>
>> What kind of validation and performance testing has been done? I can
>> imagine reordering having a bad impact on L4+ benchmarks.
>
> Iperf and scp in V2V scenarios.
>
> One thing to notice is that if we guarantee in-order delivery, small packets will be blocked
> by large packets, especially control packets in TCP, which significantly increases
> latency. In iperf tests, it will impact connection setup and increase latency. The current
> design doesn't show big impacts on iperf and scp tests, but I am not sure about more
> complex networking scenarios.
>

Ok, I see. I guess that depending on the payload size, one can see a perf
improvement if all the data segments are larger than the threshold. Or it
could cause a perf penalty if the last segments arrive before the previous
ones.

>>
>> Let's first fix this for the enqueue path, then submit a new revision for the dequeue
>> path without packet reordering.
>
> Sure. The fix needs to be done very carefully, IMO, so I'd suggest more tests
> before any modification.
>
> Thanks,
> Jiayu
>>
>> Regards,
>> Maxime
>>
>>> Thanks,
>>> Jiayu
>>>
>
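
For what it's worth, here is a minimal standalone sketch of the 32-byte-alignment
alternative discussed above. The async_nethdr body and the __rte_aligned() define
are placeholders just to make it build outside of DPDK; only the layout idea from
the quoted patch is the point.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins: the real rte_mbuf and async_nethdr definitions live in DPDK
 * headers and in the patch under review; only sizes matter for this sketch. */
struct rte_mbuf;

struct async_nethdr {            /* placeholder body, not the patch definition */
	uint8_t hdr[12];
	uint8_t valid;
};

/* Same expansion as the alignment helper in rte_common.h */
#define __rte_aligned(a) __attribute__((__aligned__(a)))

/* 32-byte alignment instead of full cache-line (64-byte) alignment:
 * two in-flight entries fit in one cache line, while the union layout
 * from the patch is kept unchanged. */
struct async_inflight_info {
	struct rte_mbuf *mbuf;
	union {
		uint16_t descs;          /* num of descs in-flight */
		struct async_nethdr nethdr;
	};
	uint16_t nr_buffers;             /* num of buffers in-flight for packed ring */
} __rte_aligned(32);

int main(void)
{
	/* Expected: 32 bytes, i.e. two objects per 64-byte cache line. */
	printf("sizeof(struct async_inflight_info) = %zu\n",
	       sizeof(struct async_inflight_info));
	return 0;
}

With these placeholder sizes the struct ends up at 32 bytes, so two entries share
one 64-byte cache line; whether that, full cache-line alignment, or no forced
alignment at all makes a measurable difference is exactly what would need to be
benchmarked.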