From mboxrd@z Thu Jan  1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Thomas Monjalon, Cheng Jiang, bruce.richardson@intel.com
Cc: chenbo.xia@intel.com, dev@dpdk.org, jiayu.hu@intel.com, yvonnex.yang@intel.com, yinan.wang@intel.com, alexr@nvidia.com, shahafs@nvidia.com
Date: Wed, 7 Apr 2021 10:48:13 +0200
Message-ID: <2cbf905c-bbc2-6959-6606-e1a84cddc0ac@redhat.com>
In-Reply-To: <2267082.chWHW8dCnR@thomas>
References: <20210317054054.34616-1-Cheng1.jiang@intel.com> <23c4f6e7-4895-44b7-4ff0-3a02f9f3f86a@redhat.com> <2267082.chWHW8dCnR@thomas>
Subject: Re: [dpdk-dev] [PATCH] examples/vhost: fix ioat ring space in callbacks
List-Id: DPDK patches and discussions

On 4/7/21 10:26 AM, Thomas Monjalon wrote:
> 07/04/2021 09:47, Maxime Coquelin:
>>
>> On 3/17/21 6:40 AM, Cheng Jiang wrote:
>>> We use the ioat ring space to determine whether the ioat callbacks
>>> can enqueue a packet to the ioat device. But one slot in the ioat
>>> ring cannot be used due to the ioat driver design, so we need to
>>> reserve one slot in the ioat ring to prevent a ring size mismatch
>>> in the ioat callbacks.
>>>
>>> Fixes: 2aa47e94bfb2 ("examples/vhost: add ioat ring space count and check")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Cheng Jiang
>>> ---
>>>  examples/vhost/ioat.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
>>> index 60b73be93..9cb5e0d50 100644
>>> --- a/examples/vhost/ioat.c
>>> +++ b/examples/vhost/ioat.c
>>> @@ -113,7 +113,7 @@ open_ioat(const char *value)
>>>  			goto out;
>>>  		}
>>>  		rte_rawdev_start(dev_id);
>>> -		cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE;
>>> +		cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE - 1;
>>
>> That really comforts me in thinking we need a generic abstraction for
>> DMA devices. How is the application developer supposed to know that
>> the DMA driver has such weird limitations?
>
> Having a generic DMA API may be interesting.
> Do you know any other HW candidate for such an API?
> Do you think rte_memcpy can be used as a SW driver?

Yes, I guess we could create a vdev driver with MEM_TO_MEM capability
using rte_memcpy().

Note that IOAT in the Kernel is supported by the DMA framework.

Regards,
Maxime