Date: Wed, 11 Nov 2020 10:38:53 +0530
From: Nithin Dabilpuram
To: "Burakov, Anatoly"
Cc: jerinj@marvell.com, dev@dpdk.org, stable@dpdk.org
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2 2/3] vfio: fix DMA mapping granularity for type1 iova as va
In-Reply-To: <2d2b628e-be4c-0abb-6fb0-9bf98d28cc26@intel.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201105090423.11954-1-ndabilpuram@marvell.com> <20201105090423.11954-3-ndabilpuram@marvell.com> <2d2b628e-be4c-0abb-6fb0-9bf98d28cc26@intel.com>

On Tue, Nov 10, 2020 at 02:17:39PM +0000, Burakov, Anatoly wrote:
> On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
> > Partial unmapping is not supported for VFIO IOMMU type1
> > by the kernel. Though the kernel returns zero, the unmapped
> > size will not be the same as expected. So check the returned
> > unmap size and return an error.
> >
> > For IOVA as PA, DMA mapping is already done at memseg size
> > granularity. Do the same even for IOVA as VA mode: since
> > DMA map/unmap is triggered by heap allocations, maintain the
> > granularity of the memseg page size so that heap expansion
> > and contraction do not hit this issue.
> >
> > For user-requested DMA map/unmap, disallow partial unmapping
> > for VFIO type1.
> >
> > Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> > Cc: anatoly.burakov@intel.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Nithin Dabilpuram
> > ---
> >
> >
> > @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
> >  	/* for IOVA as VA mode, no need to care for IOVA addresses */
> >  	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
> >  		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
> > -		if (type == RTE_MEM_EVENT_ALLOC)
> > -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> > -					 len, 1);
> > -		else
> > -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> > -					 len, 0);
> > +		uint64_t page_sz = msl->page_sz;
> > +
> > +		/* Maintain granularity of DMA map/unmap to memseg size */
> > +		for (; cur_len < len; cur_len += page_sz) {
> > +			if (type == RTE_MEM_EVENT_ALLOC)
> > +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> > +						 vfio_va, page_sz, 1);
> > +			else
> > +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> > +						 vfio_va, page_sz, 0);
>
> I think you're mapping the same address here, over and over. Perhaps you
> meant `vfio_va + cur_len` for the mapping addresses?

There is a 'vfio_va += page_sz;' on the next line, right?

>
> --
> Thanks,
> Anatoly
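
For reference, below is a minimal sketch of the per-page loop under discussion, assuming (as the reply above states) that a 'vfio_va += page_sz;' increment follows the map/unmap call in the full patch; the quoted hunk is cut off before that line. All symbols except cur_len (declared here for completeness) are taken from the quoted vfio_mem_event_callback() code, so this is an illustration of the intended behaviour rather than the exact patch contents.

	/*
	 * Sketch only -- not the exact patch hunk. type, addr, len, msl,
	 * default_vfio_cfg and vfio_dma_mem_map() come from the quoted
	 * vfio_mem_event_callback() context.
	 */
	uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
	uint64_t page_sz = msl->page_sz;
	uint64_t cur_len = 0;

	/* Maintain granularity of DMA map/unmap to memseg page size */
	for (; cur_len < len; cur_len += page_sz) {
		if (type == RTE_MEM_EVENT_ALLOC)
			vfio_dma_mem_map(default_vfio_cfg, vfio_va,
					 vfio_va, page_sz, 1);
		else
			vfio_dma_mem_map(default_vfio_cfg, vfio_va,
					 vfio_va, page_sz, 0);

		/* Advance the VA so each iteration maps/unmaps the next page. */
		vfio_va += page_sz;
	}

With that increment in place, each iteration operates on a distinct page-sized window rather than remapping the same address, which is the point of the exchange above.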