From: Jerin Jacob
Date: Tue, 21 Sep 2021 20:26:33 +0530
Subject: Re: [dpdk-dev] [PATCH v3 2/8] dmadev: add burst capacity API
To: "Pai G, Sunil"
Cc: "Hu, Jiayu", "Richardson, Bruce", dpdk-dev, "Walsh, Conor",
 "Laatz, Kevin", fengchengwen, Jerin Jacob, Satananda Burla,
 Radha Mohan Chintakuntla
References: <20210826183301.333442-1-bruce.richardson@intel.com>
 <20210907164925.291904-1-bruce.richardson@intel.com>
 <20210907164925.291904-3-bruce.richardson@intel.com>
 <8622d4b44e8e4b2e90a137a691f0c0a6@intel.com>

On Tue, Sep 21, 2021 at 7:46 PM Pai G, Sunil wrote:
>
> Hi Jerin,

Hi Sunil,

> > > > > > From: Kevin Laatz <kevin.laatz@intel.com>
> > > > > >
> > > > > > Add a burst capacity check API to the dmadev library. This API is
> > > > > > useful to applications which need to know how many descriptors can
> > > > > > be enqueued in the current batch. For example, it could be used to
> > > > > > determine whether all segments of a multi-segment packet can be
> > > > > > enqueued in the same batch or not (to avoid half-offload of the
> > > > > > packet).
> > > > > >
> > > > > > # Could you share more details on the use case with vhost?
> > > > > > # Are they planning to use this in the fast path? If so, it needs
> > > > > > # to move to a fast-path function pointer.
> > > > >
> > > > > I believe the intent is to use it on the fast path, but I would
> > > > > assume only once per burst, so the penalty for non-fastpath use may
> > > > > be acceptable. As you point out, an app that really doesn't want to
> > > > > pay that penalty can track ring use itself.
> > > > >
> > > > > The desire for fast-path use is also why I suggested having the
> > > > > space as an optional return parameter from the submit API call. It
> > > > > could logically also be a return value from the "completed" call,
> > > > > which might actually make more sense.
> > > > >
> > > > > > # Assume the use case needs N rte_dma_copy calls to complete a
> > > > > > logical copy at the vhost level. Is there any issue with
> > > > > > half-offload, meaning the logical copy is marked completed only
> > > > > > when the Nth copy has successfully completed. Right?
> > > > >
> > > > > Yes, as I understand it, the issue is for multi-segment packets,
> > > > > where we only want to enqueue the first segment if we know we will
> > > > > succeed with the final one too.
> > > >
> > > > Sorry for the delay in reply.
> > > >
> > > > If so, why do we need this API? We can mark a logical transaction
> > > > completed iff the final segment succeeded. Since this is a fast-path
> > > > API, I would like to really understand the real use case for it, so
> > > > that if it is required we can implement it in an optimized way.
> > > > Otherwise drivers do not need to implement this to have a generic
> > > > solution across all the drivers.
> > >
> > > Hi Jiayu, Sunil,
> > >
> > > The fact is that it's very hard for apps to calculate the available
> > > space of a DMA ring.
> >
> > Yes, I agree.
> >
> > My question is more why to calculate the space per burst and introduce
> > yet another fast-path API.
> > For example, the application needs to copy 8 segments to complete a
> > logical copy from the application's perspective.
> > In that case, only when the 8th copy is completed does the application
> > mark the logical copy completed.
> > i.e. why check per burst whether 8 segments are available or not? Even
> > if space is available, there may be multiple reasons why any of the
> > segment copies can fail. So the application needs to track whether all
> > the jobs completed anyway.
> > Am I missing something in terms of vhost or OVS usage?
>
> For the packets that do not entirely fit in the DMA ring, we have a SW
> copy fallback in place.
> So we would like to avoid the scenario, caused by the DMA ring being
> full, where some parts of the packet are copied through DMA and other
> parts by the CPU.
> Besides, this API would also help improve debuggability/device
> introspection to check the occupancy, rather than the app having to
> manually track the state of every DMA device in use.

To understand it better, could you share more details on the feedback
mechanism of your application enqueue?

app_enqueue_v1(.., nb_seg)
{
	/* Not enough space: let the application handle it by dropping
	 * or resubmitting. */
	if (rte_dmadev_burst_capacity() < nb_seg)
		return -ENOSPC;

	do rte_dma_op() in a loop without checking errors;
	return 0; /* Success */
}

vs

app_enqueue_v2(.., nb_seg)
{
	int rc;

	rc |= rte_dma_op() in a loop without checking errors;
	return rc; /* Return the actual status to the application. If there
		    * is not enough space, let the application handle it by
		    * dropping or resubmitting. */
}

Are app_enqueue_v1() and app_enqueue_v2() logically the same from the
application's PoV? If not, could you explain which version you are
planning to use for app_enqueue()?

> Copying from the other thread:
>
> > What are those scenarios? Could you share some descriptions of them?
> > What if the final or any segment fails even when the space is
> > available? You have to take care of that anyway. Right?
>
> I think this is app dependent, no? The application can choose not to
> take care of such scenarios and treat the packets as dropped.
> Ring-full scenarios (-ENOSPC from rte_dma_copy) could be avoided with
> this API, but other errors mean a failure which unfortunately cannot be
> avoided.
>
> > > For DSA, the available space is decided by three factors: the number
> > > of available slots in the SW ring, the max batching size of a batch
> > > descriptor, and whether there are available batch descriptors. The
> > > first one is configured by SW, and apps can calculate it. But the
> > > second depends on the DSA HW, and the third one is hidden inside the
> > > DSA driver and is not visible to apps.
> > > Considering the complexity of different DMA HW, I think the best way
> > > is to hide all the details inside the DMA dev and provide this
> > > capacity check API for apps.
> > >
> > > Thanks,
> > > Jiayu
> > >
> > > > > > # There is already nb_desc with which a dma_queue is
> > > > > > configured. So if the application does its accounting properly,
> > > > > > it knows how many descriptors it has used up and how many
> > > > > > completions it has processed.
> > > > >
> > > > > Agreed. It's just more work for the app, and for simplicity and
> > > > > completeness I think we should add this API. Because there are
> > > > > other options I think it should be available, but not as a
> > > > > fast-path fn (though again, the difference is likely very small
> > > > > for something not called for every enqueue).
> > > > >
> > > > > > Would like to understand more details on this API usage.
> > > > >
> > > > > Adding Sunil and Jiayu on CC who are looking at this area from
> > > > > the OVS and vhost sides.
> > > >
> > > > See above.
> > > >
> > > > Sunil, Jiayu, could you share the details on the usage and why it
> > > > is needed?
> > > > >
> > > > > /Bruce