From: Kevin Traynor
To: "Zhang, AlvinX", "Zhang, Qi Z", "Guo, Junfeng"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] net/ice: add ability to reduce the Rx latency
Date: Tue, 28 Sep 2021 10:22:27 +0100
Message-ID: <6774d67d-c4b9-dc39-55c1-358882759434@redhat.com>
References: <20210914013123.23768-1-alvinx.zhang@intel.com> <5a0b44cb-40d2-455f-edee-b706e0574983@redhat.com>
List-Id: DPDK patches and discussions

On 22/09/2021 03:16, Zhang, AlvinX wrote:
>> -----Original Message-----
>> From: Kevin Traynor
>> Sent: Tuesday, September 21, 2021 5:21 PM
>> To: Zhang, AlvinX; Zhang, Qi Z; Guo, Junfeng
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] net/ice: add ability to reduce the Rx latency
>>
>> On 18/09/2021 02:33, Zhang, AlvinX wrote:
>>>> -----Original Message-----
>>>> From: Kevin Traynor
>>>> Sent: Saturday, September 18, 2021 1:25 AM
>>>> To: Zhang, AlvinX; Zhang, Qi Z; Guo, Junfeng
>>>> Cc: dev@dpdk.org
>>>> Subject: Re: [dpdk-dev] [PATCH] net/ice: add ability to reduce the Rx
>>>> latency
>>>>
>>>> On 14/09/2021 02:31, Alvin Zhang wrote:
>>>>> This patch adds a devarg parameter to enable/disable reducing the Rx
>>>>> latency.
>>>>>
>>>>> Signed-off-by: Alvin Zhang
>>>>> ---
>>>>>  doc/guides/nics/ice.rst      |  8 ++++++++
>>>>>  drivers/net/ice/ice_ethdev.c | 26 +++++++++++++++++++++++---
>>>>>  drivers/net/ice/ice_ethdev.h |  1 +
>>>>>  3 files changed, 32 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
>>>>> index 5bc472f..3db0430 100644
>>>>> --- a/doc/guides/nics/ice.rst
>>>>> +++ b/doc/guides/nics/ice.rst
>>>>> @@ -219,6 +219,14 @@ Runtime Config Options
>>>>>
>>>>>      These ICE_DBG_XXX are defined in ``drivers/net/ice/base/ice_type.h``.
>>>>>
>>>>> +- ``Reduce Rx interrupts and latency`` (default ``0``)
>>>>> +
>>>>> +  vRAN workloads require low latency DPDK interface for the front
>>>>> +  haul interface connection to Radio. Now we can reduce Rx
>>>>> +  interrupts and latency by specify ``1`` for parameter ``rx-low-latency``::
>>>>> +
>>>>> +    -a 0000:88:00.0,rx-low-latency=1
>>>>> +
>>>>
>>>> When would a user select this and when not? What is the trade off?
>>>>
>>>> The text is a bit unclear. It looks below like it reduces the
>>>> interrupt latency, but not the number of interrupts. Maybe I got it wrong.
>>>
>>> Yes, it reduces the interrupt latency. We will refine the doc in the next
>>> patch.
>>>
>>
>> Thanks, the text in v2 is clearer.
>>
>>>>
>>>>
>>>>> Driver compilation and testing
>>>>> ------------------------------
>>>>>
>>>>> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
>>>>> index a4cd39c..85662e4 100644
>>>>> --- a/drivers/net/ice/ice_ethdev.c
>>>>> +++ b/drivers/net/ice/ice_ethdev.c
>>>>> @@ -29,12 +29,14 @@
>>>>>  #define ICE_PIPELINE_MODE_SUPPORT_ARG "pipeline-mode-support"
>>>>>  #define ICE_PROTO_XTR_ARG         "proto_xtr"
>>>>>  #define ICE_HW_DEBUG_MASK_ARG     "hw_debug_mask"
>>>>> +#define ICE_RX_LOW_LATENCY        "rx-low-latency"
>>>>>
>>>>>  static const char * const ice_valid_args[] = {
>>>>>  	ICE_SAFE_MODE_SUPPORT_ARG,
>>>>>  	ICE_PIPELINE_MODE_SUPPORT_ARG,
>>>>>  	ICE_PROTO_XTR_ARG,
>>>>>  	ICE_HW_DEBUG_MASK_ARG,
>>>>> +	ICE_RX_LOW_LATENCY,
>>>>>  	NULL
>>>>>  };
>>>>>
>>>>> @@ -1827,6 +1829,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
>>>>>  	if (ret)
>>>>>  		goto bail;
>>>>>
>>>>> +	ret = rte_kvargs_process(kvlist, ICE_RX_LOW_LATENCY,
>>>>> +				 &parse_bool, &ad->devargs.rx_low_latency);
>>>>> +
>>>>>  bail:
>>>>>  	rte_kvargs_free(kvlist);
>>>>>  	return ret;
>>>>> @@ -3144,8 +3149,9 @@ static int ice_init_rss(struct ice_pf *pf)
>>>>>  	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
>>>>>  	uint32_t val, val_tx;
>>>>> -	int i;
>>>>> +	int rx_low_latency, i;
>>>>>
>>>>> +	rx_low_latency = vsi->adapter->devargs.rx_low_latency;
>>>>>  	for (i = 0; i < nb_queue; i++) {
>>>>>  		/*do actual bind*/
>>>>>  		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
>>>>> @@ -3155,8 +3161,21 @@ static int ice_init_rss(struct ice_pf *pf)
>>>>>
>>>>>  		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
>>>>>  			    base_queue + i, msix_vect);
>>>>> +
>>>>>  		/* set ITR0 value */
>>>>> -		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x2);
>>>>> +		if (rx_low_latency) {
>>>>> +			/**
>>>>> +			 * Empirical configuration for optimal real time
>>>>> +			 * latency reduced interrupt throttling to 2us
>>>>> +			 */
>>>>> +			ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x1);
>>>>
>>>> Why not set this to 0? "Setting the INTERVAL to zero enables
>>>> immediate interrupt."
>>>>
>>
>> Didn't see a reply to this comment?
>>
>> I'm not requesting a change, just asking if there is a reason you didn't
>> choose the lowest latency setting, and if you should?
>
> Setting the INTERVAL to zero enables immediate interrupt, which will cause
> more interrupts at high packet rates, and more interrupts will consume more
> PCI bandwidth and CPU cycles.
> Setting to 2us is a performance trade-off.

ok, thanks.

>>
>>>>> +			ICE_WRITE_REG(hw, QRX_ITR(base_queue + i),
>>>>> +				      QRX_ITR_NO_EXPR_M);
>>>>> +		} else {
>>>>> +			ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x2);
>>>>> +			ICE_WRITE_REG(hw, QRX_ITR(base_queue + i), 0);
>>>>> +		}
>>>>> +
>>>>>  		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
>>>>>  		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
>>>>>  	}
>>>>> @@ -5314,7 +5333,8 @@ static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
>>>>>  			      ICE_HW_DEBUG_MASK_ARG "=0xXXX"
>>>>>  			      ICE_PROTO_XTR_ARG "=[queue:]"
>>>>>  			      ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>"
>>>>> -			      ICE_PIPELINE_MODE_SUPPORT_ARG "=<0|1>");
>>>>> +			      ICE_PIPELINE_MODE_SUPPORT_ARG "=<0|1>"
>>>>> +			      ICE_RX_LOW_LATENCY "=<0|1>");
>>>>>
>>>>>  RTE_LOG_REGISTER_SUFFIX(ice_logtype_init, init, NOTICE);
>>>>>  RTE_LOG_REGISTER_SUFFIX(ice_logtype_driver, driver, NOTICE);
>>>>> diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
>>>>> index b4bf651..c61cc1f 100644
>>>>> --- a/drivers/net/ice/ice_ethdev.h
>>>>> +++ b/drivers/net/ice/ice_ethdev.h
>>>>> @@ -463,6 +463,7 @@ struct ice_pf {
>>>>>  	 * Cache devargs parse result.
>>>>>  	 */
>>>>>  struct ice_devargs {
>>>>> +	int rx_low_latency;
>>>>>  	int safe_mode_support;
>>>>>  	uint8_t proto_xtr_dflt;
>>>>>  	int pipe_mode_support;
>>>>>
>>>
>