From mboxrd@z Thu Jan 1 00:00:00 1970
To: dev@dpdk.org
References: <20200306163524.1650-1-pbhagavatula@marvell.com>
 <20200306163524.1650-2-pbhagavatula@marvell.com>
From: Andrzej Ostruszka
Message-ID: <74f0a4d7-8148-f117-a1df-03d814cbe4c8@semihalf.com>
Date: Thu, 19 Mar 2020 10:36:39 +0100
In-Reply-To: <20200306163524.1650-2-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
List-Id: DPDK patches and discussions

On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh
>
> Add device arguments to lock Rx/Tx contexts.
> Application can either choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port.
>
> Example:
> 	-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh
> ---
>  doc/guides/nics/octeontx2.rst               |  14 ++
>  drivers/net/octeontx2/otx2_ethdev.c         | 179 +++++++++++++++++++-
>  drivers/net/octeontx2/otx2_ethdev.h         |   2 +
>  drivers/net/octeontx2/otx2_ethdev_devargs.c |  16 +-
>  drivers/net/octeontx2/otx2_rss.c            |  23 +++
>  5 files changed, 231 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> index 819d09e11..6a13c9b26 100644
> --- a/doc/guides/nics/octeontx2.rst
> +++ b/doc/guides/nics/octeontx2.rst
> @@ -207,6 +207,20 @@ Runtime Config Options
>     With the above configuration, application can enable inline IPsec processing
>     on 128 SAs (SPI 0-127).
>
> +- ``Lock Rx contexts in NDC cache``
> +
> +   Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
> +
> +   For example::
> +      -w 0002:02:00.0,lock_rx_ctx=1

Is the "=1" needed because kvargs requires it? If that is the case then
I'll think about extending kvargs to accept simple keys - this syntax
doesn't feel right when all one really wants is to test for the presence
of a flag (present = 1/true, absent = 0/false). See the sketch below.

BTW - a blank line is missing after "::".
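With such an extension a single handler could back both devargs. Just a
sketch (it assumes kvargs would pass value == NULL for a value-less key,
which is not what kvargs does today; the handler name is mine):

/* Hypothetical handler: a bare key, or an explicit "=1", sets the flag. */
static int
parse_presence_flag(const char *key __rte_unused, const char *value,
		    void *extra_args)
{
	uint16_t *flag = extra_args;

	*flag = (value == NULL) ? 1 : !!atoi(value);
	return 0;
}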
> +
> +- ``Lock Tx contexts in NDC cache``
> +
> +   Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
> +
> +   For example::
> +      -w 0002:02:00.0,lock_tx_ctx=1

Same as above.

[...]

> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index e60f4901c..592fef458 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -381,6 +381,38 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
>  		goto fail;
>  	}
>
> +	if (dev->lock_rx_ctx) {
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		aq->qidx = qid;
> +		aq->ctype = NIX_AQ_CTYPE_CQ;
> +		aq->op = NIX_AQ_INSTOP_LOCK;
> +
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		if (!aq) {
> +			/* The shared memory buffer can be full.
> +			 * Flush it and retry
> +			 */
> +			otx2_mbox_msg_send(mbox, 0);
> +			rc = otx2_mbox_wait_for_rsp(mbox, 0);
> +			if (rc < 0) {
> +				otx2_err("Failed to LOCK cq context");
> +				return 0;

Similar comments as for the previous patch. Is this not a failure? If it
is, then why "return 0"? If it is not, then don't log it as an error.
BTW, here the failure is not in the locking but in the flushing of the
mbox (right?), so maybe change the err-msg to differentiate it from the
case below - something like the sketch that follows.
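A sketch of what I mean, assuming this is treated as a real failure:

	rc = otx2_mbox_wait_for_rsp(mbox, 0);
	if (rc < 0) {
		/* Report the flush failure, not the lock, and
		 * propagate the error instead of returning 0.
		 */
		otx2_err("Failed to flush mbox while locking cq context");
		return rc;
	}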
> +			}
> +
> +			aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +			if (!aq) {
> +				otx2_err("Failed to LOCK rq context");
> +				return 0;

Same as above - error or not?

> +			}
> +		}
> +		aq->qidx = qid;
> +		aq->ctype = NIX_AQ_CTYPE_RQ;
> +		aq->op = NIX_AQ_INSTOP_LOCK;
> +		rc = otx2_mbox_process(mbox);
> +		if (rc < 0)
> +			otx2_err("Failed to LOCK rq context");

Ditto.

> +	}
> +
>  	return 0;
>  fail:
>  	return rc;

Same comments for *rq_uninit (below) as those for *rq_init (above).

> @@ -430,6 +462,38 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
>  		return rc;
>  	}
>
> +	if (dev->lock_rx_ctx) {
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		aq->qidx = rxq->rq;
> +		aq->ctype = NIX_AQ_CTYPE_CQ;
> +		aq->op = NIX_AQ_INSTOP_UNLOCK;
> +
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		if (!aq) {
> +			/* The shared memory buffer can be full.
> +			 * Flush it and retry
> +			 */
> +			otx2_mbox_msg_send(mbox, 0);
> +			rc = otx2_mbox_wait_for_rsp(mbox, 0);
> +			if (rc < 0) {
> +				otx2_err("Failed to UNLOCK cq context");
> +				return 0;
> +			}
> +
> +			aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +			if (!aq) {
> +				otx2_err("Failed to UNLOCK rq context");
> +				return 0;
> +			}
> +		}
> +		aq->qidx = rxq->rq;
> +		aq->ctype = NIX_AQ_CTYPE_RQ;
> +		aq->op = NIX_AQ_INSTOP_UNLOCK;
> +		rc = otx2_mbox_process(mbox);
> +		if (rc < 0)
> +			otx2_err("Failed to UNLOCK rq context");
> +	}
> +
>  	return 0;
>  }
>

And the same set of comments applies below to *sqb_lock/unlock - plus one
additional comment about the err-msg. Since this alloc/flush/retry
sequence is now repeated in every one of these functions, it might also
be worth factoring it out into a small helper - see the sketch below.
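Something along these lines (untested, the name is mine):

/* Allocate an AQ request; if the shared memory buffer is full, flush
 * the mbox once and retry.  Returns NULL when the flush or the second
 * allocation fails, so callers can report and propagate one error.
 */
static struct nix_aq_enq_req *
nix_aq_alloc_or_flush(struct otx2_mbox *mbox)
{
	struct nix_aq_enq_req *aq;

	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
	if (aq)
		return aq;

	otx2_mbox_msg_send(mbox, 0);
	if (otx2_mbox_wait_for_rsp(mbox, 0) < 0)
		return NULL;

	return otx2_mbox_alloc_msg_nix_aq_enq(mbox);
}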
> @@ -715,6 +779,90 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
>  	return flags;
>  }
>
> +static int
> +nix_sqb_lock(struct rte_mempool *mp)
> +{
> +	struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
> +	struct npa_aq_enq_req *req;
> +	int rc;
> +
> +	req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +	req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> +	req->ctype = NPA_AQ_CTYPE_AURA;
> +	req->op = NPA_AQ_INSTOP_LOCK;
> +
> +	req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +	if (!req) {
> +		/* The shared memory buffer can be full.
> +		 * Flush it and retry
> +		 */
> +		otx2_mbox_msg_send(npa_lf->mbox, 0);
> +		rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
> +		if (rc < 0) {
> +			otx2_err("Failed to LOCK AURA context");
> +			return 0;
> +		}
> +
> +		req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +		if (!req) {
> +			otx2_err("Failed to LOCK POOL context");

Apart from the general err-or-not-err comment: at this point you have not
attempted to lock the pool yet - you do that below. So, just like before,
use a different msg 6 lines above (since there you have only flushed, not
attempted to lock), and here use "Failed to LOCK AURA context". The same
comment applies below for unlock.

> +			return 0;
> +		}
> +	}
> +
> +	req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> +	req->ctype = NPA_AQ_CTYPE_POOL;
> +	req->op = NPA_AQ_INSTOP_LOCK;
> +
> +	rc = otx2_mbox_process(npa_lf->mbox);
> +	if (rc < 0)
> +		otx2_err("Unable to lock POOL in NDC");
> +
> +	return 0;
> +}
> +
> +static int
> +nix_sqb_unlock(struct rte_mempool *mp)
> +{
> +	struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
> +	struct npa_aq_enq_req *req;
> +	int rc;
> +
> +	req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +	req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> +	req->ctype = NPA_AQ_CTYPE_AURA;
> +	req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> +	req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +	if (!req) {
> +		/* The shared memory buffer can be full.
> +		 * Flush it and retry
> +		 */
> +		otx2_mbox_msg_send(npa_lf->mbox, 0);
> +		rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
> +		if (rc < 0) {
> +			otx2_err("Failed to UNLOCK AURA context");
> +			return 0;
> +		}
> +
> +		req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +		if (!req) {
> +			otx2_err("Failed to UNLOCK POOL context");
> +			return 0;
> +		}
> +	}
> +	req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> +	req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> +	req->ctype = NPA_AQ_CTYPE_POOL;
> +	req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> +	rc = otx2_mbox_process(npa_lf->mbox);
> +	if (rc < 0)
> +		otx2_err("Unable to UNLOCK AURA in NDC");
> +
> +	return 0;
> +}
> +
>  static int
>  nix_sq_init(struct otx2_eth_txq *txq)
>  {
> @@ -757,7 +905,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
>  	/* Many to one reduction */
>  	sq->sq.qint_idx = txq->sq % dev->qints;
>
> -	return otx2_mbox_process(mbox);
> +	rc = otx2_mbox_process(mbox);
> +	if (rc < 0)
> +		return rc;
> +
> +	if (dev->lock_tx_ctx) {
> +		sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		sq->qidx = txq->sq;
> +		sq->ctype = NIX_AQ_CTYPE_SQ;
> +		sq->op = NIX_AQ_INSTOP_LOCK;
> +
> +		rc = otx2_mbox_process(mbox);
> +	}
> +
> +	return rc;
>  }
>
>  static int
> @@ -800,6 +961,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
>  	if (rc)
>  		return rc;
>
> +	if (dev->lock_tx_ctx) {
> +		/* Unlock sq */
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		aq->qidx = txq->sq;
> +		aq->ctype = NIX_AQ_CTYPE_SQ;
> +		aq->op = NIX_AQ_INSTOP_UNLOCK;
> +
> +		rc = otx2_mbox_process(mbox);
> +		if (rc < 0)
> +			return rc;
> +
> +		nix_sqb_unlock(txq->sqb_pool);
> +	}
> +
>  	/* Read SQ and free sqb's */
>  	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>  	aq->qidx = txq->sq;
> @@ -921,6 +1096,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
>  	}
>
>  	nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
> +	if (dev->lock_tx_ctx)
> +		nix_sqb_lock(txq->sqb_pool);
>
>  	return 0;
>  fail:
> diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
> index e5684f9f0..71f8cf729 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.h
> +++ b/drivers/net/octeontx2/otx2_ethdev.h
> @@ -287,6 +287,8 @@ struct otx2_eth_dev {
>  	uint16_t scalar_ena;
>  	uint16_t rss_tag_as_xor;
>  	uint16_t max_sqb_count;
> +	uint16_t lock_rx_ctx;
> +	uint16_t lock_tx_ctx;
>  	uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
>  	uint64_t rx_offloads;
>  	uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
> diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> index bc11f54b5..0857d8247 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> @@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
>  #define OTX2_FLOW_MAX_PRIORITY	"flow_max_priority"
>  #define OTX2_SWITCH_HEADER_TYPE	"switch_header"
>  #define OTX2_RSS_TAG_AS_XOR	"tag_as_xor"
> +#define OTX2_LOCK_RX_CTX	"lock_rx_ctx"
> +#define OTX2_LOCK_TX_CTX	"lock_tx_ctx"
>
>  int
>  otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>  	uint16_t switch_header_type = 0;
>  	uint16_t flow_max_priority = 3;
>  	uint16_t ipsec_in_max_spi = 1;
> -	uint16_t scalar_enable = 0;
>  	uint16_t rss_tag_as_xor = 0;
> +	uint16_t scalar_enable = 0;
>  	struct rte_kvargs *kvlist;
> +	uint16_t lock_rx_ctx = 0;
> +	uint16_t lock_tx_ctx = 0;
>
>  	if (devargs == NULL)
>  		goto null_devargs;
> @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>  			   &parse_switch_header_type, &switch_header_type);
>  	rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
>  			   &parse_flag, &rss_tag_as_xor);
> +	rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
> +			   &parse_flag, &lock_rx_ctx);
> +	rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
> +			   &parse_flag, &lock_tx_ctx);
>  	otx2_parse_common_devargs(kvlist);
>  	rte_kvargs_free(kvlist);
>
> @@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>  	dev->scalar_ena = scalar_enable;
>  	dev->rss_tag_as_xor = rss_tag_as_xor;
>  	dev->max_sqb_count = sqb_count;
> +	dev->lock_rx_ctx = lock_rx_ctx;
> +	dev->lock_tx_ctx = lock_tx_ctx;
>  	dev->rss_info.rss_size = rss_size;
>  	dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
>  	dev->npc_flow.flow_max_priority = flow_max_priority;
> @@ -187,4 +197,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
>  			      OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
>  			      OTX2_FLOW_MAX_PRIORITY "=<1-32>"
>  			      OTX2_SWITCH_HEADER_TYPE "="
> -			      OTX2_RSS_TAG_AS_XOR "=1");
> +			      OTX2_RSS_TAG_AS_XOR "=1"
> +			      OTX2_LOCK_RX_CTX "=<1-65535>"
> +			      OTX2_LOCK_TX_CTX "=<1-65535>");

AFAIU the "=1" is required due to a kvargs parsing limitation. But why is
the range here 1-65535 when all you want is just a boolean flag (the key
being present or not)?
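If it is just a flag, then registering it the way the existing
"tag_as_xor" key is registered above would better reflect the usage - a
sketch of the last lines of that string:

			      OTX2_RSS_TAG_AS_XOR "=1"
			      OTX2_LOCK_RX_CTX "=1"
			      OTX2_LOCK_TX_CTX "=1");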
> diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
> index 7a8c8f3de..34005ef02 100644
> --- a/drivers/net/octeontx2/otx2_rss.c
> +++ b/drivers/net/octeontx2/otx2_rss.c
> @@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
>  		req->qidx = (group * rss->rss_size) + idx;
>  		req->ctype = NIX_AQ_CTYPE_RSS;
>  		req->op = NIX_AQ_INSTOP_INIT;
> +
> +		if (!dev->lock_rx_ctx)
> +			continue;
> +
> +		req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		if (!req) {
> +			/* The shared memory buffer can be full.
> +			 * Flush it and retry
> +			 */
> +			otx2_mbox_msg_send(mbox, 0);
> +			rc = otx2_mbox_wait_for_rsp(mbox, 0);
> +			if (rc < 0)
> +				return rc;
> +
> +			req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +			if (!req)
> +				return -ENOMEM;
> +		}
> +		req->rss.rq = ind_tbl[idx];
> +		/* Fill AQ info */
> +		req->qidx = (group * rss->rss_size) + idx;
> +		req->ctype = NIX_AQ_CTYPE_RSS;
> +		req->op = NIX_AQ_INSTOP_LOCK;
>  	}
>
>  	otx2_mbox_msg_send(mbox, 0);
>

And here you do treat the locking errors as errors - so I think you should
settle on this style and fix the previous cases accordingly.

With regards
Andrzej Ostruszka