From mboxrd@z Thu Jan 1 00:00:00 1970
To: Nalla Pradeep, Radha Mohan Chintakuntla, Veerasenareddy Burru
Cc: jerinj@marvell.com, dev@dpdk.org, sburla@marvell.com
References: <20210118093602.5449-1-pnalla@marvell.com> <20210118093602.5449-11-pnalla@marvell.com>
From: Ferruh Yigit
Date: Tue, 26 Jan 2021 15:35:36 +0000
In-Reply-To: <20210118093602.5449-11-pnalla@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v2 11/11] net/octeontx_ep: Transmit data path function added

On 1/18/2021 9:36 AM, Nalla Pradeep wrote:
> 1. Packet transmit function for both otx and otx2 are added.
> 2. Flushing transmit(command) queue when pending commands are more than
>    maximum allowed value (currently 16).
> 3. Scatter gather support if the packet spans multiple buffers.
>
> Signed-off-by: Nalla Pradeep

<...>

> +uint16_t
> +otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
> +{
> +        struct otx_ep_instr_64B iqcmd;
> +        struct otx_ep_instr_queue *iq;
> +        struct otx_ep_device *otx_ep;
> +        struct rte_mbuf *m;
> +
> +        uint32_t iqreq_type, sgbuf_sz;
> +        int dbell, index, count = 0;
> +        unsigned int pkt_len, i;
> +        int gather, gsz;
> +        void *iqreq_buf;
> +        uint64_t dptr;
> +
> +        iq = (struct otx_ep_instr_queue *)tx_queue;
> +        otx_ep = iq->otx_ep_dev;
> +
> +        /* if (!otx_ep->started || !otx_ep->linkup) {
> +         *        goto xmit_fail;
> +         * }
> +         */

Please drop the commented out code.
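(If the intent is to re-enable this check later, it is better carried as live code than as a comment. A minimal sketch of such a guard, assuming the 'started'/'linkup' fields really exist on struct otx_ep_device at this point and that bailing out to xmit_fail is the desired behaviour:

        /* hypothetical guard, not part of the patch */
        if (unlikely(!otx_ep->started || !otx_ep->linkup))
                goto xmit_fail;

Otherwise, simply remove the block.)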
> +
> +        iqcmd.ih.u64 = 0;
> +        iqcmd.pki_ih3.u64 = 0;
> +        iqcmd.irh.u64 = 0;
> +
> +        /* ih invars */
> +        iqcmd.ih.s.fsz = OTX_EP_FSZ;
> +        iqcmd.ih.s.pkind = otx_ep->pkind; /* The SDK decided PKIND value */
> +
> +        /* pki ih3 invars */
> +        iqcmd.pki_ih3.s.w = 1;
> +        iqcmd.pki_ih3.s.utt = 1;
> +        iqcmd.pki_ih3.s.tagtype = ORDERED_TAG;
> +        /* sl will be sizeof(pki_ih3) */
> +        iqcmd.pki_ih3.s.sl = OTX_EP_FSZ + OTX_CUST_DATA_LEN;
> +
> +        /* irh invars */
> +        iqcmd.irh.s.opcode = OTX_EP_NW_PKT_OP;
> +
> +        for (i = 0; i < nb_pkts; i++) {
> +                m = pkts[i];
> +                if (m->nb_segs == 1) {
> +                        /* dptr */
> +                        dptr = rte_mbuf_data_iova(m);
> +                        pkt_len = rte_pktmbuf_data_len(m);
> +                        iqreq_buf = m;
> +                        iqreq_type = OTX_EP_REQTYPE_NORESP_NET;
> +                        gather = 0;
> +                        gsz = 0;
> +                } else {
> +                        struct otx_ep_buf_free_info *finfo;
> +                        int j, frags, num_sg;
> +
> +                        if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
> +                                goto xmit_fail;
> +
> +                        finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
> +                                                        sizeof(*finfo), 0);
> +                        if (finfo == NULL) {
> +                                otx_ep_err("free buffer alloc failed\n");
> +                                goto xmit_fail;
> +                        }
> +                        num_sg = (m->nb_segs + 3) / 4;
> +                        sgbuf_sz = sizeof(struct otx_ep_sg_entry) * num_sg;
> +                        finfo->g.sg =
> +                                rte_zmalloc(NULL, sgbuf_sz, OTX_EP_SG_ALIGN);
> +                        if (finfo->g.sg == NULL) {
> +                                rte_free(finfo);
> +                                otx_ep_err("sg entry alloc failed\n");
> +                                goto xmit_fail;
> +                        }
> +                        gather = 1;
> +                        gsz = m->nb_segs;
> +                        finfo->g.num_sg = num_sg;
> +                        finfo->g.sg[0].ptr[0] = rte_mbuf_data_iova(m);
> +                        set_sg_size(&finfo->g.sg[0], m->data_len, 0);
> +                        pkt_len = m->data_len;
> +                        finfo->mbuf = m;
> +
> +                        frags = m->nb_segs - 1;
> +                        j = 1;
> +                        m = m->next;
> +                        while (frags--) {
> +                                finfo->g.sg[(j >> 2)].ptr[(j & 3)] =
> +                                        rte_mbuf_data_iova(m);
> +                                set_sg_size(&finfo->g.sg[(j >> 2)],
> +                                        m->data_len, (j & 3));
> +                                pkt_len += m->data_len;
> +                                j++;
> +                                m = m->next;
> +                        }
> +                        dptr = rte_mem_virt2iova(finfo->g.sg);
> +                        iqreq_buf = finfo;
> +                        iqreq_type = OTX_EP_REQTYPE_NORESP_GATHER;
> +                        if (pkt_len > OTX_EP_MAX_PKT_SZ) {
> +                                rte_free(finfo->g.sg);
> +                                rte_free(finfo);
> +                                otx_ep_err("failed\n");
> +                                goto xmit_fail;
> +                        }
> +                }
> +                /* ih vars */
> +                iqcmd.ih.s.tlen = pkt_len + iqcmd.ih.s.fsz;
> +                iqcmd.ih.s.gather = gather;
> +                iqcmd.ih.s.gsz = gsz;
> +                /* PKI_IH3 vars */
> +                /* irh vars */
> +                /* irh.rlenssz = ; */

Ditto.

> +
> +                iqcmd.dptr = dptr;
> +                /* Swap FSZ(front data) here, to avoid swapping on
> +                 * OCTEON TX side rprt is not used so not swapping
> +                 */
> +                /* otx_ep_swap_8B_data(&iqcmd.rptr, 1); */

ditto

<...>

> +};
> +#define OTX_EP_64B_INSTR_SIZE (sizeof(otx_ep_instr_64B))
> +

Is this macro used at all?
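(If it is meant to be kept, the usual reason for such a define is a build-time layout check. A sketch only, assuming the instruction is expected to be exactly 64 bytes and that otx_ep_instr_64B is a struct tag rather than a typedef:

        /* hypothetical use, not part of the patch */
        RTE_BUILD_BUG_ON(sizeof(struct otx_ep_instr_64B) != 64);

As written the define would also need the 'struct' keyword unless a typedef exists elsewhere. If nothing references it, better to drop it.)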