From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matan Azrad <matan@mellanox.com>
To: Adrien Mazarguil
CC: "dev@dpdk.org"
Thread-Topic: [PATCH 5/8] net/mlx4: merge Tx queue rings management
Date: Wed, 6 Dec 2017 11:43:25 +0000
References: <1511871570-16826-1-git-send-email-matan@mellanox.com>
 <1511871570-16826-6-git-send-email-matan@mellanox.com>
 <20171206105850.GW4062@6wind.com>
In-Reply-To: <20171206105850.GW4062@6wind.com>
Subject: Re: [dpdk-dev] [PATCH 5/8] net/mlx4: merge Tx queue rings management
List-Id: DPDK patches and discussions
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0

Hi Adrien

> -----Original Message-----
> From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com]
> Sent: Wednesday, December 6, 2017 12:59 PM
> To: Matan Azrad
> Cc: dev@dpdk.org
> Subject: Re: [PATCH 5/8] net/mlx4: merge Tx queue rings management
>
> On Tue, Nov 28, 2017 at 12:19:27PM +0000, Matan Azrad wrote:
> > The Tx queue send ring was managed
> > by Tx block head, tail, count and
> > mask management variables which were used for managing the send queue
> > remain space and next places of empty or completted work queue entries.
>
> completted => completed
>
> >
> > This method suffered from an actual addresses recalculation per
> > packet, an unnecessary Tx block based calculations and an expensive
> > dual management of Tx rings.
> >
> > Move send queue ring calculation to be based on actual addresses while
> > managing it by descriptors ring indexes.
> >
> > Add new work queue entry pointer to the descriptor element to hold the
> > appropriate entry in the send queue.
> >
> > Signed-off-by: Matan Azrad
> > ---
> >  drivers/net/mlx4/mlx4_prm.h  |  20 ++--
> >  drivers/net/mlx4/mlx4_rxtx.c | 241 +++++++++++++++++++------------------------
> >  drivers/net/mlx4/mlx4_rxtx.h |   1 +
> >  drivers/net/mlx4/mlx4_txq.c  |  23 +++--
> >  4 files changed, 126 insertions(+), 159 deletions(-)
> >
> > diff --git a/drivers/net/mlx4/mlx4_prm.h b/drivers/net/mlx4/mlx4_prm.h
> > index fcc7c12..2ca303a 100644
> > --- a/drivers/net/mlx4/mlx4_prm.h
> > +++ b/drivers/net/mlx4/mlx4_prm.h
> > @@ -54,22 +54,18 @@
> >
> >  /* Typical TSO descriptor with 16 gather entries is 352 bytes. */
> >  #define MLX4_MAX_WQE_SIZE 512
> > -#define MLX4_MAX_WQE_TXBBS (MLX4_MAX_WQE_SIZE / MLX4_TXBB_SIZE)
> > +#define MLX4_SEG_SHIFT 4
> >
> >  /* Send queue stamping/invalidating information. */
> >  #define MLX4_SQ_STAMP_STRIDE 64
> >  #define MLX4_SQ_STAMP_DWORDS (MLX4_SQ_STAMP_STRIDE / 4)
> > -#define MLX4_SQ_STAMP_SHIFT 31
> > +#define MLX4_SQ_OWNER_BIT 31
> >  #define MLX4_SQ_STAMP_VAL 0x7fffffff
> >
> >  /* Work queue element (WQE) flags. */
> > -#define MLX4_BIT_WQE_OWN 0x80000000
> >  #define MLX4_WQE_CTRL_IIP_HDR_CSUM (1 << 28)
> >  #define MLX4_WQE_CTRL_IL4_HDR_CSUM (1 << 27)
> >
> > -#define MLX4_SIZE_TO_TXBBS(size) \
> > -	(RTE_ALIGN((size), (MLX4_TXBB_SIZE)) >> (MLX4_TXBB_SHIFT))
> > -
> >  /* CQE checksum flags. */
> >  enum {
> >  	MLX4_CQE_L2_TUNNEL_IPV4 = (int)(1u << 25),
> > @@ -98,17 +94,15 @@ enum {
> >  struct mlx4_sq {
> >  	volatile uint8_t *buf; /**< SQ buffer. */
> >  	volatile uint8_t *eob; /**< End of SQ buffer */
> > -	uint32_t head; /**< SQ head counter in units of TXBBS. */
> > -	uint32_t tail; /**< SQ tail counter in units of TXBBS. */
> > -	uint32_t txbb_cnt; /**< Num of WQEBB in the Q (should be ^2). */
> > -	uint32_t txbb_cnt_mask; /**< txbbs_cnt mask (txbb_cnt is ^2). */
> > -	uint32_t headroom_txbbs; /**< Num of txbbs that should be kept free. */
> > +	uint32_t size; /**< SQ size includes headroom. */
> > +	int32_t remain_size; /**< Remain WQE size in SQ. */
>
> Remain => Remaining?
>
OK

> By "size", do you mean "room" as there could be several WQEs in there?
>
Size in bytes.
remaining size | remaining space | remaining room | remaining bytes. Which one do you prefer?

> Note before reviewing the rest of this patch, the fact it's a signed integer
> bothers me; it's probably a mistake.

There are places in the code where this variable can be used for signed calculations.

> You should standardize on unsigned values everywhere.

Why? Each field should have the most appropriate type.

>
> > +	/**< Default owner opcode with HW valid owner bit. */
>
> The "/**<" syntax requires the comment to come after the documented
> field. You should either move this line below "owner_opcode" or use "/**".
>
OK
> > +	uint32_t owner_opcode;
> > +	uint32_t stamp; /**< Stamp value with an invalid HW owner bit. */
> >  	volatile uint32_t *db; /**< Pointer to the doorbell. */
> >  	uint32_t doorbell_qpn; /**< qp number to write to the doorbell. */
> >  };
> >
> > -#define mlx4_get_send_wqe(sq, n) ((sq)->buf + ((n) * (MLX4_TXBB_SIZE)))
> > -
> >  /* Completion queue events, numbers and masks.
> >   */
> >  #define MLX4_CQ_DB_GEQ_N_MASK 0x3
> >  #define MLX4_CQ_DOORBELL 0x20
> >
> > diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
> > index b9cb2fc..0a8ef93 100644
> > --- a/drivers/net/mlx4/mlx4_rxtx.c
> > +++ b/drivers/net/mlx4/mlx4_rxtx.c
> > @@ -61,9 +61,6 @@
> >  #include "mlx4_rxtx.h"
> >  #include "mlx4_utils.h"
> >
> > -#define WQE_ONE_DATA_SEG_SIZE \
> > -	(sizeof(struct mlx4_wqe_ctrl_seg) + sizeof(struct mlx4_wqe_data_seg))
> > -
> >  /**
> >   * Pointer-value pair structure used in tx_post_send for saving the first
> >   * DWORD (32 byte) of a TXBB.
> > @@ -268,52 +265,48 @@ struct pv {
> >   *
> >   * @param sq
> >   *   Pointer to the SQ structure.
> > - * @param index
> > - *   Index of the freed WQE.
> > - * @param num_txbbs
> > - *   Number of blocks to stamp.
> > - *   If < 0 the routine will use the size written in the WQ entry.
> > - * @param owner
> > - *   The value of the WQE owner bit to use in the stamp.
> > + * @param wqe
> > + *   Pointer of WQE to stamp.
>
> Looks like it's not just a simple pointer to the WQE to stamp seeing this
> function also stores the address of the next WQE in the provided buffer
> (uint32_t **wqe). It's not documented as such.
>
Yes, you're right, I will change it; it is going to be changed in the next series patch :)
> >   *
> >   * @return
> > - *   The number of Tx basic blocs (TXBB) the WQE contained.
> > + *   WQE size.
> >   */
> > -static int
> > -mlx4_txq_stamp_freed_wqe(struct mlx4_sq *sq, uint16_t index, uint8_t owner)
> > +static uint32_t
> > +mlx4_txq_stamp_freed_wqe(struct mlx4_sq *sq, volatile uint32_t **wqe)
> >  {
> > -	uint32_t stamp = rte_cpu_to_be_32(MLX4_SQ_STAMP_VAL |
> > -					  (!!owner << MLX4_SQ_STAMP_SHIFT));
> > -	volatile uint8_t *wqe = mlx4_get_send_wqe(sq,
> > -						  (index & sq->txbb_cnt_mask));
> > -	volatile uint32_t *ptr = (volatile uint32_t *)wqe;
> > -	int i;
> > -	int txbbs_size;
> > -	int num_txbbs;
> > -
> > +	uint32_t stamp = sq->stamp;
> > +	volatile uint32_t *next_txbb = *wqe;
> >  	/* Extract the size from the control segment of the WQE. */
> > -	num_txbbs = MLX4_SIZE_TO_TXBBS((((volatile struct mlx4_wqe_ctrl_seg *)
> > -					 wqe)->fence_size & 0x3f) << 4);
> > -	txbbs_size = num_txbbs * MLX4_TXBB_SIZE;
> > +	uint32_t size = RTE_ALIGN((uint32_t)
> > +				  ((((volatile struct mlx4_wqe_ctrl_seg *)
> > +				     next_txbb)->fence_size & 0x3f) << 4),
> > +				  MLX4_TXBB_SIZE);
> > +	uint32_t size_cd = size;
> > +
> >  	/* Optimize the common case when there is no wrap-around. */
> > -	if (wqe + txbbs_size <= sq->eob) {
> > +	if ((uintptr_t)next_txbb + size < (uintptr_t)sq->eob) {
> >  		/* Stamp the freed descriptor. */
> > -		for (i = 0; i < txbbs_size; i += MLX4_SQ_STAMP_STRIDE) {
> > -			*ptr = stamp;
> > -			ptr += MLX4_SQ_STAMP_DWORDS;
> > -		}
> > +		do {
> > +			*next_txbb = stamp;
> > +			next_txbb += MLX4_SQ_STAMP_DWORDS;
> > +			size_cd -= MLX4_TXBB_SIZE;
> > +		} while (size_cd);
> >  	} else {
> >  		/* Stamp the freed descriptor. */
> > -		for (i = 0; i < txbbs_size; i += MLX4_SQ_STAMP_STRIDE) {
> > -			*ptr = stamp;
> > -			ptr += MLX4_SQ_STAMP_DWORDS;
> > -			if ((volatile uint8_t *)ptr >= sq->eob) {
> > -				ptr = (volatile uint32_t *)sq->buf;
> > -				stamp ^= RTE_BE32(0x80000000);
> > +		do {
> > +			*next_txbb = stamp;
> > +			next_txbb += MLX4_SQ_STAMP_DWORDS;
> > +			if ((volatile uint8_t *)next_txbb >= sq->eob) {
> > +				next_txbb = (volatile uint32_t *)sq->buf;
> > +				/* Flip invalid stamping ownership. */
> > +				stamp ^= RTE_BE32(0x1 << MLX4_SQ_OWNER_BIT);
> > +				sq->stamp = stamp;
> >  			}
> > -		}
> > +			size_cd -= MLX4_TXBB_SIZE;
> > +		} while (size_cd);
> >  	}
> > -	return num_txbbs;
> > +	*wqe = next_txbb;
> > +	return size;
> >  }
> >
> >  /**
> > @@ -326,24 +319,22 @@ struct pv {
> >   *
> >   * @param txq
> >   *   Pointer to Tx queue structure.
> > - *
> > - * @return
> > - *   0 on success, -1 on failure.
> >   */
> > -static int
> > +static void
> >  mlx4_txq_complete(struct txq *txq, const unsigned int elts_n,
> >  		  struct mlx4_sq *sq)
> >  {
> > -	unsigned int elts_comp = txq->elts_comp;
> >  	unsigned int elts_tail = txq->elts_tail;
> > -	unsigned int sq_tail = sq->tail;
> >  	struct mlx4_cq *cq = &txq->mcq;
> >  	volatile struct mlx4_cqe *cqe;
> >  	uint32_t cons_index = cq->cons_index;
> > -	uint16_t new_index;
> > -	uint16_t nr_txbbs = 0;
> > -	int pkts = 0;
> > -
> > +	volatile uint32_t *first_wqe;
> > +	volatile uint32_t *next_wqe = (volatile uint32_t *)
> > +			((&(*txq->elts)[elts_tail])->wqe);
> > +	volatile uint32_t *last_wqe;
> > +	uint16_t mask = (((uintptr_t)sq->eob - (uintptr_t)sq->buf) >>
> > +			 MLX4_TXBB_SHIFT) - 1;
> > +	uint32_t pkts = 0;
> >  	/*
> >  	 * Traverse over all CQ entries reported and handle each WQ entry
> >  	 * reported by them.
> > @@ -353,11 +344,11 @@ struct pv {
> >  		if (unlikely(!!(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK) ^
> >  		    !!(cons_index & cq->cqe_cnt)))
> >  			break;
> > +#ifndef NDEBUG
> >  		/*
> >  		 * Make sure we read the CQE after we read the ownership bit.
> >  		 */
> >  		rte_io_rmb();
> > -#ifndef NDEBUG
> >  		if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
> >  			     MLX4_CQE_OPCODE_ERROR)) {
> >  			volatile struct mlx4_err_cqe *cqe_err =
> > @@ -366,41 +357,32 @@ struct pv {
> >  				" syndrome: 0x%x\n",
> >  				(void *)txq, cqe_err->vendor_err,
> >  				cqe_err->syndrome);
> > +			break;
> >  		}
> >  #endif /* NDEBUG */
> > -		/* Get WQE index reported in the CQE. */
> > -		new_index =
> > -			rte_be_to_cpu_16(cqe->wqe_index) & sq->txbb_cnt_mask;
> > +		/* Get WQE address buy index from the CQE. */
> > +		last_wqe = (volatile uint32_t *)((uintptr_t)sq->buf +
> > +			((rte_be_to_cpu_16(cqe->wqe_index) & mask) <<
> > +			 MLX4_TXBB_SHIFT));
> >  		do {
> >  			/* Free next descriptor. */
> > -			sq_tail += nr_txbbs;
> > -			nr_txbbs =
> > -				mlx4_txq_stamp_freed_wqe(sq,
> > -				     sq_tail & sq->txbb_cnt_mask,
> > -				     !!(sq_tail & sq->txbb_cnt));
> > +			first_wqe = next_wqe;
> > +			sq->remain_size +=
> > +				mlx4_txq_stamp_freed_wqe(sq, &next_wqe);
> >  			pkts++;
> > -		} while ((sq_tail & sq->txbb_cnt_mask) != new_index);
> > +		} while (first_wqe != last_wqe);
> >  		cons_index++;
> >  	} while (1);
> >  	if (unlikely(pkts == 0))
> > -		return 0;
> > -	/* Update CQ. */
> > +		return;
> > +	/* Update CQ consumer index. */
> >  	cq->cons_index = cons_index;
> > -	*cq->set_ci_db = rte_cpu_to_be_32(cq->cons_index & MLX4_CQ_DB_CI_MASK);
> > -	sq->tail = sq_tail + nr_txbbs;
> > -	/* Update the list of packets posted for transmission. */
> > -	elts_comp -= pkts;
> > -	assert(elts_comp <= txq->elts_comp);
> > -	/*
> > -	 * Assume completion status is successful as nothing can be done about
> > -	 * it anyway.
> > -	 */
> > +	*cq->set_ci_db = rte_cpu_to_be_32(cons_index & MLX4_CQ_DB_CI_MASK);
> > +	txq->elts_comp -= pkts;
> >  	elts_tail += pkts;
> >  	if (elts_tail >= elts_n)
> >  		elts_tail -= elts_n;
> >  	txq->elts_tail = elts_tail;
> > -	txq->elts_comp = elts_comp;
> > -	return 0;
> >  }
> >
> >  /**
> > @@ -421,41 +403,27 @@ struct pv {
> >  	return buf->pool;
> >  }
> >
> > -static int
> > +static volatile struct mlx4_wqe_ctrl_seg *
> >  mlx4_tx_burst_segs(struct rte_mbuf *buf, struct txq *txq,
> > -		   volatile struct mlx4_wqe_ctrl_seg **pctrl)
> > +		   volatile struct mlx4_wqe_ctrl_seg *ctrl)
>
> Can you use this opportunity to document this function?
>
Sure, new patch for this?
> >  {
> > -	int wqe_real_size;
> > -	int nr_txbbs;
> >  	struct pv *pv = (struct pv *)txq->bounce_buf;
> >  	struct mlx4_sq *sq = &txq->msq;
> > -	uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
> > -	volatile struct mlx4_wqe_ctrl_seg *ctrl;
> > -	volatile struct mlx4_wqe_data_seg *dseg;
> >  	struct rte_mbuf *sbuf = buf;
> >  	uint32_t lkey;
> >  	int pv_counter = 0;
> >  	int nb_segs = buf->nb_segs;
> > +	int32_t wqe_size;
> > +	volatile struct mlx4_wqe_data_seg *dseg =
> > +		(volatile struct mlx4_wqe_data_seg *)(ctrl + 1);
> >
> > -	/* Calculate the needed work queue entry size for this packet. */
> > -	wqe_real_size = sizeof(volatile struct mlx4_wqe_ctrl_seg) +
> > -			nb_segs * sizeof(volatile struct mlx4_wqe_data_seg);
> > -	nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
> > -	/*
> > -	 * Check that there is room for this WQE in the send queue and that
> > -	 * the WQE size is legal.
> > -	 */
> > -	if (((sq->head - sq->tail) + nr_txbbs +
> > -	     sq->headroom_txbbs) >= sq->txbb_cnt ||
> > -	    nr_txbbs > MLX4_MAX_WQE_TXBBS) {
> > -		return -1;
> > -	}
> > -	/* Get the control and data entries of the WQE.
> > -	 */
> > -	ctrl = (volatile struct mlx4_wqe_ctrl_seg *)
> > -			mlx4_get_send_wqe(sq, head_idx);
> > -	dseg = (volatile struct mlx4_wqe_data_seg *)
> > -			((uintptr_t)ctrl + sizeof(struct mlx4_wqe_ctrl_seg));
> > -	*pctrl = ctrl;
> > +	ctrl->fence_size = 1 + nb_segs;
> > +	wqe_size = RTE_ALIGN((int32_t)(ctrl->fence_size << MLX4_SEG_SHIFT),
> > +			     MLX4_TXBB_SIZE);
> > +	/* Validate WQE size and WQE space in the send queue. */
> > +	if (sq->remain_size < wqe_size ||
> > +	    wqe_size > MLX4_MAX_WQE_SIZE)
> > +		return NULL;
> >  	/*
> >  	 * Fill the data segments with buffer information.
> >  	 * First WQE TXBB head segment is always control segment,
> > @@ -469,7 +437,7 @@ struct pv {
> >  		if (unlikely(lkey == (uint32_t)-1)) {
> >  			DEBUG("%p: unable to get MP <-> MR association",
> >  			      (void *)txq);
> > -			return -1;
> > +			return NULL;
> >  		}
> >  		/* Handle WQE wraparound. */
> >  		if (dseg >=
> > @@ -501,7 +469,7 @@ struct pv {
> >  		if (unlikely(lkey == (uint32_t)-1)) {
> >  			DEBUG("%p: unable to get MP <-> MR association",
> >  			      (void *)txq);
> > -			return -1;
> > +			return NULL;
> >  		}
> >  		mlx4_fill_tx_data_seg(dseg, lkey,
> >  				      rte_pktmbuf_mtod(sbuf, uintptr_t),
> > @@ -517,7 +485,7 @@ struct pv {
> >  		if (unlikely(lkey == (uint32_t)-1)) {
> >  			DEBUG("%p: unable to get MP <-> MR association",
> >  			      (void *)txq);
> > -			return -1;
> > +			return NULL;
> >  		}
> >  		mlx4_fill_tx_data_seg(dseg, lkey,
> >  				      rte_pktmbuf_mtod(sbuf, uintptr_t),
> > @@ -533,7 +501,7 @@ struct pv {
> >  		if (unlikely(lkey == (uint32_t)-1)) {
> >  			DEBUG("%p: unable to get MP <-> MR association",
> >  			      (void *)txq);
> > -			return -1;
> > +			return NULL;
> >  		}
> >  		mlx4_fill_tx_data_seg(dseg, lkey,
> >  				      rte_pktmbuf_mtod(sbuf, uintptr_t),
> > @@ -557,9 +525,10 @@ struct pv {
> >  		for (--pv_counter; pv_counter >= 0; pv_counter--)
> >  			pv[pv_counter].dseg->byte_count = pv[pv_counter].val;
> >  	}
> > -	/* Fill the control parameters for this packet. */
> > -	ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
> > -	return nr_txbbs;
> > +	sq->remain_size -= wqe_size;
> > +	/* Align next WQE address to the next TXBB. */
> > +	return (volatile struct mlx4_wqe_ctrl_seg *)
> > +		((volatile uint8_t *)ctrl + wqe_size);
> >  }
> >
> >  /**
> > @@ -585,7 +554,8 @@ struct pv {
> >  	unsigned int i;
> >  	unsigned int max;
> >  	struct mlx4_sq *sq = &txq->msq;
> > -	int nr_txbbs;
> > +	volatile struct mlx4_wqe_ctrl_seg *ctrl;
> > +	struct txq_elt *elt;
> >
> >  	assert(txq->elts_comp_cd != 0);
> >  	if (likely(txq->elts_comp != 0))
> > @@ -599,29 +569,30 @@ struct pv {
> >  		--max;
> >  	if (max > pkts_n)
> >  		max = pkts_n;
> > +	elt = &(*txq->elts)[elts_head];
> > +	/* Each element saves its appropriate work queue. */
> > +	ctrl = elt->wqe;
> >  	for (i = 0; (i != max); ++i) {
> >  		struct rte_mbuf *buf = pkts[i];
> >  		unsigned int elts_head_next =
> >  			(((elts_head + 1) == elts_n) ? 0 : elts_head + 1);
> >  		struct txq_elt *elt_next = &(*txq->elts)[elts_head_next];
> > -		struct txq_elt *elt = &(*txq->elts)[elts_head];
> > -		uint32_t owner_opcode = MLX4_OPCODE_SEND;
> > -		volatile struct mlx4_wqe_ctrl_seg *ctrl;
> > -		volatile struct mlx4_wqe_data_seg *dseg;
> > +		uint32_t owner_opcode = sq->owner_opcode;
> > +		volatile struct mlx4_wqe_data_seg *dseg =
> > +			(volatile struct mlx4_wqe_data_seg *)(ctrl + 1);
> > +		volatile struct mlx4_wqe_ctrl_seg *ctrl_next;
> >  		union {
> >  			uint32_t flags;
> >  			uint16_t flags16[2];
> >  		} srcrb;
> > -		uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
> >  		uint32_t lkey;
> >
> >  		/* Clean up old buffer. */
> >  		if (likely(elt->buf != NULL)) {
> >  			struct rte_mbuf *tmp = elt->buf;
> > -
>
> Empty line following variable declarations should stay.
>
> >  #ifndef NDEBUG
> >  			/* Poisoning.
> >  			 */
> > -			memset(elt, 0x66, sizeof(*elt));
> > +			elt->buf = (struct rte_mbuf *)0x6666666666666666;
>
> Note this address depends on pointer size, which may in turn trigger a
> compilation warning/error. Keep memset() on elt->buf.
>
OK
> >  #endif
> >  			/* Faster than rte_pktmbuf_free(). */
> >  			do {
> > @@ -633,23 -604,11 @@ struct pv {
> >  		}
> >  		RTE_MBUF_PREFETCH_TO_FREE(elt_next->buf);
> >  		if (buf->nb_segs == 1) {
> > -			/*
> > -			 * Check that there is room for this WQE in the send
> > -			 * queue and that the WQE size is legal
> > -			 */
> > -			if (((sq->head - sq->tail) + 1 + sq->headroom_txbbs) >=
> > -			    sq->txbb_cnt || 1 > MLX4_MAX_WQE_TXBBS) {
> > +			/* Validate WQE space in the send queue. */
> > +			if (sq->remain_size < MLX4_TXBB_SIZE) {
> >  				elt->buf = NULL;
> >  				break;
> >  			}
> > -			/* Get the control and data entries of the WQE. */
> > -			ctrl = (volatile struct mlx4_wqe_ctrl_seg *)
> > -					mlx4_get_send_wqe(sq, head_idx);
> > -			dseg = (volatile struct mlx4_wqe_data_seg *)
> > -					((uintptr_t)ctrl +
> > -					sizeof(struct mlx4_wqe_ctrl_seg));
> > -
> > -			ctrl->fence_size = (WQE_ONE_DATA_SEG_SIZE >> 4) & 0x3f;
> >  			lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(buf));
> >  			if (unlikely(lkey == (uint32_t)-1)) {
> >  				/* MR does not exist. */
> > @@ -658,23 +617,33 @@ struct pv {
> >  				elt->buf = NULL;
> >  				break;
> >  			}
> > -			mlx4_fill_tx_data_seg(dseg, lkey,
> > +			mlx4_fill_tx_data_seg(dseg++, lkey,
> >  					      rte_pktmbuf_mtod(buf, uintptr_t),
> >  					      rte_cpu_to_be_32(buf->data_len));
> > -			nr_txbbs = 1;
> > +			/* Set WQE size in 16-byte units. */
> > +			ctrl->fence_size = 0x2;
> > +			sq->remain_size -= MLX4_TXBB_SIZE;
> > +			/* Align next WQE address to the next TXBB. */
> > +			ctrl_next = ctrl + 0x4;
> >  		} else {
> > -			nr_txbbs = mlx4_tx_burst_segs(buf, txq, &ctrl);
> > -			if (nr_txbbs < 0) {
> > +			ctrl_next = mlx4_tx_burst_segs(buf, txq, ctrl);
> > +			if (!ctrl_next) {
> >  				elt->buf = NULL;
> >  				break;
> >  			}
> >  		}
> > +		/* Hold SQ ring wrap around. */
> > +		if ((volatile uint8_t *)ctrl_next >= sq->eob) {
> > +			ctrl_next = (volatile struct mlx4_wqe_ctrl_seg *)
> > +				((volatile uint8_t *)ctrl_next - sq->size);
> > +			/* Flip HW valid ownership. */
> > +			sq->owner_opcode ^= 0x1 << MLX4_SQ_OWNER_BIT;
> > +		}
> >  		/*
> >  		 * For raw Ethernet, the SOLICIT flag is used to indicate
> >  		 * that no ICRC should be calculated.
> >  		 */
> > -		txq->elts_comp_cd -= nr_txbbs;
> > -		if (unlikely(txq->elts_comp_cd <= 0)) {
> > +		if (--txq->elts_comp_cd == 0) {
> >  			txq->elts_comp_cd = txq->elts_comp_cd_init;
> >  			srcrb.flags = RTE_BE32(MLX4_WQE_CTRL_SOLICIT |
> >  					       MLX4_WQE_CTRL_CQ_UPDATE);
> > @@ -720,13 +689,13 @@ struct pv {
> >  		 * executing as soon as we do).
> >  		 */
> >  		rte_io_wmb();
> > -		ctrl->owner_opcode = rte_cpu_to_be_32(owner_opcode |
> > -					      ((sq->head & sq->txbb_cnt) ?
> > -						       MLX4_BIT_WQE_OWN : 0));
> > -		sq->head += nr_txbbs;
> > +		ctrl->owner_opcode = rte_cpu_to_be_32(owner_opcode);
> >  		elt->buf = buf;
> >  		bytes_sent += buf->pkt_len;
> >  		elts_head = elts_head_next;
> > +		elt_next->wqe = ctrl_next;
> > +		ctrl = ctrl_next;
> > +		elt = elt_next;
> >  	}
> >  	/* Take a shortcut if nothing must be sent. */
> >  	if (unlikely(i == 0))
> >
> > diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
> > index 8207232..c092afa 100644
> > --- a/drivers/net/mlx4/mlx4_rxtx.h
> > +++ b/drivers/net/mlx4/mlx4_rxtx.h
> > @@ -105,6 +105,7 @@ struct mlx4_rss {
> >  /** Tx element. */
> >  struct txq_elt {
> >  	struct rte_mbuf *buf; /**< Buffer. */
> > +	volatile struct mlx4_wqe_ctrl_seg *wqe; /**< SQ WQE. */
> >  };
> >
> >  /** Rx queue counters.
> >   */
> > diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
> > index 7882a4d..4c7b62a 100644
> > --- a/drivers/net/mlx4/mlx4_txq.c
> > +++ b/drivers/net/mlx4/mlx4_txq.c
> > @@ -84,6 +84,7 @@
> >  		assert(elt->buf != NULL);
> >  		rte_pktmbuf_free(elt->buf);
> >  		elt->buf = NULL;
> > +		elt->wqe = NULL;
> >  		if (++elts_tail == RTE_DIM(*elts))
> >  			elts_tail = 0;
> >  	}
> > @@ -163,20 +164,19 @@ struct txq_mp2mr_mbuf_check_data {
> >  	struct mlx4_cq *cq = &txq->mcq;
> >  	struct mlx4dv_qp *dqp = mlxdv->qp.out;
> >  	struct mlx4dv_cq *dcq = mlxdv->cq.out;
> > -	uint32_t sq_size = (uint32_t)dqp->rq.offset - (uint32_t)dqp->sq.offset;
> >
> > -	sq->buf = (uint8_t *)dqp->buf.buf + dqp->sq.offset;
> >  	/* Total length, including headroom and spare WQEs. */
> > -	sq->eob = sq->buf + sq_size;
> > -	sq->head = 0;
> > -	sq->tail = 0;
> > -	sq->txbb_cnt =
> > -		(dqp->sq.wqe_cnt << dqp->sq.wqe_shift) >> MLX4_TXBB_SHIFT;
> > -	sq->txbb_cnt_mask = sq->txbb_cnt - 1;
> > +	sq->size = (uint32_t)dqp->rq.offset - (uint32_t)dqp->sq.offset;
> > +	sq->buf = (uint8_t *)dqp->buf.buf + dqp->sq.offset;
> > +	sq->eob = sq->buf + sq->size;
> > +	uint32_t headroom_size = 2048 + (1 << dqp->sq.wqe_shift);
> > +	/* Continuous headroom size bytes must always stay freed. */
> > +	sq->remain_size = sq->size - headroom_size;
> > +	sq->owner_opcode = MLX4_OPCODE_SEND | (0 << MLX4_SQ_OWNER_BIT);
> > +	sq->stamp = rte_cpu_to_be_32(MLX4_SQ_STAMP_VAL |
> > +				     (0 << MLX4_SQ_OWNER_BIT));
> >  	sq->db = dqp->sdb;
> >  	sq->doorbell_qpn = dqp->doorbell_qpn;
> > -	sq->headroom_txbbs =
> > -		(2048 + (1 << dqp->sq.wqe_shift)) >> MLX4_TXBB_SHIFT;
> >  	cq->buf = dcq->buf.buf;
> >  	cq->cqe_cnt = dcq->cqe_cnt;
> >  	cq->set_ci_db = dcq->set_ci_db;
> > @@ -362,6 +362,9 @@ struct txq_mp2mr_mbuf_check_data {
> >  		goto error;
> >  	}
> >  	mlx4_txq_fill_dv_obj_info(txq, &mlxdv);
> > +	/* Save first wqe pointer in the first element. */
> > +	(&(*txq->elts)[0])->wqe =
> > +		(volatile struct mlx4_wqe_ctrl_seg *)txq->msq.buf;
> >  	/* Pre-register known mempools. */
> >  	rte_mempool_walk(mlx4_txq_mp2mr_iter, txq);
> >  	DEBUG("%p: adding Tx queue %p to list", (void *)dev, (void *)txq);
> > --
> > 1.8.3.1
> >
>
> Otherwise this patch looks OK.
>
> --
> Adrien Mazarguil
> 6WIND