From: Ophir Munk
To: Adrien Mazarguil
Cc: dev@dpdk.org, Thomas Monjalon, Olga Shern, Matan Azrad
Date: Mon, 30 Oct 2017 08:15:37 +0000
Subject: Re: [dpdk-dev] [PATCH v2 7/7] net/mlx4: separate Tx for multi-segments
In-Reply-To: <20171025165029.GK26782@6wind.com>

Hi,

Please see inline.
On Wednesday, October 25, 2017 7:50 PM Adrien Mazarguil wrote:
>
> Hi Ophir,
>
> On Mon, Oct 23, 2017 at 02:22:00PM +0000, Ophir Munk wrote:
> > This commit optimizes handling of one segment and calls a dedicated
> > function for handling multi segments
> >
> > Signed-off-by: Ophir Munk
>
> While it indeed moves the code to a separate function I'm not sure by how
> much it improves performance.
>
> Is it noticeably better, can you provide a short performance summary with
> and without this patch? Is that the case for both single and multi-segment
> scenarios, or was this improvement at the cost of a degradation in the latter
> case?
>

In v3 this commit is squashed into the previous commit "net/mlx4: improve performance of one Tx segment", as the two commits form one logical unit.

On Matan's setup the performance improvement with these two commits occurs for both single and multi-segment scenarios. On my setup the improvement occurs for single segment only.

With patch versus without patch:
64           +0.2 mpps
64,64        -0.2 mpps
64,64,64,64  -0.07 mpps

> If it splits a large function in two smaller ones for readability and no
> performance validation was done on this specific patch alone, please do not
> label it as a performance improvement. I'm fine with readability
> improvements when properly identified as such.
>

The performance improvement indication was removed from the commit message.

> A few additional comments below.
>
> > ---
> >  drivers/net/mlx4/mlx4_rxtx.c | 284 +++++++++++++++++++++++--------------------
> >  1 file changed, 154 insertions(+), 130 deletions(-)
> >
> > diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
> > index 3236552..9596859 100644
> > --- a/drivers/net/mlx4/mlx4_rxtx.c
> > +++ b/drivers/net/mlx4/mlx4_rxtx.c
> > @@ -62,6 +62,9 @@
> >  #include "mlx4_rxtx.h"
> >  #include "mlx4_utils.h"
> >
> > +#define WQE_ONE_DATA_SEG_SIZE \
> > +	(sizeof(struct mlx4_wqe_ctrl_seg) + sizeof(struct mlx4_wqe_data_seg))
> > +
> >  /**
> >   * Pointer-value pair structure used in tx_post_send for saving the first
> >   * DWORD (32 byte) of a TXBB.
> > @@ -140,22 +143,19 @@ mlx4_txq_stamp_freed_wqe(struct mlx4_sq *sq, uint16_t index, uint8_t owner)
> >   * @return
> >   *   0 on success, -1 on failure.
> >   */
> > -static int
> > -mlx4_txq_complete(struct txq *txq)
> > +static inline int __attribute__((always_inline))
>
> Should be static only, leave the rest to the compiler. This function is large
> enough that it shouldn't make much of a difference anyway (unless proved
> otherwise).
>

Done. __attribute__((always_inline)) was removed.

> > +mlx4_txq_complete(struct txq *txq, const unsigned int elts_n,
> > +		  struct mlx4_sq *sq)
> >  {
> >  	unsigned int elts_comp = txq->elts_comp;
> >  	unsigned int elts_tail = txq->elts_tail;
> > -	const unsigned int elts_n = txq->elts_n;
> >  	struct mlx4_cq *cq = &txq->mcq;
> > -	struct mlx4_sq *sq = &txq->msq;
> >  	struct mlx4_cqe *cqe;
> >  	uint32_t cons_index = cq->cons_index;
> >  	uint16_t new_index;
> >  	uint16_t nr_txbbs = 0;
> >  	int pkts = 0;
> >
> > -	if (unlikely(elts_comp == 0))
> > -		return 0;
> >  	/*
> >  	 * Traverse over all CQ entries reported and handle each WQ entry
> >  	 * reported by them.
> > @@ -266,6 +266,120 @@ rte_be32_t mlx4_txq_add_mr(struct txq *txq, struct rte_mempool *mp, uint32_t i)
> >  	return txq->mp2mr[i].lkey;
> >  }
> >
> > +static int handle_multi_segs(struct rte_mbuf *buf,
> > +			struct txq *txq,
> > +			struct mlx4_wqe_ctrl_seg **pctrl) {
> > +	int wqe_real_size;
> > +	int nr_txbbs;
> > +	struct pv *pv = (struct pv *)txq->bounce_buf;
> > +	struct mlx4_sq *sq = &txq->msq;
> > +	uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
> > +	struct mlx4_wqe_ctrl_seg *ctrl;
> > +	struct mlx4_wqe_data_seg *dseg;
> > +	uintptr_t addr;
> > +	uint32_t byte_count;
> > +	int pv_counter = 0;
> > +
> > +	/* Calculate the needed work queue entry size for this packet. */
> > +	wqe_real_size = sizeof(struct mlx4_wqe_ctrl_seg) +
> > +		buf->nb_segs * sizeof(struct mlx4_wqe_data_seg);
> > +	nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
> > +	/*
> > +	 * Check that there is room for this WQE in the send queue and that
> > +	 * the WQE size is legal.
> > +	 */
> > +	if (((sq->head - sq->tail) + nr_txbbs +
> > +	    sq->headroom_txbbs) >= sq->txbb_cnt ||
> > +	    nr_txbbs > MLX4_MAX_WQE_TXBBS) {
> > +		return -1;
> > +	}
> > +
> > +	/* Get the control and data entries of the WQE. */
> > +	ctrl = (struct mlx4_wqe_ctrl_seg *)mlx4_get_send_wqe(sq, head_idx);
> > +	dseg = (struct mlx4_wqe_data_seg *)((uintptr_t)ctrl +
> > +		sizeof(struct mlx4_wqe_ctrl_seg));
> > +	*pctrl = ctrl;
> > +	/* Fill the data segments with buffer information. */
> > +	struct rte_mbuf *sbuf;
> > +
> > +	for (sbuf = buf; sbuf != NULL; sbuf = sbuf->next, dseg++) {
> > +		addr = rte_pktmbuf_mtod(sbuf, uintptr_t);
> > +		rte_prefetch0((volatile void *)addr);
> > +		/* Handle WQE wraparound. */
> > +		if (unlikely(dseg >= (struct mlx4_wqe_data_seg *)sq->eob))
> > +			dseg = (struct mlx4_wqe_data_seg *)sq->buf;
> > +		dseg->addr = rte_cpu_to_be_64(addr);
> > +		/* Memory region key (big endian) for this memory pool. */
> > +		dseg->lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
> > +#ifndef NDEBUG
> > +		/* Calculate the needed work queue entry size for this packet */
> > +		if (unlikely(dseg->lkey == rte_cpu_to_be_32((uint32_t)-1))) {
> > +			/* MR does not exist. */
> > +			DEBUG("%p: unable to get MP <-> MR association",
> > +			      (void *)txq);
> > +			/*
> > +			 * Restamp entry in case of failure.
> > +			 * Make sure that size is written correctly
> > +			 * Note that we give ownership to the SW, not the HW.
> > +			 */
> > +			wqe_real_size = sizeof(struct mlx4_wqe_ctrl_seg) +
> > +				buf->nb_segs * sizeof(struct mlx4_wqe_data_seg);
> > +			ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
> > +			mlx4_txq_stamp_freed_wqe(sq, head_idx,
> > +				(sq->head & sq->txbb_cnt) ? 0 : 1);
> > +			return -1;
> > +		}
> > +#endif /* NDEBUG */
> > +		if (likely(sbuf->data_len)) {
> > +			byte_count = rte_cpu_to_be_32(sbuf->data_len);
> > +		} else {
> > +			/*
> > +			 * Zero length segment is treated as inline segment
> > +			 * with zero data.
> > +			 */
> > +			byte_count = RTE_BE32(0x80000000);
> > +		}
> > +		/*
> > +		 * If the data segment is not at the beginning of a
> > +		 * Tx basic block (TXBB) then write the byte count,
> > +		 * else postpone the writing to just before updating the
> > +		 * control segment.
> > +		 */
> > +		if ((uintptr_t)dseg & (uintptr_t)(MLX4_TXBB_SIZE - 1)) {
> > +			/*
> > +			 * Need a barrier here before writing the byte_count
> > +			 * fields to make sure that all the data is visible
> > +			 * before the byte_count field is set.
> > +			 * Otherwise, if the segment begins a new cacheline,
> > +			 * the HCA prefetcher could grab the 64-byte chunk and
> > +			 * get a valid (!= 0xffffffff) byte count but stale
> > +			 * data, and end up sending the wrong data.
> > +			 */
> > +			rte_io_wmb();
> > +			dseg->byte_count = byte_count;
> > +		} else {
> > +			/*
> > +			 * This data segment starts at the beginning of a new
> > +			 * TXBB, so we need to postpone its byte_count writing
> > +			 * for later.
> > +			 */
> > +			pv[pv_counter].dseg = dseg;
> > +			pv[pv_counter++].val = byte_count;
> > +		}
> > +	}
> > +	/* Write the first DWORD of each TXBB save earlier. */
> > +	if (pv_counter) {
> > +		/* Need a barrier here before writing the byte_count. */
> > +		rte_io_wmb();
> > +		for (--pv_counter; pv_counter >= 0; pv_counter--)
> > +			pv[pv_counter].dseg->byte_count = pv[pv_counter].val;
> > +	}
> > +	/* Fill the control parameters for this packet. */
> > +	ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
> > +
> > +	return nr_txbbs;
> > +}
> >  /**
> >   * DPDK callback for Tx.
> >   *
> > @@ -288,10 +402,11 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
> >  	unsigned int i;
> >  	unsigned int max;
> >  	struct mlx4_sq *sq = &txq->msq;
> > -	struct pv *pv = (struct pv *)txq->bounce_buf;
> > +	int nr_txbbs;
> >
> >  	assert(txq->elts_comp_cd != 0);
> > -	mlx4_txq_complete(txq);
> > +	if (likely(txq->elts_comp != 0))
> > +		mlx4_txq_complete(txq, elts_n, sq);
> >  	max = (elts_n - (elts_head - txq->elts_tail));
> >  	if (max > elts_n)
> >  		max -= elts_n;
> > @@ -316,10 +431,6 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
> >  	} srcrb;
> >  	uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
> >  	uintptr_t addr;
> > -	uint32_t byte_count;
> > -	int wqe_real_size;
> > -	int nr_txbbs;
> > -	int pv_counter = 0;
> >
> >  	/* Clean up old buffer. */
> >  	if (likely(elt->buf != NULL)) {
> > @@ -338,31 +449,22 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
> >  		} while (tmp != NULL);
> >  	}
> >  	RTE_MBUF_PREFETCH_TO_FREE(elt_next->buf);
> > -
> > -	/*
> > -	 * Calculate the needed work queue entry size
> > -	 * for this packet.
> > -	 */
> > -	wqe_real_size = sizeof(struct mlx4_wqe_ctrl_seg) +
> > -		buf->nb_segs * sizeof(struct mlx4_wqe_data_seg);
> > -	nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
> > -	/*
> > -	 * Check that there is room for this WQE in the send
> > -	 * queue and that the WQE size is legal.
> > -	 */
> > -	if (((sq->head - sq->tail) + nr_txbbs +
> > -	    sq->headroom_txbbs) >= sq->txbb_cnt ||
> > -	    nr_txbbs > MLX4_MAX_WQE_TXBBS) {
> > -		elt->buf = NULL;
> > -		break;
> > -	}
> > -	/* Get the control and data entries of the WQE. */
> > -	ctrl = (struct mlx4_wqe_ctrl_seg *)
> > -		mlx4_get_send_wqe(sq, head_idx);
> > -	dseg = (struct mlx4_wqe_data_seg *)((uintptr_t)ctrl +
> > -		sizeof(struct mlx4_wqe_ctrl_seg));
> > -	/* Fill the data segments with buffer information. */
> >  	if (likely(buf->nb_segs == 1)) {
> > +		/*
> > +		 * Check that there is room for this WQE in the send
> > +		 * queue and that the WQE size is legal
> > +		 */
> > +		if (((sq->head - sq->tail) + 1 + sq->headroom_txbbs)
> > +		    >= sq->txbb_cnt ||
> > +		    1 > MLX4_MAX_WQE_TXBBS) {
> > +			elt->buf = NULL;
> > +			break;
> > +		}
> > +		/* Get the control and data entries of the WQE. */
> > +		ctrl = (struct mlx4_wqe_ctrl_seg *)
> > +			mlx4_get_send_wqe(sq, head_idx);
> > +		dseg = (struct mlx4_wqe_data_seg *)((uintptr_t)ctrl +
> > +			sizeof(struct mlx4_wqe_ctrl_seg));
> >  		addr = rte_pktmbuf_mtod(buf, uintptr_t);
> >  		rte_prefetch0((volatile void *)addr);
> >  		/* Handle WQE wraparound. */
> > @@ -371,120 +473,42 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
> >  			dseg = (struct mlx4_wqe_data_seg *)sq->buf;
> >  		dseg->addr = rte_cpu_to_be_64(addr);
> >  		/* Memory region key (big endian). */
> > -		dseg->lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
> > -		#ifndef NDEBUG
> > +		dseg->lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(buf));
> > +#ifndef NDEBUG
> >  		if (unlikely(dseg->lkey ==
> >  			rte_cpu_to_be_32((uint32_t)-1))) {
> >  			/* MR does not exist. */
> >  			DEBUG("%p: unable to get MP <-> MR association",
> > -				(void *)txq);
> > +			      (void *)txq);
> >  			/*
> >  			 * Restamp entry in case of failure.
> >  			 * Make sure that size is written correctly
> >  			 * Note that we give ownership to the SW,
> >  			 * not the HW.
> >  			 */
> > -			ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
> > +			ctrl->fence_size = (WQE_ONE_DATA_SEG_SIZE >> 4)
> > +				& 0x3f;
> >  			mlx4_txq_stamp_freed_wqe(sq, head_idx,
> > -				(sq->head & sq->txbb_cnt) ? 0 : 1);
> > +				(sq->head & sq->txbb_cnt) ? 0 : 1);
> >  			elt->buf = NULL;
> >  			break;
> >  		}
> > -		#endif /* NDEBUG */
> > +#endif /* NDEBUG */
> >  		/* Need a barrier here before writing the byte_count. */
> >  		rte_io_wmb();
> >  		dseg->byte_count = rte_cpu_to_be_32(buf->data_len);
> > +
> > +		/* Fill the control parameters for this packet. */
> > +		ctrl->fence_size = (WQE_ONE_DATA_SEG_SIZE >> 4) & 0x3f;
> > +		nr_txbbs = 1;
> >  	} else {
> > -		/* Fill the data segments with buffer information. */
> > -		struct rte_mbuf *sbuf;
> > -
> > -		for (sbuf = buf;
> > -		     sbuf != NULL;
> > -		     sbuf = sbuf->next, dseg++) {
> > -			addr = rte_pktmbuf_mtod(sbuf, uintptr_t);
> > -			rte_prefetch0((volatile void *)addr);
> > -			/* Handle WQE wraparound. */
> > -			if (unlikely(dseg >=
> > -			    (struct mlx4_wqe_data_seg *)sq->eob))
> > -				dseg = (struct mlx4_wqe_data_seg *)
> > -					sq->buf;
> > -			dseg->addr = rte_cpu_to_be_64(addr);
> > -			/* Memory region key (big endian). */
> > -			dseg->lkey = mlx4_txq_mp2mr(txq,
> > -				mlx4_txq_mb2mp(sbuf));
> > -			#ifndef NDEBUG
> > -			if (unlikely(dseg->lkey ==
> > -			    rte_cpu_to_be_32((uint32_t)-1))) {
> > -				/* MR does not exist. */
> > -				DEBUG("%p: unable to get MP <-> MR association",
> > -					(void *)txq);
> > -				/*
> > -				 * Restamp entry in case of failure.
> > -				 * Make sure that size is written
> > -				 * correctly, note that we give
> > -				 * ownership to the SW, not the HW.
> > -				 */
> > -				ctrl->fence_size =
> > -					(wqe_real_size >> 4) & 0x3f;
> > -				mlx4_txq_stamp_freed_wqe(sq, head_idx,
> > -					(sq->head & sq->txbb_cnt) ? 0 : 1);
> > -				elt->buf = NULL;
> > -				break;
> > -			}
> > -			#endif /* NDEBUG */
> > -			if (likely(sbuf->data_len)) {
> > -				byte_count =
> > -					rte_cpu_to_be_32(sbuf->data_len);
> > -			} else {
> > -				/*
> > -				 * Zero length segment is treated as
> > -				 * inline segment with zero data.
> > -				 */
> > -				byte_count = RTE_BE32(0x80000000);
> > -			}
> > -			/*
> > -			 * If the data segment is not at the beginning
> > -			 * of a Tx basic block (TXBB) then write the
> > -			 * byte count, else postpone the writing to
> > -			 * just before updating the control segment.
> > -			 */
> > -			if ((uintptr_t)dseg &
> > -			    (uintptr_t)(MLX4_TXBB_SIZE - 1)) {
> > -				/*
> > -				 * Need a barrier here before writing
> > -				 * the byte_count fields to make sure
> > -				 * that all the data is visible before
> > -				 * the byte_count field is set.
> > -				 * Otherwise, if the segment begins a
> > -				 * new cacheline, the HCA prefetcher
> > -				 * could grab the 64-byte chunk and get
> > -				 * a valid (!= 0xffffffff) byte count
> > -				 * but stale data, and end up sending
> > -				 * the wrong data.
> > -				 */
> > -				rte_io_wmb();
> > -				dseg->byte_count = byte_count;
> > -			} else {
> > -				/*
> > -				 * This data segment starts at the
> > -				 * beginning of a new TXBB, so we
> > -				 * need to postpone its byte_count
> > -				 * writing for later.
> > -				 */
> > -				pv[pv_counter].dseg = dseg;
> > -				pv[pv_counter++].val = byte_count;
> > -			}
> > +		nr_txbbs = handle_multi_segs(buf, txq, &ctrl);
>
> Having all this part non-inline could degrade multi-segment performance, is
> that OK?

Since the function is static, the compiler is free to inline it. Performance is not degraded in this case.

> > +		if (nr_txbbs < 0) {
> > +			elt->buf = NULL;
> > +			break;
> >  		}
> > -		/* Write the first DWORD of each TXBB save earlier. */
> > -		if (pv_counter) {
> > -			/* Need a barrier before writing the byte_count. */
> > -			rte_io_wmb();
> > -			for (--pv_counter; pv_counter >= 0; pv_counter--)
> > -				pv[pv_counter].dseg->byte_count =
> > -					pv[pv_counter].val;
> > -		}
> > -		/* Fill the control parameters for this packet. */
> > -		ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
> > +
> >  	}
> >  	/*
> >  	 * For raw Ethernet, the SOLICIT flag is used to indicate
> >  	 * that no ICRC should be calculated.
> > --
> > 2.7.4
> >
>
> --
> Adrien Mazarguil
> 6WIND