From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kevin Traynor <ktraynor@redhat.com>
To: Ruifeng Wang
Cc: Gavin Hu, David Hunt, dpdk stable
Date: Tue,  3 Dec 2019 18:27:01 +0000
Message-Id: <20191203182714.17297-52-ktraynor@redhat.com>
In-Reply-To: <20191203182714.17297-1-ktraynor@redhat.com>
References:
<20191203182714.17297-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Subject: [dpdk-stable] patch 'lib/distributor: fix deadlock on aarch64' has been queued to LTS release 18.11.6
List-Id: patches for DPDK stable branches
Sender: "stable" <stable-bounces@dpdk.org>

Hi,

FYI, your patch has been queued to LTS release 18.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/19. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/01b5ea180ab7e79a3a5524f2b4bef3a3cd0c25f3

Thanks.

Kevin.

---
From 01b5ea180ab7e79a3a5524f2b4bef3a3cd0c25f3 Mon Sep 17 00:00:00 2001
From: Ruifeng Wang
Date: Tue, 15 Oct 2019 17:28:25 +0800
Subject: [PATCH] lib/distributor: fix deadlock on aarch64

[ upstream commit 52833924822490391df3dce3eec3a2ee7777acc5 ]

Distributor and worker threads rely on data structs in cache line for
synchronization. The shared data structs were not protected.
This caused a deadlock issue on platforms with weaker memory ordering,
such as aarch64. Fix this issue by adding memory barriers to ensure
synchronization among cores.
Bugzilla ID: 342
Fixes: 775003ad2f96 ("distributor: add new burst-capable library")

Signed-off-by: Ruifeng Wang
Reviewed-by: Gavin Hu
Acked-by: David Hunt
---
 lib/librte_distributor/meson.build           |  5 ++
 lib/librte_distributor/rte_distributor.c     | 68 ++++++++++++++------
 lib/librte_distributor/rte_distributor_v20.c | 59 ++++++++++++-----
 3 files changed, 97 insertions(+), 35 deletions(-)

diff --git a/lib/librte_distributor/meson.build b/lib/librte_distributor/meson.build
index dba7e3b2a..26577dbc1 100644
--- a/lib/librte_distributor/meson.build
+++ b/lib/librte_distributor/meson.build
@@ -10,2 +10,7 @@ endif
 headers = files('rte_distributor.h')
 deps += ['mbuf']
+
+# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
+if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
+	ext_deps += cc.find_library('atomic')
+endif
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index b60acdeed..62a763404 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -49,6 +49,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
 
 	retptr64 = &(buf->retptr64[0]);
-	/* Spin while handshake bits are set (scheduler clears it) */
-	while (unlikely(*retptr64 & RTE_DISTRIB_GET_BUF)) {
+	/* Spin while handshake bits are set (scheduler clears it).
+	 * Sync with worker on GET_BUF flag.
+	 */
+	while (unlikely(__atomic_load_n(retptr64, __ATOMIC_ACQUIRE)
+			& RTE_DISTRIB_GET_BUF)) {
 		rte_pause();
 		uint64_t t = rte_rdtsc()+100;
@@ -75,6 +78,8 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
 	 * Finally, set the GET_BUF to signal to distributor that cache
 	 * line is ready for processing
+	 * Sync with distributor to release retptrs
 	 */
-	*retptr64 |= RTE_DISTRIB_GET_BUF;
+	__atomic_store_n(retptr64, *retptr64 | RTE_DISTRIB_GET_BUF,
+			__ATOMIC_RELEASE);
 }
 BIND_DEFAULT_SYMBOL(rte_distributor_request_pkt, _v1705, 17.05);
@@ -98,6 +103,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
 	}
 
-	/* If bit is set, return */
-	if (buf->bufptr64[0] & RTE_DISTRIB_GET_BUF)
+	/* If bit is set, return
+	 * Sync with distributor to acquire bufptrs
+	 */
+	if (__atomic_load_n(&(buf->bufptr64[0]), __ATOMIC_ACQUIRE)
+		& RTE_DISTRIB_GET_BUF)
 		return -1;
 
@@ -114,6 +122,8 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
 	 * mbuf pointers, so toggle the bit so scheduler can start working
 	 * on the next cacheline while we're working.
+	 * Sync with distributor on GET_BUF flag. Release bufptrs.
 	 */
-	buf->bufptr64[0] |= RTE_DISTRIB_GET_BUF;
+	__atomic_store_n(&(buf->bufptr64[0]),
+		buf->bufptr64[0] | RTE_DISTRIB_GET_BUF, __ATOMIC_RELEASE);
 
 	return count;
@@ -174,4 +184,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
 	}
 
+	/* Sync with distributor to acquire retptrs */
+	__atomic_thread_fence(__ATOMIC_ACQUIRE);
 	for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
 		/* Switch off the return bit first */
@@ -182,6 +194,9 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
 			RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_RETURN_BUF;
 
-	/* set the GET_BUF but even if we got no returns */
-	buf->retptr64[0] |= RTE_DISTRIB_GET_BUF;
+	/* set the GET_BUF but even if we got no returns.
+	 * Sync with distributor on GET_BUF flag. Release retptrs.
+	 */
+	__atomic_store_n(&(buf->retptr64[0]),
+		buf->retptr64[0] | RTE_DISTRIB_GET_BUF, __ATOMIC_RELEASE);
 
 	return 0;
@@ -273,5 +288,7 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
 	unsigned int i;
 
-	if (buf->retptr64[0] & RTE_DISTRIB_GET_BUF) {
+	/* Sync on GET_BUF flag. Acquire retptrs. */
+	if (__atomic_load_n(&(buf->retptr64[0]), __ATOMIC_ACQUIRE)
+		& RTE_DISTRIB_GET_BUF) {
 		for (i = 0; i < RTE_DIST_BURST_SIZE; i++) {
 			if (buf->retptr64[i] & RTE_DISTRIB_RETURN_BUF) {
@@ -286,6 +303,8 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
 		d->returns.start = ret_start;
 		d->returns.count = ret_count;
-		/* Clear for the worker to populate with more returns */
-		buf->retptr64[0] = 0;
+		/* Clear for the worker to populate with more returns.
+		 * Sync with distributor on GET_BUF flag. Release retptrs.
+		 */
+		__atomic_store_n(&(buf->retptr64[0]), 0, __ATOMIC_RELEASE);
 	}
 	return count;
@@ -307,5 +326,7 @@ release(struct rte_distributor *d, unsigned int wkr)
 	unsigned int i;
 
-	while (!(d->bufs[wkr].bufptr64[0] & RTE_DISTRIB_GET_BUF))
+	/* Sync with worker on GET_BUF flag */
+	while (!(__atomic_load_n(&(d->bufs[wkr].bufptr64[0]), __ATOMIC_ACQUIRE)
+		& RTE_DISTRIB_GET_BUF))
 		rte_pause();
 
@@ -327,6 +348,9 @@ release(struct rte_distributor *d, unsigned int wkr)
 	d->backlog[wkr].count = 0;
 
-	/* Clear the GET bit */
-	buf->bufptr64[0] &= ~RTE_DISTRIB_GET_BUF;
+	/* Clear the GET bit.
+	 * Sync with worker on GET_BUF flag. Release bufptrs.
+	 */
+	__atomic_store_n(&(buf->bufptr64[0]),
+		buf->bufptr64[0] & ~RTE_DISTRIB_GET_BUF, __ATOMIC_RELEASE);
 	return buf->count;
 
@@ -355,5 +379,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
 		/* Flush out all non-full cache-lines to workers. */
 		for (wid = 0 ; wid < d->num_workers; wid++) {
-			if (d->bufs[wid].bufptr64[0] & RTE_DISTRIB_GET_BUF) {
+			/* Sync with worker on GET_BUF flag. */
+			if (__atomic_load_n(&(d->bufs[wid].bufptr64[0]),
+				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF) {
 				release(d, wid);
 				handle_returns(d, wid);
@@ -367,5 +393,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
 		unsigned int pkts;
 
-		if (d->bufs[wkr].bufptr64[0] & RTE_DISTRIB_GET_BUF)
+		/* Sync with worker on GET_BUF flag. */
+		if (__atomic_load_n(&(d->bufs[wkr].bufptr64[0]),
+			__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)
 			d->bufs[wkr].count = 0;
 
@@ -465,5 +493,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
 	/* Flush out all non-full cache-lines to workers. */
 	for (wid = 0 ; wid < d->num_workers; wid++)
-		if ((d->bufs[wid].bufptr64[0] & RTE_DISTRIB_GET_BUF))
+		/* Sync with worker on GET_BUF flag. */
+		if ((__atomic_load_n(&(d->bufs[wid].bufptr64[0]),
+			__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF))
 			release(d, wid);
 
@@ -574,5 +604,7 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
 	/* throw away returns, so workers can exit */
 	for (wkr = 0; wkr < d->num_workers; wkr++)
-		d->bufs[wkr].retptr64[0] = 0;
+		/* Sync with worker. Release retptrs. */
+		__atomic_store_n(&(d->bufs[wkr].retptr64[0]), 0,
+				__ATOMIC_RELEASE);
 }
 BIND_DEFAULT_SYMBOL(rte_distributor_clear_returns, _v1705, 17.05);
diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_v20.c
index 9566b53f2..35adc8ea8 100644
--- a/lib/librte_distributor/rte_distributor_v20.c
+++ b/lib/librte_distributor/rte_distributor_v20.c
@@ -34,7 +34,10 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
 	int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
 			| RTE_DISTRIB_GET_BUF;
-	while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK))
+	while (unlikely(__atomic_load_n(&buf->bufptr64, __ATOMIC_RELAXED)
+			& RTE_DISTRIB_FLAGS_MASK))
 		rte_pause();
-	buf->bufptr64 = req;
+
+	/* Sync with distributor on GET_BUF flag. */
+	__atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
 }
 VERSION_SYMBOL(rte_distributor_request_pkt, _v20, 2.0);
@@ -45,5 +48,7 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
 {
 	union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
-	if (buf->bufptr64 & RTE_DISTRIB_GET_BUF)
+	/* Sync with distributor. Acquire bufptr64. */
+	if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+		& RTE_DISTRIB_GET_BUF)
 		return NULL;
 
@@ -73,5 +78,6 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
 	uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
 			| RTE_DISTRIB_RETURN_BUF;
-	buf->bufptr64 = req;
+	/* Sync with distributor on RETURN_BUF flag. */
+	__atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
 	return 0;
 }
@@ -117,5 +123,6 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
 	d->in_flight_tags[wkr] = 0;
 	d->in_flight_bitmask &= ~(1UL << wkr);
-	d->bufs[wkr].bufptr64 = 0;
+	/* Sync with worker. Release bufptr64. */
+	__atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
 	if (unlikely(d->backlog[wkr].count != 0)) {
 		/* On return of a packet, we need to move the
@@ -161,15 +168,21 @@ process_returns(struct rte_distributor_v20 *d)
 
 	for (wkr = 0; wkr < d->num_workers; wkr++) {
-
-		const int64_t data = d->bufs[wkr].bufptr64;
 		uintptr_t oldbuf = 0;
+		/* Sync with worker. Acquire bufptr64. */
+		const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
+							__ATOMIC_ACQUIRE);
 
 		if (data & RTE_DISTRIB_GET_BUF) {
 			flushed++;
 			if (d->backlog[wkr].count)
-				d->bufs[wkr].bufptr64 =
-						backlog_pop(&d->backlog[wkr]);
+				/* Sync with worker. Release bufptr64. */
+				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+					backlog_pop(&d->backlog[wkr]),
+					__ATOMIC_RELEASE);
 			else {
-				d->bufs[wkr].bufptr64 = RTE_DISTRIB_GET_BUF;
+				/* Sync with worker on GET_BUF flag. */
+				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+					RTE_DISTRIB_GET_BUF,
+					__ATOMIC_RELEASE);
 				d->in_flight_tags[wkr] = 0;
 				d->in_flight_bitmask &= ~(1UL << wkr);
@@ -207,7 +220,8 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
 
 	while (next_idx < num_mbufs || next_mb != NULL) {
-
-		int64_t data = d->bufs[wkr].bufptr64;
 		uintptr_t oldbuf = 0;
+		/* Sync with worker. Acquire bufptr64. */
+		int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
+						__ATOMIC_ACQUIRE);
 
 		if (!next_mb) {
@@ -255,9 +269,14 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
 
 			if (d->backlog[wkr].count)
-				d->bufs[wkr].bufptr64 =
-						backlog_pop(&d->backlog[wkr]);
+				/* Sync with worker. Release bufptr64. */
+				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+						backlog_pop(&d->backlog[wkr]),
+						__ATOMIC_RELEASE);
 
 			else {
-				d->bufs[wkr].bufptr64 = next_value;
+				/* Sync with worker. Release bufptr64. */
+				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+						next_value,
+						__ATOMIC_RELEASE);
 				d->in_flight_tags[wkr] = new_tag;
 				d->in_flight_bitmask |= (1UL << wkr);
@@ -280,11 +299,17 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
 	for (wkr = 0; wkr < d->num_workers; wkr++)
 		if (d->backlog[wkr].count &&
-				(d->bufs[wkr].bufptr64 & RTE_DISTRIB_GET_BUF)) {
+				/* Sync with worker. Acquire bufptr64. */
+				(__atomic_load_n(&(d->bufs[wkr].bufptr64),
+				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
 
 			int64_t oldbuf = d->bufs[wkr].bufptr64 >>
 					RTE_DISTRIB_FLAG_BITS;
+
 			store_return(oldbuf, d, &ret_start, &ret_count);
 
-			d->bufs[wkr].bufptr64 = backlog_pop(&d->backlog[wkr]);
+			/* Sync with worker. Release bufptr64. */
+			__atomic_store_n(&(d->bufs[wkr].bufptr64),
+				backlog_pop(&d->backlog[wkr]),
+				__ATOMIC_RELEASE);
 		}
 
-- 
2.21.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2019-12-03 17:29:54.758306253 +0000
+++ 0052-lib-distributor-fix-deadlock-on-aarch64.patch	2019-12-03 17:29:51.776749412 +0000
@@ -1 +1 @@
-From 52833924822490391df3dce3eec3a2ee7777acc5 Mon Sep 17 00:00:00 2001
+From 01b5ea180ab7e79a3a5524f2b4bef3a3cd0c25f3 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 52833924822490391df3dce3eec3a2ee7777acc5 ]
+
@@ -15 +16,0 @@
-Cc: stable@dpdk.org
@@ -39 +40 @@
-index 21eb1fb0a..0a03625c9 100644
+index b60acdeed..62a763404 100644
@@ -42 +43 @@
-@@ -50,6 +50,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+@@ -49,6 +49,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
@@ -54 +55 @@
-@@ -76,6 +79,8 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+@@ -75,6 +78,8 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
@@ -64 +65 @@
-@@ -99,6 +104,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+@@ -98,6 +103,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
@@ -76 +77 @@
-@@ -115,6 +123,8 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+@@ -114,6 +122,8 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
@@ -86 +87 @@
-@@ -175,4 +185,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+@@ -174,4 +184,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
@@ -93 +94 @@
-@@ -183,6 +195,9 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+@@ -182,6 +194,9 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
@@ -105 +106 @@
-@@ -274,5 +289,7 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
+@@ -273,5 +288,7 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
@@ -114 +115 @@
-@@ -287,6 +304,8 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
+@@ -286,6 +303,8 @@ handle_returns(struct rte_distributor *d, unsigned int wkr)
@@ -125 +126 @@
-@@ -308,5 +327,7 @@ release(struct rte_distributor *d, unsigned int wkr)
+@@ -307,5 +326,7 @@ release(struct rte_distributor *d, unsigned int wkr)
@@ -134 +135 @@
-@@ -328,6 +349,9 @@ release(struct rte_distributor *d, unsigned int wkr)
+@@ -327,6 +348,9 @@ release(struct rte_distributor *d, unsigned int wkr)
@@ -146 +147 @@
-@@ -356,5 +380,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
+@@ -355,5 +379,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
@@ -155 +156 @@
-@@ -368,5 +394,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
+@@ -367,5 +393,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
@@ -164 +165 @@
-@@ -466,5 +494,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
+@@ -465,5 +493,7 @@ rte_distributor_process_v1705(struct rte_distributor *d,
@@ -173 +174 @@
-@@ -575,5 +605,7 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
+@@ -574,5 +604,7 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
@@ -183 +184 @@
-index cdc0969a8..ef6d5cb4b 100644
+index 9566b53f2..35adc8ea8 100644
@@ -186 +187 @@
-@@ -35,7 +35,10 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
+@@ -34,7 +34,10 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
@@ -199 +200 @@
-@@ -46,5 +49,7 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
+@@ -45,5 +48,7 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
@@ -208 +209 @@
-@@ -74,5 +79,6 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
+@@ -73,5 +78,6 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
@@ -216 +217 @@
-@@ -118,5 +124,6 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
+@@ -117,5 +123,6 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
@@ -224 +225 @@
-@@ -162,15 +169,21 @@ process_returns(struct rte_distributor_v20 *d)
+@@ -161,15 +168,21 @@ process_returns(struct rte_distributor_v20 *d)
@@ -251 +252 @@
-@@ -208,7 +221,8 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
+@@ -207,7 +220,8 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
@@ -262 +263 @@
-@@ -256,9 +270,14 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
+@@ -255,9 +269,14 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
@@ -280 +281 @@
-@@ -281,11 +300,17 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
+@@ -280,11 +299,17 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,