From: "Wiles, Keith"
To: "Lipiec, Herakliusz"
Cc: dpdk-dev, rasland@mellanox.com, stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] net/tap: fix potential buffer overrun
Date: Mon, 29 Apr 2019 13:58:57 +0000
Message-ID: <7E00796C-D91F-47C4-B957-8561FC26F0E5@intel.com>
In-Reply-To: <20190425171702.933-1-herakliusz.lipiec@intel.com>

> On Apr 25, 2019, at 10:17 AM, Lipiec, Herakliusz wrote:
>
> When secondary-to-primary process synchronization occurs,
> there is no check on the number of fds, which could cause a buffer overrun.
>
> Bugzilla ID: 252
> Fixes: c9aa56edec8e ("net/tap: access primary process queues from secondary")
> Cc: rasland@mellanox.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Herakliusz Lipiec
> ---
> drivers/net/tap/rte_eth_tap.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index e9fda8cf6..4a2ef5ce7 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -2111,6 +2111,10 @@ tap_mp_attach_queues(const char *port_name, struct rte_eth_dev *dev)
> 	TAP_LOG(DEBUG, "Received IPC reply for %s", reply_param->port_name);
>
> 	/* Attach the queues from received file descriptors */
> +	if (reply_param->rxq_count + reply_param->txq_count != reply->num_fds) {
> +		TAP_LOG(ERR, "Unexpected number of fds received");
> +		return -1;
> +	}

This check is reasonable, but why is it being done on the receive side and not on the send side?
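
For illustration, the kind of send-side guard I mean would be along these lines (just a sketch against tap_mp_sync_queues() as quoted below, reusing its reply/reply_param names; not a tested patch):

	/* Hypothetical send-side guard: refuse to send a reply whose queue
	 * counts do not match the number of fds actually packed into it. */
	if (reply_param->rxq_count + reply_param->txq_count != reply.num_fds) {
		TAP_LOG(ERR, "Queue counts do not match number of fds in reply");
		return -1;
	}

With something like that on the sender, the receiver would only have to validate the num_fds bounds.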
There may also need to be a check for num_fds being zero or greater than 8, as that is the limit on the number of FDs that can be handled by the IPC.

In a different thread, for Bug-258, we need the receive side to signal that it detected an error by returning 0 for num_fds, and I have a patch for that one: https://bugs.dpdk.org/show_bug.cgi?id=258

I would have expected the sender to make sure the counts match, in which case this test is not needed. A test for num_fds being zero or > 8 is still needed if you want to detect the failure here; if you do not care, it can be skipped as long as nb_[r/t]x_queues ends up zero too.

> 	dev->data->nb_rx_queues = reply_param->rxq_count;
> 	dev->data->nb_tx_queues = reply_param->txq_count;
> 	fd_iterator = 0;
> @@ -2151,12 +2155,16 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
> 	/* Fill file descriptors for all queues */
> 	reply.num_fds = 0;
> 	reply_param->rxq_count = 0;
> +	if (dev->data->nb_rx_queues + dev->data->nb_tx_queues >
> +			RTE_MP_MAX_FD_NUM) {
> +		TAP_LOG(ERR, "Number of rx/tx queues exceeds max number of fds");
> +		return -1;
> +	}
> 	for (queue = 0; queue < dev->data->nb_rx_queues; queue++) {
> 		reply.fds[reply.num_fds++] = process_private->rxq_fds[queue];
> 		reply_param->rxq_count++;
> 	}
> 	RTE_ASSERT(reply_param->rxq_count == dev->data->nb_rx_queues);
> -	RTE_ASSERT(reply_param->txq_count == dev->data->nb_tx_queues);
> 	RTE_ASSERT(reply.num_fds <= RTE_MP_MAX_FD_NUM);
>
> 	reply_param->txq_count = 0;
> @@ -2164,7 +2172,8 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
> 		reply.fds[reply.num_fds++] = process_private->txq_fds[queue];
> 		reply_param->txq_count++;
> 	}
> -
> +	RTE_ASSERT(reply_param->txq_count == dev->data->nb_tx_queues);
> +	RTE_ASSERT(reply.num_fds <= RTE_MP_MAX_FD_NUM);
> 	/* Send reply */
> 	strlcpy(reply.name, request->name, sizeof(reply.name));
> 	strlcpy(reply_param->port_name, request_param->port_name,
> --
> 2.17.2
>

Regards,
Keith
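
P.S. For completeness, the receive-side sanity test described above would be something like this in tap_mp_attach_queues() (a sketch only; the names follow the patch, and RTE_MP_MAX_FD_NUM is the 8-fd IPC limit):

	/* Sketch of the receive-side bounds check: reject an empty reply
	 * (the Bug-258 error indicator) or more fds than the IPC allows. */
	if (reply->num_fds == 0 || reply->num_fds > RTE_MP_MAX_FD_NUM) {
		TAP_LOG(ERR, "Invalid number of fds received: %d", reply->num_fds);
		return -1;
	}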