From: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
To: Tetsuya Mukawa <mukawa@igel.co.jp>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] testpmd: Fix segmentation fault when portmask is specified
Date: Fri, 27 Feb 2015 10:06:43 +0000 [thread overview]
Message-ID: <E115CCD9D858EF4F90C690B0DCB4D89727264F11@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <1425021375-5763-1-git-send-email-mukawa@igel.co.jp>
> -----Original Message-----
> From: Tetsuya Mukawa [mailto:mukawa@igel.co.jp]
> Sent: Friday, February 27, 2015 7:16 AM
> To: dev@dpdk.org
> Cc: De Lara Guarch, Pablo; Tetsuya Mukawa
> Subject: [PATCH] testpmd: Fix segmentation fault when portmask is
> specified
>
> If testpmd is invoked with the portmask option, a segmentation fault
> occurs. This patch fixes the issue.
>
> Reported-by: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
> ---
> app/test-pmd/testpmd.c | 37 +++++++++++++++++++++++--------------
> 1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 43329ed..61291be 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -579,20 +579,6 @@ init_config(void)
> socket_num);
> }
>
> - /* Configuration of Ethernet ports. */
> - ports = rte_zmalloc("testpmd: ports",
> - sizeof(struct rte_port) * RTE_MAX_ETHPORTS,
> - RTE_CACHE_LINE_SIZE);
> - if (ports == NULL) {
> - rte_exit(EXIT_FAILURE,
> - "rte_zmalloc(%d struct rte_port) failed\n",
> - RTE_MAX_ETHPORTS);
> - }
> -
> - /* enabled allocated ports */
> - for (pid = 0; pid < nb_ports; pid++)
> - ports[pid].enabled = 1;
> -
> FOREACH_PORT(pid, ports) {
> port = &ports[pid];
> rte_eth_dev_info_get(pid, &port->dev_info);
> @@ -1999,6 +1985,26 @@ init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
> return 0;
> }
>
> +static void
> +init_port(void)
> +{
> + portid_t pid;
> +
> + /* Configuration of Ethernet ports. */
> + ports = rte_zmalloc("testpmd: ports",
> + sizeof(struct rte_port) * RTE_MAX_ETHPORTS,
> + RTE_CACHE_LINE_SIZE);
> + if (ports == NULL) {
> + rte_exit(EXIT_FAILURE,
> + "rte_zmalloc(%d struct rte_port) failed\n",
> + RTE_MAX_ETHPORTS);
> + }
> +
> + /* enabled allocated ports */
> + for (pid = 0; pid < nb_ports; pid++)
> + ports[pid].enabled = 1;
> +}
> +
> int
> main(int argc, char** argv)
> {
> @@ -2013,6 +2019,9 @@ main(int argc, char** argv)
> if (nb_ports == 0)
> RTE_LOG(WARNING, EAL, "No probed ethernet devices\n");
>
> + /* allocate port structures, and init them */
> + init_port();
> +
> set_def_fwd_config();
> if (nb_lcores == 0)
> rte_panic("Empty set of forwarding logical cores - check the "
> --
> 1.9.1
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Thread overview: 4+ messages
2015-02-27 7:16 Tetsuya Mukawa
2015-02-27 10:06 ` De Lara Guarch, Pablo [this message]
2015-02-27 23:25 ` Thomas Monjalon
2015-02-28 5:41 ` Fu, JingguoX