DPDK patches and discussions
* Re: [dpdk-dev] dev Digest, Vol 83, Issue 176
       [not found] <mailman.373.1457882255.1284.dev@dpdk.org>
@ 2016-03-13 16:43 ` wenhb
  0 siblings, 0 replies; only message in thread
From: wenhb @ 2016-03-13 16:43 UTC (permalink / raw)
  To: dev

After-sales service should be placed within the implementation team.

Sales is generally split into two parts:
1) sales representatives
2) pre-sales engineers




wenhb@techsure.com.cn

From: dev-request
Date: 2016-03-13 23:17
To: dev
Subject: dev Digest, Vol 83, Issue 176
Send dev mailing list submissions to
dev@dpdk.org

To subscribe or unsubscribe via the World Wide Web, visit
http://dpdk.org/ml/listinfo/dev
or, via email, send a message with subject or body 'help' to
dev-request@dpdk.org

You can reach the person managing the list at
dev-owner@dpdk.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of dev digest..."


Today's Topics:

   1. Re: [PATCH v8 1/4] lib/ether: optimize struct
      rte_eth_tunnel_filter_conf (Thomas Monjalon)
   2. Re: [PATCH v8 0/4] This patch set adds tunnel filter support
      for IP in GRE on i40e. (Thomas Monjalon)
   3. dpdk hash lookup function crashed (segment fault) (??)
   4. Re: [PATCH v4 0/4] Add PCAP support to source and sink port
      (Thomas Monjalon)
   5. Re: [PATCH v3] ip_pipeline: add load balancing function to
      pass-through pipeline (Thomas Monjalon)
   6. Re: [PATCH] app/test-pmd: add support for zero rx and tx
      queues (Thomas Monjalon)


----------------------------------------------------------------------

Message: 1
Date: Sun, 13 Mar 2016 13:01:05 +0100
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: Jingjing Wu <jingjing.wu@intel.com>, xutao.sun@intel.com
Cc: dev@dpdk.org, helin.zhang@intel.com, Jijiang Liu
<jijiang.liu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v8 1/4] lib/ether: optimize struct
rte_eth_tunnel_filter_conf
Message-ID: <3488408.ByHyY3dxP7@xps13>
Content-Type: text/plain; charset="us-ascii"

2016-03-10 11:05, Jingjing Wu:
> From: Xutao Sun <xutao.sun@intel.com>
> 
> Change the fields of outer_mac and inner_mac in struct
> rte_eth_tunnel_filter_conf from pointer to struct in order to
> keep the code's readability.

It breaks compilation of examples/tep_termination.

> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -6628,8 +6628,10 @@ cmd_tunnel_filter_parsed(void *parsed_result,
>  struct rte_eth_tunnel_filter_conf tunnel_filter_conf;
>  int ret = 0;
>  
> - tunnel_filter_conf.outer_mac = &res->outer_mac;
> - tunnel_filter_conf.inner_mac = &res->inner_mac;
> + rte_memcpy(&tunnel_filter_conf.outer_mac, &res->outer_mac,
> + ETHER_ADDR_LEN);
> + rte_memcpy(&tunnel_filter_conf.inner_mac, &res->inner_mac,
> + ETHER_ADDR_LEN);

Please use ether_addr_copy().

> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -5839,10 +5839,10 @@ i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
>  }
>  pfilter = cld_filter;
>  
> - (void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac,
> - sizeof(struct ether_addr));
> - (void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac,
> - sizeof(struct ether_addr));
> + (void)rte_memcpy(&pfilter->outer_mac, &tunnel_filter->outer_mac,
> + ETHER_ADDR_LEN);
> + (void)rte_memcpy(&pfilter->inner_mac, &tunnel_filter->inner_mac,
> + ETHER_ADDR_LEN);

As already commented in January, please stop this useless return cast.

There is a dedicated function to copy MAC addresses:
ether_addr_copy()


------------------------------

Message: 2
Date: Sun, 13 Mar 2016 15:18:37 +0100
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: Jingjing Wu <jingjing.wu@intel.com>, xutao.sun@intel.com
Cc: dev@dpdk.org, helin.zhang@intel.com
Subject: Re: [dpdk-dev] [PATCH v8 0/4] This patch set adds tunnel
filter support for IP in GRE on i40e.
Message-ID: <2132081.GGHc1VANAT@xps13>
Content-Type: text/plain; charset="us-ascii"

2016-03-10 11:05, Jingjing Wu:
> Xutao Sun (4):
>   lib/ether: optimize the 'rte_eth_tunnel_filter_conf' structure
>   lib/ether: add IP in GRE type
>   driver/i40e: implement tunnel filter for IP in GRE
>   app/test-pmd: test tunnel filter for IP in GRE

I've done the changes for ether_addr_copy and fixed tep_termination.
Applied, thanks


------------------------------

Message: 3
Date: Sun, 13 Mar 2016 22:38:14 +0800 (CST)
From: ?? <zhangwqh@126.com>
To: dev@dpdk.org
Subject: [dpdk-dev] dpdk hash lookup function crashed (segment fault)
Message-ID: <44245b2a.50bb.1537069ca54.Coremail.zhangwqh@126.com>
Content-Type: text/plain; charset=GBK

Hi all, 
When I use the DPDK hash lookup function, I hit a segmentation fault. Can anybody help me figure out why this happens? Below I describe what I am trying to do, along with the relevant pieces of code and my debug output.


The problem occurs in the DPDK multi-process client/server example, dpdk-2.2.0/examples/multi_process/client_server_mp.
My aim is for the server to create a hash table and then share it with the client: the client writes to the hash table and the server reads from it. I am using the DPDK hash table. The server creates the hash table (the table plus its entry array) and I pass the table's address to the client through a memzone. In the client, the second lookup triggers a segmentation fault and the process crashes. The related code follows.
create hash table function:

struct onvm_ft*
onvm_ft_create(int cnt, int entry_size) {
        struct rte_hash* hash;
        struct onvm_ft* ft;
        struct rte_hash_parameters ipv4_hash_params = {
            .name = NULL,
            .entries = cnt,
            .key_len = sizeof(struct onvm_ft_ipv4_5tuple),
            .hash_func = NULL,
            .hash_func_init_val = 0,
        };

        char s[64];
        /* create ipv4 hash table. use core number and cycle counter to get a unique name. */
        ipv4_hash_params.name = s;
        ipv4_hash_params.socket_id = rte_socket_id();
        snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), rte_get_tsc_cycles());
        hash = rte_hash_create(&ipv4_hash_params);
        if (hash == NULL) {
                return NULL;
        }
        ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);
        if (ft == NULL) {
                rte_hash_free(hash);
                return NULL;
        }
        ft->hash = hash;
        ft->cnt = cnt;
        ft->entry_size = entry_size;
        /* Create data array for storing values */
        ft->data = rte_calloc("entry", cnt, entry_size, 0);
        if (ft->data == NULL) {
                rte_hash_free(hash);
                rte_free(ft);
                return NULL;
        }
        return ft;
}




related structure:

struct onvm_ft {
        struct rte_hash* hash;
        char* data;
        int cnt;
        int entry_size;
};




On the server side, I call the create function and use a memzone to share the result with the client. The following is what I do:

related variables:

struct onvm_ft *sdn_ft;
struct onvm_ft **sdn_ft_p;
const struct rte_memzone *mz_ftp;

        sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));
        if (sdn_ft == NULL) {
                rte_exit(EXIT_FAILURE, "Unable to create flow table\n");
        }
        mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),
                                  rte_socket_id(), NO_FLAGS);
        if (mz_ftp == NULL) {
                rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow table pointer\n");
        }
        memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));
        sdn_ft_p = mz_ftp->addr;
        *sdn_ft_p = sdn_ft;




On the client side:

struct onvm_ft *sdn_ft;

static void
map_flow_table(void) {
        const struct rte_memzone *mz_ftp;
        struct onvm_ft **ftp;

        mz_ftp = rte_memzone_lookup(MZ_FTP_INFO);
        if (mz_ftp == NULL)
                rte_exit(EXIT_FAILURE, "Cannot get flow table pointer\n");
        ftp = mz_ftp->addr;
        sdn_ft = *ftp;
}




The following is my debug output. I set a breakpoint on the lookup line; to narrow down the problem I send only one flow, so the first and second lookups see identical packets.

The first lookup works. I print out the parameters: inside the onvm_ft_lookup function, if a matching entry exists, its address is returned through flow_entry.

Breakpoint 1, datapath_handle_read (dp=0x7ffff00008c0) at /home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191
191                                 ret = onvm_ft_lookup(sdn_ft, fk, (char**)&flow_entry);
(gdb) print *sdn_ft
$1 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 56}
(gdb) print *fk
$2 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) s
onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99d00, data=0x7ffff68d2b00) at /home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151
151 softrss = onvm_softrss(key);
(gdb) n
152         printf("software rss %d\n", softrss);
(gdb)
software rss 403183624
154         tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
(gdb) print table->hash
$3 = (struct rte_hash *) 0x7fff32cce740
(gdb) print *key
$4 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) print softrss
$5 = 403183624
(gdb) c


After I hit c, the second lookup runs:

Breakpoint 1, datapath_handle_read (dp=0x7ffff00008c0) at /home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191
191                                 ret = onvm_ft_lookup(sdn_ft, fk, (char**)&flow_entry);
(gdb) print *sdn_ft
$7 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 56}
(gdb) print *fk
$8 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) s
onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99c00, data=0x7ffff68d2b00) at /home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151
151 softrss = onvm_softrss(key);
(gdb) n
152         printf("software rss %d\n", softrss);
(gdb) n
software rss 403183624
154         tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
(gdb) print table->hash
$9 = (struct rte_hash *) 0x7fff32cce740
(gdb) print *key
$10 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) print softrss
$11 = 403183624
(gdb) n

Program received signal SIGSEGV, Segmentation fault.
0x000000000045fb97 in __rte_hash_lookup_bulk ()
(gdb) bt
#0  0x000000000045fb97 in __rte_hash_lookup_bulk ()
#1  0x0000000000000000 in ?? ()



From the debug output, the parameters are exactly the same both times, so I do not understand why the second lookup causes a segmentation fault.

My lookup function:

int
onvm_ft_lookup(struct onvm_ft* table, struct onvm_ft_ipv4_5tuple *key, char** data) {
        int32_t tbl_index;
        uint32_t softrss;

        softrss = onvm_softrss(key);
        printf("software rss %d\n", softrss);

        tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
        if (tbl_index >= 0) {
                *data = onvm_ft_get_data(table, tbl_index);
                return 0;
        } else {
                return tbl_index;
        }
}

------------------------------

Message: 4
Date: Sun, 13 Mar 2016 15:58:43 +0100
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: Fan Zhang <roy.fan.zhang@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v4 0/4] Add PCAP support to source and
sink port
Message-ID: <1468838.gGGXln9niK@xps13>
Content-Type: text/plain; charset="us-ascii"

2016-03-11 17:08, Fan Zhang:
> This patchset adds a feature to the source and sink type ports in the
> librte_port library, and to examples/ip_pipeline. Originally, source/sink
> ports act as the input and output of a NULL packet generator. This patchset
> enables them to read from and write to a specified PCAP file, to generate
> and dump packets.
[...]
> Fan Zhang (4):
>   lib/librte_port: add PCAP file support to source port
>   example/ip_pipeline: add PCAP file support
>   lib/librte_port: add packet dumping to PCAP file support in sink port
>   examples/ip_pipeline: add packets dumping to PCAP file support

Applied, thanks


------------------------------

Message: 5
Date: Sun, 13 Mar 2016 16:09:05 +0100
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: Jasvinder Singh <jasvinder.singh@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v3] ip_pipeline: add load balancing
function to pass-through pipeline
Message-ID: <1644289.O54HJSKTCM@xps13>
Content-Type: text/plain; charset="us-ascii"

2016-03-10 15:29, Jasvinder Singh:
> The pass-through pipeline implementation is extended with load balancing
> function. This function allows uniform distribution of the packets among
> its output ports. For packets distribution, any application level logic
> can be applied. For instance, in this implementation, hash value
> computed over specific header fields of the incoming packets has been
> used to spread traffic uniformly among the output ports.

Applied, thanks


------------------------------

Message: 6
Date: Sun, 13 Mar 2016 16:16:12 +0100
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: "Pattan, Reshma" <reshma.pattan@intel.com>
Cc: dev@dpdk.org, "De Lara Guarch, Pablo"
<pablo.de.lara.guarch@intel.com>
Subject: Re: [dpdk-dev] [PATCH] app/test-pmd: add support for zero rx
and tx queues
Message-ID: <1482267.vDSkJW1rPM@xps13>
Content-Type: text/plain; charset="us-ascii"

> > Added testpmd support to validate zero nb_rxq/nb_txq
> > changes of librte_ether.
> > 
> > Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
> 
> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>

Applied, thanks


End of dev Digest, Vol 83, Issue 176
************************************
