DPDK patches and discussions
 help / color / mirror / Atom feed
* [dpdk-dev] dpdk hash lookup function crashed (segment fault)
@ 2016-03-13 14:38 张伟
  2016-03-14 13:02 ` Kyle Larose
  0 siblings, 1 reply; 3+ messages in thread
From: 张伟 @ 2016-03-13 14:38 UTC (permalink / raw)
  To: dev

Hi all,
When I use the DPDK hash lookup function, I hit a segmentation fault. Can anybody help me figure out why this happens? Below I describe what I am trying to do, the relevant pieces of code, and my debug output.


The problem occurs in the DPDK multi-process client/server example (dpdk-2.2.0/examples/multi_process/client_server_mp).
My aim is for the server to create a hash table and then share it with the client: the client writes the hash table, and the server reads it. I am using the DPDK hash table. The server creates the hash table (the table structure plus the array of entries) and I pass the table's address to the client through a memzone. In the client, the second lookup hits a segmentation fault and the process crashes. The related code is below.
The create function:

struct onvm_ft*
onvm_ft_create(int cnt, int entry_size) {
        struct rte_hash* hash;
        struct onvm_ft* ft;
        struct rte_hash_parameters ipv4_hash_params = {
            .name = NULL,
            .entries = cnt,
            .key_len = sizeof(struct onvm_ft_ipv4_5tuple),
            .hash_func = NULL,
            .hash_func_init_val = 0,
        };

        char s[64];
        /* create ipv4 hash table. use core number and cycle counter to get a unique name. */
        ipv4_hash_params.name = s;
        ipv4_hash_params.socket_id = rte_socket_id();
        snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), rte_get_tsc_cycles());
        hash = rte_hash_create(&ipv4_hash_params);
        if (hash == NULL) {
                return NULL;
        }
        ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);
        if (ft == NULL) {
                rte_hash_free(hash);
                return NULL;
        }
        ft->hash = hash;
        ft->cnt = cnt;
        ft->entry_size = entry_size;
        /* Create data array for storing values */
        ft->data = rte_calloc("entry", cnt, entry_size, 0);
        if (ft->data == NULL) {
                rte_hash_free(hash);
                rte_free(ft);
                return NULL;
        }
        return ft;
}

The related structure:

struct onvm_ft {
        struct rte_hash* hash;
        char* data;
        int cnt;
        int entry_size;
};

On the server side, I call the create function and use a memzone to share the table with the client. This is what I do:

The related variables:

struct onvm_ft *sdn_ft;
struct onvm_ft **sdn_ft_p;
const struct rte_memzone *mz_ftp;

        sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));
        if (sdn_ft == NULL) {
                rte_exit(EXIT_FAILURE, "Unable to create flow table\n");
        }
        mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),
                                  rte_socket_id(), NO_FLAGS);
        if (mz_ftp == NULL) {
                rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow table pointer\n");
        }
        memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));
        sdn_ft_p = mz_ftp->addr;
        *sdn_ft_p = sdn_ft;

On the client side:

struct onvm_ft *sdn_ft;

static void
map_flow_table(void) {
        const struct rte_memzone *mz_ftp;
        struct onvm_ft **ftp;

        mz_ftp = rte_memzone_lookup(MZ_FTP_INFO);
        if (mz_ftp == NULL)
                rte_exit(EXIT_FAILURE, "Cannot get flow table pointer\n");
        ftp = mz_ftp->addr;
        sdn_ft = *ftp;
}

The following is my debug output. I set a breakpoint on the lookup call. To narrow down the problem, I send only one flow, so the packets in the first and second lookups are identical.

The first lookup works; I print out the parameters. Inside onvm_ft_lookup, if there is a matching entry, its address is returned through flow_entry.

Breakpoint 1, datapath_handle_read (dp=0x7ffff00008c0) at /home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191
191                                 ret = onvm_ft_lookup(sdn_ft, fk, (char**)&flow_entry);
(gdb) print *sdn_ft
$1 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 56}
(gdb) print *fk
$2 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) s
onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99d00, data=0x7ffff68d2b00) at /home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151
151 softrss = onvm_softrss(key);
(gdb) n
152         printf("software rss %d\n", softrss);
(gdb)
software rss 403183624
154         tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
(gdb) print table->hash
$3 = (struct rte_hash *) 0x7fff32cce740
(gdb) print *key
$4 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) print softrss
$5 = 403183624
(gdb) c

After I hit c, the second lookup runs:

Breakpoint 1, datapath_handle_read (dp=0x7ffff00008c0) at /home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191
191                                 ret = onvm_ft_lookup(sdn_ft, fk, (char**)&flow_entry);
(gdb) print *sdn_ft
$7 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 56}
(gdb) print *fk
$8 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) s
onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99c00, data=0x7ffff68d2b00) at /home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151
151 softrss = onvm_softrss(key);
(gdb) n
152         printf("software rss %d\n", softrss);
(gdb) n
software rss 403183624
154         tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
(gdb) print table->hash
$9 = (struct rte_hash *) 0x7fff32cce740
(gdb) print *key
$10 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) print softrss
$11 = 403183624
(gdb) n

Program received signal SIGSEGV, Segmentation fault.
0x000000000045fb97 in __rte_hash_lookup_bulk ()
(gdb) bt
#0  0x000000000045fb97 in __rte_hash_lookup_bulk ()
#1  0x0000000000000000 in ?? ()

From the debug output, the parameters are exactly the same both times, so I do not understand why the second lookup segfaults.

My lookup function:

int
onvm_ft_lookup(struct onvm_ft* table, struct onvm_ft_ipv4_5tuple *key, char** data) {
        int32_t tbl_index;
        uint32_t softrss;

        softrss = onvm_softrss(key);
        printf("software rss %d\n", softrss);

        tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
        if (tbl_index >= 0) {
                *data = onvm_ft_get_data(table, tbl_index);
                return 0;
        }
        else {
                return tbl_index;
        }
}
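
For completeness, onvm_ft_get_data (not shown above) just maps the returned index to its slot in the data array, roughly like this sketch:

static inline char*
onvm_ft_get_data(struct onvm_ft* table, int32_t tbl_index) {
        /* each hash index owns one fixed-size slot in the data array */
        return &table->data[tbl_index * table->entry_size];
}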

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] dpdk hash lookup function crashed (segment fault)
  2016-03-13 14:38 [dpdk-dev] dpdk hash lookup function crashed (segment fault) 张伟
@ 2016-03-14 13:02 ` Kyle Larose
  2016-03-15  1:01   ` 张伟
  0 siblings, 1 reply; 3+ messages in thread
From: Kyle Larose @ 2016-03-14 13:02 UTC (permalink / raw)
  To: 张伟; +Cc: dev

Hello,

On Sun, Mar 13, 2016 at 10:38 AM, 张伟 <zhangwqh@126.com> wrote:
> Hi all,
> When I use the DPDK hash lookup function, I hit a segmentation fault. Can anybody help me figure out why this happens? Below I describe what I am trying to do, the relevant pieces of code, and my debug output.
>
>
> The problem occurs in the DPDK multi-process client/server example (dpdk-2.2.0/examples/multi_process/client_server_mp).
> My aim is for the server to create a hash table and then share it with the client: the client writes the hash table, and the server reads it. I am using the DPDK hash table. The server creates the hash table (the table structure plus the array of entries) and I pass the table's address to the client through a memzone. In the client, the second lookup hits a segmentation fault and the process crashes. The related code is below.
> The create function:
>

Let me see if I understand correctly. You're allocating a hash table
on huge-page backed memory.
You pass a pointer to that table over a shared memory structure.

Is that correct?

I don't think something being in a huge-page necessarily means it is
shared. That is, allocating your hash table using rte_calloc in the
primary isn't sufficient to make it available in the secondary.

Further, even if it were, I do not think that it would work, because
there are a bunch of pointers involved (e.g. ft->data). As far as I'm
aware, each process has its own "view" of the shared memory: it maps
it into its own local address space, and gives it an address according
to what is currently available there. If I remember right, the
multi-process section of the DPDK programmer's guide also warns that
librte_hash uses function pointers internally, so a table created in
one process cannot safely be used from another, which would fit a
crash inside __rte_hash_lookup_bulk.

Most of my IPC with DPDK has involved passing packets around; I'm not
sure what the strategy is for hash tables. Synchronization issues
aside, I think you will need to put the hash table in its entirety in
shared memory, and avoid global pointers: either offset into the
shared memory, or have a structure with no pointers at all. From that,
you can probably build up local pointers.
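
Something like this rough sketch is what I mean by offsets (untested,
and all of the names here are invented for illustration):

#include <stddef.h>   /* size_t */

/* Everything lives in one memzone; the header stores an offset to the
 * entry array instead of a pointer, so it is valid in every process. */
struct shared_ft {
        int cnt;
        int entry_size;
        size_t data_off;   /* offset of the entry array from the struct base */
};

static inline char *
shared_ft_entry(struct shared_ft *ft, int index)
{
        /* rebuild a local pointer from the base address this process sees */
        return (char *)ft + ft->data_off + (size_t)index * ft->entry_size;
}

Each process recomputes its pointers from wherever the memzone happens
to be mapped in its own address space, so the differing virtual
addresses stop mattering.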

Maybe somebody else can correct me or come up with a better idea.

Hope that helps,

Kyle


> [rest of the original message snipped -- the full code listings and gdb traces appear above]

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] dpdk hash lookup function crashed (segment fault)
  2016-03-14 13:02 ` Kyle Larose
@ 2016-03-15  1:01   ` 张伟
  0 siblings, 0 replies; 3+ messages in thread
From: 张伟 @ 2016-03-15  1:01 UTC (permalink / raw)
  To: Kyle Larose; +Cc: dev

Thanks for your reply! I solved my problem with a patch that someone posted to the mailing list last night.


At 2016-03-14 21:02:13, "Kyle Larose" <eomereadig@gmail.com> wrote:
>[snip -- Kyle's reply, and the original message it quoted, appear in full above]

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2016-03-15  1:01 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-13 14:38 [dpdk-dev] dpdk hash lookup function crashed (segment fault) 张伟
2016-03-14 13:02 ` Kyle Larose
2016-03-15  1:01   ` 张伟

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).