DPDK patches and discussions
* Re: [dpdk-dev] DPDK and custom memory
       [not found] <B1E02754DD4D4F4EAC7A14C1202CE62073E8E330@ORSMSX102.amr.corp.intel.com>
@ 2014-08-30 13:03 ` Thomas Monjalon
  2014-08-31  8:27   ` Alex Markuze
  2014-09-02 13:47 ` Neil Horman
  1 sibling, 1 reply; 6+ messages in thread
From: Thomas Monjalon @ 2014-08-30 13:03 UTC
  To: Saygin, Artur; +Cc: dev

Hello,

2014-08-29 18:40, Saygin, Artur:
> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> memory regions <system, PCI, etc>.

Does it mean Intel is making an FPGA-based NIC?

> Is there a way to make DPDK use that exact memory?

Maybe I don't understand the question well, because it doesn't seem really
different from what other PMDs do.
Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
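
For illustration, a minimal sketch of what the uio route boils down to: poking
a PCI BAR from user space through sysfs. The PCI address "0000:03:00.0" and the
BAR length are placeholders, not anything from this thread:

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map BAR0 of the device through sysfs so its registers can be read
     * and written from user space.  Returns NULL on failure. */
    static volatile uint32_t *map_bar0(size_t bar_len)
    {
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0", O_RDWR);
        if (fd < 0)
            return NULL;
        void *bar = mmap(NULL, bar_len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        close(fd);              /* the mapping survives the close */
        return bar == MAP_FAILED ? NULL : (volatile uint32_t *)bar;
    }

With VFIO the device fd is obtained through the container/group ioctls instead,
and the IOMMU restricts what the device itself can reach.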

> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> start here.

It's a pleasure to receive new drivers.
Welcome here :)

-- 
Thomas


* Re: [dpdk-dev] DPDK and custom memory
  2014-08-30 13:03 ` [dpdk-dev] DPDK and custom memory Thomas Monjalon
@ 2014-08-31  8:27   ` Alex Markuze
       [not found]     ` <B1E02754DD4D4F4EAC7A14C1202CE62073E8E8C7@ORSMSX102.amr.corp.intel.com>
  0 siblings, 1 reply; 6+ messages in thread
From: Alex Markuze @ 2014-08-31  8:27 UTC
  To: Thomas Monjalon; +Cc: dev, Saygin, Artur

Artur, I don't have the details of what you are trying to achieve, but
it sounds like something that is covered by the IOMMU, in SW or HW.  The
IOMMU creates an IOVA (I/O virtual address) that the NIC can access; the
range is controlled with the flags passed to the dma_map functions.

So I understand your question this way: how does DPDK work on an
IOMMU-enabled system, and can you influence the mapping?
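
As a rough sketch of what "influencing the mapping" can look like with the VFIO
type1 IOMMU interface, assuming a /dev/vfio/vfio container fd has already been
set up, the device's group attached, and VFIO_SET_IOMMU done (the buffer and
sizes below are illustrative only):

    #include <linux/vfio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    /* Map an anonymous buffer at an IOVA of our choosing so the NIC can
     * DMA to it; the flags control read/write access from the device. */
    static int map_buffer_at_iova(int container, uint64_t iova, size_t len)
    {
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return -1;

        struct vfio_iommu_type1_dma_map dma_map = {
            .argsz = sizeof(dma_map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)buf,
            .iova  = iova,          /* the address the device will see */
            .size  = len,
        };
        return ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
    }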


On Sat, Aug 30, 2014 at 4:03 PM, Thomas Monjalon
<thomas.monjalon@6wind.com> wrote:
> Hello,
>
> 2014-08-29 18:40, Saygin, Artur:
>> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
>> memory regions <system, PCI, etc>.
>
> Does it mean Intel is making an FPGA-based NIC?
>
>> Is there a way to make DPDK use that exact memory?
>
> Maybe I don't understand the question well, because it doesn't seem really
> different from what other PMDs do.
> Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
>
>> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
>> start here.
>
> It's a pleasure to receive new drivers.
> Welcome here :)
>
> --
> Thomas


* Re: [dpdk-dev] DPDK and custom memory
       [not found] <B1E02754DD4D4F4EAC7A14C1202CE62073E8E330@ORSMSX102.amr.corp.intel.com>
  2014-08-30 13:03 ` [dpdk-dev] DPDK and custom memory Thomas Monjalon
@ 2014-09-02 13:47 ` Neil Horman
  1 sibling, 0 replies; 6+ messages in thread
From: Neil Horman @ 2014-09-02 13:47 UTC
  To: Saygin, Artur; +Cc: dev

On Fri, Aug 29, 2014 at 06:40:08PM +0000, Saygin, Artur wrote:
> Hello DPDK experts,
> 
> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain memory regions <system, PCI, etc>. Is there a way to make DPDK use that exact memory?
> 
> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd start here.
> 
There's no real custom memory need there.  What you need access to is covered by
interfaces like VFIO, which DPDK uses fairly regularly within its other hardware
PMDs.
Neil

> Sincerely,
> Artur Saygin
> 
> 


* Re: [dpdk-dev] DPDK and custom memory
       [not found]     ` <B1E02754DD4D4F4EAC7A14C1202CE62073E8E8C7@ORSMSX102.amr.corp.intel.com>
@ 2014-09-03 10:03       ` Neil Horman
  2014-09-19  0:13         ` Saygin, Artur
  0 siblings, 1 reply; 6+ messages in thread
From: Neil Horman @ 2014-09-03 10:03 UTC
  To: Saygin, Artur; +Cc: dev

On Wed, Sep 03, 2014 at 01:17:53AM +0000, Saygin, Artur wrote:
> Thanks for the prompt responses!
> 
> To clarify, the question is not about accessing a NIC, but about a NIC accessing a very specific block of physical memory, possibly non-kernel managed.
> 
Still not sure what you mean here by non-kernel managed.  If memory can be
accessed from the CPU, then the kernel can allocate, free and access it; that's
it.  If the memory isn't accessible from the CPU, then this is out of our hands
anyway.  The only question is how you access it.

> Per my understanding, memory that the rte_mempool_create API obtains is kernel-managed, grabbed by DPDK via hugetlbfs, with address selection being outside of application control. Is there a way around that? As in, have DPDK allocate buffer memory from address XYZ only...
Nope, DPDK allocates blocks of memory without regard to the operation of the
NIC.  If you have some odd NIC that requires access to a specific physical
memory range, then it is your responsibility to reserve that memory and author
the PMD in such a way that it communicates with the NIC via that memory.
Usually this is done via a combination of operating system facilities (e.g. the
Linux kernel command-line option memmap or a runtime mmap of the
/dev/mem device).
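
For illustration, a sketch of that /dev/mem route, assuming the range was
reserved with something like memmap=64M$0x80000000 on the kernel command line
and that the running kernel permits mapping it through /dev/mem (the address
and size used by the caller are placeholders):

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map a physical range that was kept away from the kernel allocator
     * so the PMD can hand it to the NIC.  Returns NULL on failure. */
    static void *map_reserved_phys(uint64_t phys_base, size_t len)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0)
            return NULL;
        void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, (off_t)phys_base);
        close(fd);
        return va == MAP_FAILED ? NULL : va;
    }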

Regards
Neil

> 
> If VFIO / IOMMU is still the answer - I'll poke in that direction. If not - any additional insight is appreciated.
> 
> -----Original Message-----
> From: Alex Markuze [mailto:alex@weka.io] 
> Sent: Sunday, August 31, 2014 1:27 AM
> To: Thomas Monjalon
> Cc: Saygin, Artur; dev@dpdk.org
> Subject: Re: [dpdk-dev] DPDK and custom memory
> 
> Artur, I don't have the details of what you are trying to achieve, but
> it sounds like something that is covered by the IOMMU, in SW or HW.  The
> IOMMU creates an IOVA (I/O virtual address) that the NIC can access; the
> range is controlled with the flags passed to the dma_map functions.
> 
> So I understand your question this way: how does DPDK work on an
> IOMMU-enabled system, and can you influence the mapping?
> 
> 
> On Sat, Aug 30, 2014 at 4:03 PM, Thomas Monjalon
> <thomas.monjalon@6wind.com> wrote:
> > Hello,
> >
> > 2014-08-29 18:40, Saygin, Artur:
> >> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> >> memory regions <system, PCI, etc>.
> >
> > Does it mean Intel is making an FPGA-based NIC?
> >
> >> Is there a way to make DPDK use that exact memory?
> >
> > Maybe I don't understand the question well, because it doesn't seem really
> > different from what other PMDs do.
> > Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
> >
> >> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> >> start here.
> >
> > It's a pleasure to receive new drivers.
> > Welcome here :)
> >
> > --
> > Thomas


* Re: [dpdk-dev] DPDK and custom memory
  2014-09-03 10:03       ` Neil Horman
@ 2014-09-19  0:13         ` Saygin, Artur
  2014-09-19 10:18           ` Neil Horman
  0 siblings, 1 reply; 6+ messages in thread
From: Saygin, Artur @ 2014-09-19  0:13 UTC
  To: Neil Horman; +Cc: dev

FWIW: rte_mempool_xmem_create turned out to be exactly what the use case requires. It's not without limitations but is probably better than having to copy buffers between device and DPDK memory.
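
For anyone landing on this thread later, a rough sketch of what that can look
like; the argument list follows the rte_mempool.h of this era, and the element
count, sizes and page layout below are made-up placeholders, so check the
header for your DPDK version:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_memory.h>
    #include <rte_mempool.h>

    /* Build an mbuf pool on top of memory the application obtained and
     * mapped itself: vaddr is the virtual mapping, paddr[] holds the
     * physical address of each page, pg_num/pg_shift describe the pages. */
    static struct rte_mempool *
    pool_on_external_mem(void *vaddr, const phys_addr_t paddr[],
                         uint32_t pg_num, uint32_t pg_shift)
    {
        return rte_mempool_xmem_create("ext_pool",
                8192,                                         /* mbuf count */
                2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM,
                256,                                          /* cache size */
                sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL,                  /* pool init  */
                rte_pktmbuf_init, NULL,                       /* mbuf init  */
                rte_socket_id(), 0,                           /* socket, flags */
                vaddr, paddr, pg_num, pg_shift);
    }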

-----Original Message-----
From: Neil Horman [mailto:nhorman@tuxdriver.com] 
Sent: Wednesday, September 03, 2014 3:04 AM
To: Saygin, Artur
Cc: Alex Markuze; Thomas Monjalon; dev@dpdk.org
Subject: Re: [dpdk-dev] DPDK and custom memory

On Wed, Sep 03, 2014 at 01:17:53AM +0000, Saygin, Artur wrote:
> > Thanks for the prompt responses!
> > 
> > To clarify, the question is not about accessing a NIC, but about a NIC accessing a very specific block of physical memory, possibly non-kernel managed.
> 
> Still not sure what you mean here by non-kernel managed.  If memory can be
> accessed from the CPU, then the kernel can allocate, free and access it; that's
> it.  If the memory isn't accessible from the CPU, then this is out of our hands
> anyway.  The only question is how you access it.

> > Per my understanding, memory that the rte_mempool_create API obtains is kernel-managed, grabbed by DPDK via hugetlbfs, with address selection being outside of application control. Is there a way around that? As in, have DPDK allocate buffer memory from address XYZ only...
> Nope, DPDK allocates blocks of memory without regard to the operation of the
> NIC.  If you have some odd NIC that requires access to a specific physical
> memory range, then it is your responsibility to reserve that memory and author
> the PMD in such a way that it communicates with the NIC via that memory.
> Usually this is done via a combination of operating system facilities (e.g. the
> Linux kernel command-line option memmap or a runtime mmap of the
> /dev/mem device).

Regards
Neil

> 
> If VFIO / IOMMU is still the answer - I'll poke in that direction. If not - any additional insight is appreciated.
> 
> -----Original Message-----
> From: Alex Markuze [mailto:alex@weka.io] 
> Sent: Sunday, August 31, 2014 1:27 AM
> To: Thomas Monjalon
> Cc: Saygin, Artur; dev@dpdk.org
> Subject: Re: [dpdk-dev] DPDK and custom memory
> 
> Artur, I don't have the details of what you are trying to achieve, but
> it sounds like something that is covered by the IOMMU, in SW or HW.  The
> IOMMU creates an IOVA (I/O virtual address) that the NIC can access; the
> range is controlled with the flags passed to the dma_map functions.
> 
> So I understand your question this way: how does DPDK work on an
> IOMMU-enabled system, and can you influence the mapping?
> 
> 
> On Sat, Aug 30, 2014 at 4:03 PM, Thomas Monjalon
> <thomas.monjalon@6wind.com> wrote:
> > Hello,
> >
> > 2014-08-29 18:40, Saygin, Artur:
> >> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> >> memory regions <system, PCI, etc>.
> >
> > Does it mean Intel is making an FPGA-based NIC?
> >
> >> Is there a way to make DPDK use that exact memory?
> >
> > Maybe I don't understand the question well, because it doesn't seem really
> > different from what other PMDs do.
> > Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
> >
> >> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> >> start here.
> >
> > It's a pleasure to receive new drivers.
> > Welcome here :)
> >
> > --
> > Thomas


* Re: [dpdk-dev] DPDK and custom memory
  2014-09-19  0:13         ` Saygin, Artur
@ 2014-09-19 10:18           ` Neil Horman
  0 siblings, 0 replies; 6+ messages in thread
From: Neil Horman @ 2014-09-19 10:18 UTC
  To: Saygin, Artur; +Cc: dev

On Fri, Sep 19, 2014 at 12:13:55AM +0000, Saygin, Artur wrote:
> FWIW: rte_mempool_xmem_create turned out to be exactly what the use case requires. It's not without limitations but is probably better than having to copy buffers between device and DPDK memory.
> 
Ah, so it's not non-kernel-managed memory you were after; it was a way to make
non-DPDK-managed memory get managed by DPDK.  That makes more sense.
Neil

> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com] 
> Sent: Wednesday, September 03, 2014 3:04 AM
> To: Saygin, Artur
> Cc: Alex Markuze; Thomas Monjalon; dev@dpdk.org
> Subject: Re: [dpdk-dev] DPDK and custom memory
> 
> On Wed, Sep 03, 2014 at 01:17:53AM +0000, Saygin, Artur wrote:
> > Thanks for the prompt responses!
> > 
> > To clarify, the question is not about accessing a NIC, but about a NIC accessing a very specific block of physical memory, possibly non-kernel managed.
> > 
> Still not sure what you mean here by non-kernel managed.  If memory can be
> accessed from the CPU, then the kernel can allocate, free and access it; that's
> it.  If the memory isn't accessible from the CPU, then this is out of our hands
> anyway.  The only question is how you access it.
> 
> > Per my understanding, memory that the rte_mempool_create API obtains is kernel-managed, grabbed by DPDK via hugetlbfs, with address selection being outside of application control. Is there a way around that? As in, have DPDK allocate buffer memory from address XYZ only...
> Nope, DPDK allocates blocks of memory without regard to the operation of the
> NIC.  If you have some odd NIC that requires access to a specific physical
> memory range, then it is your responsibility to reserve that memory and author
> the PMD in such a way that it communicates with the NIC via that memory.
> Usually this is done via a combination of operating system facilities (e.g. the
> Linux kernel command-line option memmap or a runtime mmap of the
> /dev/mem device).
> 
> Regards
> Neil
> 
> > 
> > If VFIO / IOMMU is still the answer - I'll poke in that direction. If not - any additional insight is appreciated.
> > 
> > -----Original Message-----
> > From: Alex Markuze [mailto:alex@weka.io] 
> > Sent: Sunday, August 31, 2014 1:27 AM
> > To: Thomas Monjalon
> > Cc: Saygin, Artur; dev@dpdk.org
> > Subject: Re: [dpdk-dev] DPDK and custom memory
> > 
> > Artur, I don't have the details of what you are trying to achieve, but
> > it sounds like something that is covered by the IOMMU, in SW or HW.  The
> > IOMMU creates an IOVA (I/O virtual address) that the NIC can access; the
> > range is controlled with the flags passed to the dma_map functions.
> > 
> > So I understand your question this way: how does DPDK work on an
> > IOMMU-enabled system, and can you influence the mapping?
> > 
> > 
> > On Sat, Aug 30, 2014 at 4:03 PM, Thomas Monjalon
> > <thomas.monjalon@6wind.com> wrote:
> > > Hello,
> > >
> > > 2014-08-29 18:40, Saygin, Artur:
> > >> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> > >> memory regions <system, PCI, etc>.
> > >
> > > Does it mean Intel is making an FPGA-based NIC?
> > >
> > >> Is there a way to make DPDK use that exact memory?
> > >
> > > Maybe I don't understand the question well, because it doesn't seem really
> > > different from what other PMDs do.
> > > Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
> > >
> > >> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> > >> start here.
> > >
> > > It's a pleasure to receive new drivers.
> > > Welcome here :)
> > >
> > > --
> > > Thomas
> 

