* [Qemu-devel] [RFC] Virt machine memory map
From: Pavel Fedin @ 2015-07-20  8:55 UTC
  To: qemu-devel; +Cc: 'Peter Maydell'

 Hello!

 In our project we are working on very fast paravirtualized network I/O
drivers based on ivshmem. We successfully got ivshmem working on ARM, though
with one hack.
Currently we have:
--- cut ---
    [VIRT_PCIE_MMIO] =          { 0x10000000, 0x2eff0000 },
    [VIRT_PCIE_PIO] =           { 0x3eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM] =          { 0x3f000000, 0x01000000 },
    [VIRT_MEM] =                { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
--- cut ---
 The MMIO region is not large enough for us, because we want a 1GB mapping
for a PCI device. To make it work, we modify the map as follows:
--- cut ---
    [VIRT_PCIE_MMIO] =          { 0x10000000, 0x7eff0000 },
    [VIRT_PCIE_PIO] =           { 0x8eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM] =          { 0x8f000000, 0x01000000 },
    [VIRT_MEM] =                { 0x90000000, 30ULL * 1024 * 1024 * 1024 },
--- cut ---
 The question is: how could we upstream this? I believe modifying the 32-bit
virt memory map this way is not good. Would it be OK to have a different
memory map for 64-bit virt?
 Another possible approach is not to use PCI at all, but plain MMIO, and
simply describe our region in the device tree. This way we work around the
limitation of having only a single PCI MMIO region, and we could happily
place our 1GB device after system RAM.
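 For illustration, a minimal sketch of what that could look like on the QEMU
side, following the style of the other device-tree nodes hw/arm/virt.c already
creates. The function name, the "vendor,ivshmem-region" compatible string and
the addresses are made up for this example, not an existing binding:
--- cut ---
/* Sketch only: describe a plain MMIO shared-memory window in the guest DT. */
static void create_shmem_node(const VirtBoardInfo *vbi)
{
    hwaddr base = 0x800000000ULL;              /* example: 32GB, above the 31GB RAM top */
    hwaddr size = 1ULL * 1024 * 1024 * 1024;   /* the 1GB window */
    char *nodename = g_strdup_printf("/shmem@%" PRIx64, base);

    qemu_fdt_add_subnode(vbi->fdt, nodename);
    qemu_fdt_setprop_string(vbi->fdt, nodename, "compatible",
                            "vendor,ivshmem-region");
    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
                                 2, base, 2, size);
    g_free(nodename);
}
--- cut ---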
 Any opinions?

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia


* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Peter Maydell @ 2015-07-20  9:41 UTC
  To: Pavel Fedin; +Cc: QEMU Developers, Alexander Graf

On 20 July 2015 at 09:55, Pavel Fedin <p.fedin@samsung.com> wrote:
>  Hello!
>
>  In our project we are working on very fast paravirtualized network I/O
> drivers based on ivshmem. We successfully got ivshmem working on ARM,
> though with one hack.
> [...]
>  The question is: how could we upstream this? I believe modifying the
> 32-bit virt memory map this way is not good. Would it be OK to have a
> different memory map for 64-bit virt?

I think the theory we discussed at the time of putting in the PCIe
device was that if we wanted this we'd add support for the other
PCIe memory window (which would then live at somewhere above 4GB).
Alex, can you remember what the idea was?

But to be honest I think we weren't expecting anybody to need
1GB of PCI MMIO space unless it was a video card...

thanks
-- PMM


* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Alexander Graf @ 2015-07-20 11:23 UTC
  To: Peter Maydell, Pavel Fedin; +Cc: QEMU Developers

On 07/20/15 11:41, Peter Maydell wrote:
> On 20 July 2015 at 09:55, Pavel Fedin <p.fedin@samsung.com> wrote:
>> [...]
>>  The question is: how could we upstream this? I believe modifying the
>> 32-bit virt memory map this way is not good. Would it be OK to have a
>> different memory map for 64-bit virt?
> I think the theory we discussed at the time of putting in the PCIe
> device was that if we wanted this we'd add support for the other
> PCIe memory window (which would then live at somewhere above 4GB).
> Alex, can you remember what the idea was?

Yes, pretty much. It would give us an upper bound on the amount of RAM that
we're able to support, but at least we would be able to support big MMIO
regions like the one for ivshmem.

I'm not really sure where to put it though. Depending on your kernel 
config Linux supports somewhere between 39 and 48 or so bits of phys 
address space. And I'd rather not crawl into the PCI hole rat hole that 
we have on x86 ;).

We could of course also put it just above RAM - but then our device tree 
becomes really dynamic and heavily dependent on -m.
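
Just to make the option concrete, a second window would be one more memmap
entry; the name, base and size here are purely illustrative and would have to
respect whatever phys-address-space limit we settle on, e.g.:
--- cut ---
    /* hypothetical 64-bit window, comfortably above RAM and within 39 bits */
    [VIRT_PCIE_MMIO_HIGH] =     { 0x4000000000ULL, 0x100000000ULL },
--- cut ---
The host bridge would of course also have to expose whatever we pick to the
guest via its "ranges" property.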

>
> But to be honest I think we weren't expecting anybody to need
> 1GB of PCI MMIO space unless it was a video card...

Ivshmem was actually the most likely target that I could've thought of 
to require big MMIO regions ;).


Alex


* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Igor Mammedov @ 2015-07-20 13:30 UTC
  To: Alexander Graf
  Cc: Marcel Apfelbaum, Peter Maydell, Pavel Fedin, QEMU Developers

On Mon, 20 Jul 2015 13:23:45 +0200
Alexander Graf <agraf@suse.de> wrote:

> On 07/20/15 11:41, Peter Maydell wrote:
> > On 20 July 2015 at 09:55, Pavel Fedin <p.fedin@samsung.com> wrote:
> >> [...]
> > I think the theory we discussed at the time of putting in the PCIe
> > device was that if we wanted this we'd add support for the other
> > PCIe memory window (which would then live at somewhere above 4GB).
> > Alex, can you remember what the idea was?
> 
> Yes, pretty much. It would give us an upper bound to the amount of RAM 
> that we're able to support, but at least we would be able to support big 
> MMIO regions like for ivshmem.
> 
> I'm not really sure where to put it though. Depending on your kernel 
> config Linux supports somewhere between 39 and 48 or so bits of phys 
> address space. And I'd rather not crawl into the PCI hole rat hole that 
> we have on x86 ;).
> 
> We could of course also put it just above RAM - but then our device tree 
> becomes really dynamic and heavily dependent on -m.
On x86 we've made everything that is not mapped to RAM/MMIO fall through to
the PCI address space; see pc_pci_as_mapping_init().

So we no longer have explicitly mapped PCI regions there, but we still think
in terms of the PCI hole/PCI ranges when it comes to the ACPI description of
the PCI bus, where one needs to specify the ranges available to the bus in
its _CRS.
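
For readers who don't know that code, the trick is roughly the following
(paraphrased, so the comment and exact wording may differ from the current
hw/i386/pc.c): the whole PCI address space is mapped as a low-priority
background region, so RAM and explicitly mapped devices shadow it wherever
they exist.
--- cut ---
static void pc_pci_as_mapping_init(Object *owner, MemoryRegion *system_memory,
                                   MemoryRegion *pci_address_space)
{
    /* Lower priority than RAM, so RAM overrides it wherever both are mapped */
    memory_region_add_subregion_overlap(system_memory, 0x0,
                                        pci_address_space, -1);
}
--- cut ---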



* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Alexander Graf @ 2015-07-20 13:44 UTC
  To: Igor Mammedov
  Cc: Marcel Apfelbaum, Peter Maydell, Pavel Fedin, QEMU Developers

On 07/20/15 15:30, Igor Mammedov wrote:
> On Mon, 20 Jul 2015 13:23:45 +0200
> Alexander Graf <agraf@suse.de> wrote:
>
>> On 07/20/15 11:41, Peter Maydell wrote:
>>> On 20 July 2015 at 09:55, Pavel Fedin <p.fedin@samsung.com> wrote:
>>>> [...]
>>> I think the theory we discussed at the time of putting in the PCIe
>>> device was that if we wanted this we'd add support for the other
>>> PCIe memory window (which would then live at somewhere above 4GB).
>>> Alex, can you remember what the idea was?
>> Yes, pretty much. It would give us an upper bound to the amount of RAM
>> that we're able to support, but at least we would be able to support big
>> MMIO regions like for ivshmem.
>>
>> I'm not really sure where to put it though. Depending on your kernel
>> config Linux supports somewhere between 39 and 48 or so bits of phys
>> address space. And I'd rather not crawl into the PCI hole rat hole that
>> we have on x86 ;).
>>
>> We could of course also put it just above RAM - but then our device tree
>> becomes really dynamic and heavily dependent on -m.
> On x86 we've made everything that is not mapped to RAM/MMIO fall through to
> the PCI address space; see pc_pci_as_mapping_init().
>
> So we no longer have explicitly mapped PCI regions there, but we still think
> in terms of the PCI hole/PCI ranges when it comes to the ACPI description of
> the PCI bus, where one needs to specify the ranges available to the bus in
> its _CRS.

Yes, and in the ARM case we pass those in as a region in the device tree,
which gets generated by QEMU :).
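
To make that concrete: create_pcie() in hw/arm/virt.c builds the host
bridge's "ranges" property from the memmap, and a second window would mean
appending one more (64-bit) entry there. Roughly (the *_high variables are
hypothetical, mirroring the existing base_mmio/size_mmio ones):
--- cut ---
    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
                                 1, FDT_PCI_RANGE_IOPORT, 2, 0,
                                 2, base_pio, 2, size_pio,
                                 1, FDT_PCI_RANGE_MMIO, 2, base_mmio,
                                 2, base_mmio, 2, size_mmio,
                                 /* hypothetical second, 64-bit window: */
                                 1, FDT_PCI_RANGE_MMIO_64BIT, 2, base_mmio_high,
                                 2, base_mmio_high, 2, size_mmio_high);
--- cut ---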


Alex


* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Pavel Fedin @ 2015-07-22  6:52 UTC
  To: 'Alexander Graf', 'Peter Maydell'
  Cc: 'QEMU Developers'

 Hello!

> > I think the theory we discussed at the time of putting in the PCIe
> > device was that if we wanted this we'd add support for the other
> > PCIe memory window (which would then live at somewhere above 4GB).
> > Alex, can you remember what the idea was?
> 
> Yes, pretty much. It would give us an upper bound to the amount of RAM
> that we're able to support, but at least we would be able to support big
> MMIO regions like for ivshmem.

 But I currently think that the 'Generic PCI host' specification assumes only
a single MMIO region plus a single PIO region and a single ECAM region, and
that we cannot have two MMIO regions without changing that specification,
which could be problematic. Am I wrong about this?

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia


* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Alexander Graf @ 2015-07-22  7:33 UTC
  To: Pavel Fedin; +Cc: Peter Maydell, QEMU Developers


> On 22.07.2015 at 08:52, Pavel Fedin <p.fedin@samsung.com> wrote:
> 
> Hello!
> 
>>> I think the theory we discussed at the time of putting in the PCIe
>>> device was that if we wanted this we'd add support for the other
>>> PCIe memory window (which would then live at somewhere above 4GB).
>>> Alex, can you remember what the idea was?
>> 
>> Yes, pretty much. It would give us an upper bound to the amount of RAM
>> that we're able to support, but at least we would be able to support big
>> MMIO regions like for ivshmem.
> 
> But I currently think that the 'Generic PCI host' specification assumes only
> a single MMIO region plus a single PIO region and a single ECAM region, and
> that we cannot have two MMIO regions without changing that specification,
> which could be problematic. Am I wrong about this?

At least on Seattle we do have several regions with that driver and I don't expect real hardware to provide a model as simple as ours. So yes, I would be very surprised if there were limitations about the split of regions.
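
For what it's worth, the kernel side should not mind either: host bridge
drivers normally walk every entry of "ranges" with the generic OF helpers, so
nothing in the parsing code limits the number of MMIO windows. A rough sketch
of that pattern (not the exact pci-host-generic code):
--- cut ---
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/of_address.h>

/* Rough sketch: count the MMIO windows described by a host bridge's "ranges" */
static int count_mmio_windows(struct device_node *np)
{
    struct of_pci_range_parser parser;
    struct of_pci_range range;
    int windows = 0;

    if (of_pci_range_parser_init(&parser, np))
        return -EINVAL;

    for_each_of_pci_range(&parser, &range)
        if (range.flags & IORESOURCE_MEM)
            windows++;    /* each "ranges" entry is one window */

    return windows;
}
--- cut ---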


Alex



* Re: [Qemu-devel] [RFC] Virt machine memory map
From: Pavel Fedin @ 2015-07-22  8:42 UTC
  To: 'Alexander Graf'
  Cc: 'Peter Maydell', 'QEMU Developers'

 Hello!

> At least on Seattle we do have several regions with that driver and I don't
> expect real hardware to provide a model as simple as ours. So yes, I would
> be very surprised if there were limitations about the split of regions.

 Thank you for pointing that out, I will check.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia

