Allow multiple non-consecutive ranges for AbstractMemory

Description

At the moment it is only possible to assign a single range to an AbstractMemory [1].

It would be useful to allow multiple non-consecutive ranges by replacing the
Param.AddrRange with a VectorParam.AddrRange. This way we would be able (for example) to
instantiate a single DRAM controller for a set of physical ranges, enabling a more fragmented system
memory map.
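On the gem5 side, the change might look like the following sketch (based on the current AbstractMemory.py [1]; the default value and description are illustrative, not a tested patch):

```python
# src/mem/AbstractMemory.py (sketch, not a tested patch)
class AbstractMemory(ClockedObject):
    type = 'AbstractMemory'
    ...
    # Before: a single assignable range
    # range = Param.AddrRange(AddrRange('128MB'), "Address range")

    # After: a vector of (potentially non-consecutive) ranges
    ranges = VectorParam.AddrRange([AddrRange('128MB')],
                                   "Address ranges (non-consecutive allowed)")
```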
We use the System mem_ranges parameter [2] to hold the global list of ranges; at the moment we iterate over the list and instantiate a different memory controller per address range [3].
If we wanted to assign the entire range list to a single memory controller, we would have to remove the loop over the mem_ranges list and assign the list as a block to a single controller.
In doing so, we would lose the capability of instantiating multiple controllers.
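To make the dual interface concrete, here is a plain-Python sketch (outside gem5; normalise_ranges is a hypothetical helper name) of how config_mem could normalise both input shapes into one list of ranges per controller:

```python
def normalise_ranges(mem_ranges):
    """Accept either a flat list of ranges (one controller per range) or a
    list of lists (one controller per inner list); return one list of
    ranges per controller."""
    if mem_ranges and all(isinstance(r, list) for r in mem_ranges):
        # Already grouped: each inner list maps to one controller.
        return mem_ranges
    # Old-style flat list: wrap each range so every range gets its own
    # controller, preserving the current per-range behaviour.
    return [[r] for r in mem_ranges]

# Old setup: three ranges -> three controllers
print(normalise_ranges(["r1", "r2", "r3"]))      # [['r1'], ['r2'], ['r3']]
# New setup: two groups -> two controllers
print(normalise_ranges([["r1", "r2"], ["r3"]]))  # [['r1', 'r2'], ['r3']]
```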

How can we accommodate both solutions and let the user decide?

OPTION 1

No changes to System.mem_ranges, and no introduction of an extra System parameter.
We make the top-level Python script responsible for specifying the setup.

For example, we could add an extra ranges list parameter to MemConfig.config_mem [4], and provide
the capability of forwarding either a list of ranges or a list of lists of ranges.

If a client script wanted to create memories with the old setup (one range per controller), it would just need to pass the system ranges as follows:
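A possible call (sketched; the ranges keyword argument is the hypothetical new parameter, while the other arguments follow the current config_mem signature [4]):

```python
# One range per controller: forward System.mem_ranges unchanged
MemConfig.config_mem(options, system, ranges=system.mem_ranges)
```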

If a client script wanted to create memories with the new setup (multiple ranges per controller), it would just need to pass the system ranges as follows (note we are passing a list of lists):
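Sketched below; again, the ranges keyword is the hypothetical new config_mem parameter, and range1/range2/range3 stand for AddrRange objects:

```python
# Two controllers: the first covers range1 and range2, the second range3
MemConfig.config_mem(options, system,
                     ranges=[[range1, range2], [range3]])
```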

In other words, every element of the top-level list specifies a distinct controller:

[
  [ range1, range2 ],  ← controller 1
  [ range3 ]           ← controller 2
]

If I wanted to bind two ranges to one controller and one range to another (as described above), it would be something like:

[1]: https://github.com/gem5/gem5/blob/stable/src/mem/AbstractMemory.py#L49
[2]: https://github.com/gem5/gem5/blob/stable/src/sim/System.py#L84
[3]: https://github.com/gem5/gem5/blob/stable/configs/common/MemConfig.py#L197
[4]: https://github.com/gem5/gem5/blob/stable/configs/common/MemConfig.py#L108

Activity

Giacomo Travaglini
March 8, 2021, 9:50 AM

Hi,
That is interesting to us. We are doing this in order to enable non-contiguous ranges in the Arm memory map. Once we do that, we will face the same KVM problems.

Is there a Jira ticket covering your work?

Matt Poremba
March 5, 2021, 7:27 PM

We are more interested in being able to do this for a KVM VM (i.e., exit KVM in the “holes” in the non-contiguous region).

Jason Lowe-Power
March 5, 2021, 3:44 PM

Seems related to some of the stuff that y’all are working on.

Assignee

Giacomo Travaglini

Reporter

Giacomo Travaglini

Components

Priority

Medium