Commit graph

2038 commits

Omar Naji 78dd152a0d mem: add DRAM powerdown current
Change-Id: I763cffe0c69f5ebbbf6a6eb12bec5c13d5d0161d
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:11 +01:00
Wendy Elsasser 1dc16aff24 mem: Add DRAM low-power functionality
Added power-down state transitions to the DRAM controller model.

Added per rank parameter, outstandingEvents, which tracks the number
of outstanding command events and is used to determine when the
controller should transition to a low power state.
The controller will only transition when there are no outstanding events
scheduled and the number of command entries for the given rank is 0.

The outstandingEvents parameter is incremented for every RD/WR burst,
PRE, and REF event scheduled.  ACT is implicitly covered by RD/WR,
since a burst will always issue and complete after a required ACT.
The parameter is decremented when the event is serviced (completed).

The controller will automatically transition to ACT power down,
PRE power down, or SREF.

Transition to ACT power down state scheduled from:
1) The respondEvent, where read data is received from the memory.
   ACT power-down entry will be scheduled when one or more banks are
   open, all commands for the rank have completed (no more commands
   scheduled), and there are no commands in queue for the rank

Transition to PRE power down scheduled from:
1) respondEvent, when all banks are closed, all commands have
   completed, and there are no commands in queue for the rank
2) prechargeEvent when all banks are closed, all commands have
   completed, and there are no commands in queue for the rank
3) refreshEvent, after the refresh is complete when the previous
   state was ACT power-down
4) refreshEvent, after the refresh is complete when the previous
   state was PRE power-down and there are commands in the queue.

Transition to SREF will be scheduled from:
1) refreshEvent, after the refresh completes when the previous
   state was PRE power-down with no commands in queue

Power-down exit commands are scheduled from:
1) The refreshEvent, prior to issuing a refresh
2) doDRAMAccess, to wake up the rank for RD/WR command issue.

Self-refresh exit commands are scheduled from:
1) The next request event, when the queue has commands for the rank
   in the readQueue or there are commands for the rank in the
   writeQueue and the bus state is WRITE.

Change-Id: I6103f660776e36c686655e71d92ec7b5b752050a
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:11 +01:00
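
A minimal sketch of the transition decision described in the commit above,
in plain C++; Rank, outstandingEvents, queuedCommands and nextLowPowerState
are illustrative stand-ins, not the gem5 DRAM controller code:

    enum class PowerState { ACT, PRE, ACT_PDN, PRE_PDN, SREF };

    struct Rank {
        unsigned outstandingEvents = 0; // scheduled RD/WR bursts, PRE, REF
        unsigned queuedCommands = 0;    // command entries for this rank
        unsigned openBanks = 0;         // banks currently open
        PowerState state = PowerState::ACT;
    };

    // Pick the low-power state to enter once a rank goes quiet; return the
    // current state unchanged while the rank is still busy.
    PowerState nextLowPowerState(const Rank &r, bool refreshJustCompleted)
    {
        if (r.outstandingEvents != 0 || r.queuedCommands != 0)
            return r.state;                     // still busy, no transition

        if (r.openBanks > 0)
            return PowerState::ACT_PDN;         // active power-down

        // All banks closed: go to self-refresh only when a refresh has just
        // completed and the rank was already in precharge power-down.
        if (refreshJustCompleted && r.state == PowerState::PRE_PDN)
            return PowerState::SREF;

        return PowerState::PRE_PDN;             // precharge power-down
    }

    int main()
    {
        Rank r;
        r.openBanks = 1;
        bool ok = nextLowPowerState(r, false) == PowerState::ACT_PDN;
        r.openBanks = 0;
        ok = ok && nextLowPowerState(r, false) == PowerState::PRE_PDN;
        r.state = PowerState::PRE_PDN;
        ok = ok && nextLowPowerState(r, true) == PowerState::SREF;
        return ok ? 0 : 1;
    }
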
Wendy Elsasser 7b269f2c95 mem: Add callback to compute stats prior to dump event
The per rank statistics are periodically updated based on
state transition and refresh events.

Add a method to update these when a dump event occurs to
ensure they reflect accurate values.
Specifically, we need to ensure that the low-power state
durations, power, and energy are logged correctly.

Change-Id: Ib642a6668340de8f494a608bb34982e58ba7f1eb
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:11 +01:00
Wendy Elsasser 0dd0d4ee7a mem: Modify drain to ensure banks and power are idled
Add constraint that all ranks have to be in PWR_IDLE
before signaling drain complete

This will ensure that the banks are all closed and the rank
has exited any low-power states.

On suspend, update the power stats to sync the DRAM power logic

The logic maintains the location of the signalDrainDone
method, which is still triggered from either:
1) Read response event
2) Next request event

This ensures that the drain will complete in the READ bus
state and minimizes the changes required.

Change-Id: If1476e631ea7d5999fe50a0c9379c5967a90e3d1
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:11 +01:00
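
A sketch of the drain condition added in the commit above; PwrState and
allRanksDrained are illustrative stand-ins for the controller's actual
per-rank state tracking:

    #include <vector>

    enum class PwrState { IDLE, ACT, ACT_PDN, PRE_PDN, SREF };

    struct Rank { PwrState pwrState = PwrState::IDLE; };

    // Drain only completes once every rank has closed its banks and left
    // any low-power state (modelled here as being back in IDLE).
    bool allRanksDrained(const std::vector<Rank> &ranks)
    {
        for (const auto &r : ranks)
            if (r.pwrState != PwrState::IDLE)
                return false;   // keep draining until this rank is idle
        return true;            // safe to signal drain complete
    }

    int main()
    {
        std::vector<Rank> ranks(4);
        ranks[2].pwrState = PwrState::SREF;   // still in self-refresh
        bool before = allRanksDrained(ranks); // false: rank 2 must wake first
        ranks[2].pwrState = PwrState::IDLE;
        return (!before && allRanksDrained(ranks)) ? 0 : 1;
    }
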
Wendy Elsasser 27665af26d mem: Sort memory commands and update DRAMPower
Add a local variable to store commands to be issued.
These commands are in order within a single bank but will be out
of order across banks & ranks.

A new procedure, flushCmdList, sorts commands across banks / ranks,
and flushes the sorted list, up to curTick(), to DRAMPower.
This is currently called in refresh, once all previous commands are
guaranteed to have completed.  It could be called from other events,
such as the powerEvent, as well.

By only flushing commands up to curTick(), the list will not get out
of sync when flushed at a periodic stats dump (done in a subsequent
patch).

Change-Id: I4ac65a52407f64270db1e16a1fb04cfe7f638851
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:10 +01:00
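
A sketch of the flush-up-to-curTick() idea from the commit above, assuming
a simplified Command record and a plain tick value rather than the DRAMPower
command type and gem5's curTick():

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    using Tick = uint64_t;

    // Hypothetical stand-in for a recorded DRAM command.
    struct Command {
        Tick timestamp;   // when the command was issued
        int  type;        // ACT, RD, WR, PRE, REF, ...
    };

    // Sort the recorded commands (ordered within a bank, interleaved across
    // banks/ranks) and hand everything up to 'now' to the power model.
    // Commands after 'now' stay queued, so a mid-interval flush (e.g. at a
    // stats dump) cannot get ahead of simulated time.
    std::vector<Command> flushCmdList(std::vector<Command> &cmdList, Tick now)
    {
        std::sort(cmdList.begin(), cmdList.end(),
                  [](const Command &a, const Command &b) {
                      return a.timestamp < b.timestamp;
                  });

        auto split = std::upper_bound(
            cmdList.begin(), cmdList.end(), now,
            [](Tick t, const Command &c) { return t < c.timestamp; });

        std::vector<Command> flushed(cmdList.begin(), split);
        cmdList.erase(cmdList.begin(), split);
        return flushed;  // would be forwarded to DRAMPower in the real model
    }

    int main()
    {
        std::vector<Command> list = {{30, 1}, {10, 0}, {20, 2}, {50, 3}};
        auto sent = flushCmdList(list, 25);   // flushes the commands at 10, 20
        return (sent.size() == 2 && list.size() == 2) ? 0 : 1;
    }
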
Omar Naji 61b2b493d4 mem: update DDR3 die revision
Change-Id: I8992ddc1664c3ed4b2d36d8a34e4ce8be113b9de
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
2016-10-13 19:22:10 +01:00
Omar Naji d19dc35b06 mem: add DRAM powerdown timing 2016-10-13 19:22:10 +01:00
Omar Naji 20e6bb0140 mem: make DDR4 x16 2016-10-13 19:22:10 +01:00
Tushar Krishna 22e6f65d72 ruby: Add M5_VAR_USED before variables used only inside assert in garnet2.0.
This removes errors when building gem5.fast
2016-10-06 21:06:00 -04:00
Tushar Krishna dbe8892b76 ruby: garnet2.0
Revamped version of garnet with more optimized single-cycle routers,
more configurability, and cleaner code.
2016-10-06 14:35:22 -04:00
Tushar Krishna b512f4bf71 ruby: remove the original garnet code.
Only garnet2.0 will be supported henceforth.
2016-10-06 14:35:21 -04:00
Tushar Krishna 0962d76827 config: add port directions and per-router delay in topology.
This patch adds port direction names to the links during topology
creation, which can be used for better printed names for the links
or for users to code up their own adaptive routing algorithms.
It also adds support for every router to have an independent latency
value to support heterogeneous topologies with the subsequent
garnet2.0 patch.
2016-10-06 14:35:20 -04:00
Tushar Krishna 003c08fa90 config: make internal links in network topology unidirectional.
This patch makes the internal links within the network topology
unidirectional, thus allowing any deadlock-free routing algorithms to
be specified from the topology itself using weights.
This patch also renames Mesh.py and MeshDirCorners.py to
Mesh_XY.py and MeshDirCorners_XY.py (Mesh with XY routing).
It also adds a Mesh_westfirst.py and CrossbarGarnet.py topologies.
2016-10-06 14:35:18 -04:00
Tushar Krishna aca869bf2d ruby: rename ALPHA_Network_test protocol to Garnet_standalone.
Over the past 6 years, we realized that the protocol is essentially used
to run the garnet network in a standalone manner, and feed standard synthetic
traffic patterns through it.
2016-10-06 14:35:14 -04:00
Brad Beckmann ee78758857 ruby: correct size for partial memory writes
Fixed AbstractController::queueMemoryWritePartial to specify the
correct size for partial memory writes.
2016-09-29 01:06:52 -04:00
Brad Beckmann f0971354c4 mem: minor dprintf fix to abstract mem
print number of bytes written as a decimal number, not hex
2016-09-29 01:06:33 -04:00
David Hashe f3ccaab1e9 cpu, mem, sim: Change how KVM maps memory
Only map memories into the KVM guest address space that are
marked as usable by KVM. Create BackingStoreEntry class
containing flags for is_conf_reported, in_addr_map, and
kvm_map.
2016-08-22 11:41:05 -04:00
Nikos Nikoleris 25ce5db3a3 mem: Print an MSHR without triggering any assertions
Previously, printing an MSHR would trigger an assertion if the MSHR was
not in service or if the targets list was empty. This patch changes
the print function to bypass the accessor functions for
postInvalidate and postDowngrade and avoid the relevant assertions. It
also checks if the targets list is empty before calling print on it.

Change-Id: Ic18bee6cb088f63976112eba40e89501237cfe62
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-08-15 12:00:36 +01:00
Nikos Nikoleris ee7d8fdcb2 mem: Add support for secure packets in the snoop filter
Secure and non-secure data can coexist in the cache, and therefore the
snoop filter should treat packets with secure and non-secure accesses
differently. This patch uses the lower bits of the line address to
keep track of whether the packet is addressing secure memory or not.

Change-Id: I54a5e614dad566a5083582bede86c86896f2c2c1
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12 14:11:45 +01:00
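
A sketch of the secure-bit-in-the-line-address idea from the commit above;
snoopFilterKey and blockBits are hypothetical names, not the gem5 snoop
filter API:

    #include <cstdint>

    using Addr = uint64_t;

    constexpr unsigned blockBits = 6;   // 64-byte lines, low 6 bits unused

    // Fold the secure/non-secure attribute into a spare low bit of the
    // line-aligned address so secure and non-secure lines get distinct
    // snoop-filter entries.
    Addr snoopFilterKey(Addr addr, bool isSecure)
    {
        Addr lineAddr = (addr >> blockBits) << blockBits;  // align to line
        return lineAddr | (isSecure ? 1 : 0);
    }

    int main()
    {
        // Same line, different security attribute, different filter keys.
        return snoopFilterKey(0x1000, true) != snoopFilterKey(0x1000, false)
                   ? 0 : 1;
    }
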
Andreas Hansson 080d4e08d6 mem: Add snoop filter to SystemXBar by default
This patch changes the default behaviour of the SystemXBar, adding a
snoop filter. With the recent updates to the snoop filter allocation
behaviour this change no longer causes problems for the regressions
without caches.

Change-Id: Ibe0cd437b71b2ede9002384126553679acc69cc1
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12 14:11:45 +01:00
Andreas Hansson a23e914519 mem: Use FromCache attribute in snoop filter allocation
This patch improves the snoop filter allocation decisions by not only
looking at whether a port is snooping or not, but also if the packet
actually came from a cache. The issue with only looking at isSnooping
is that the CPU ports, for example, are snooping, but not actually
caching. Previously we ended up incorrectly allocating entries in
systems without caches (such as the atomic and timing quick
regressions). Eventually these misguided allocations caused the snoop
filter to panic due to an excessive size.

On the request path we now include the fromCache check on the packet
itself, and for responses we check if we actually have a snoop-filter
entry.

Change-Id: Idd2dbc4f00c7e07d331e9a02658aee30d0350d7e
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12 14:11:45 +01:00
Andreas Hansson 721efa4d09 mem: Update mostly exclusive policy even further
This patch takes yet another step in maintaining the clusivity, in
that it allows a mostly-inclusive cache to hold on to blocks even when
responding to a ReadExReq or UpgradeReq. Previously the cache simply
invalidated these blocks, but there is no strict need to do so.

The most important part of this patch is that we simply mark the block
clean when satisfying the upstream request where the cache is allowed
to keep the block. The only tricky part of the patch is in the memory
management of deferred snoops, where we need to distinguish the cases
where only the packet was copied (we expected to respond), and the
cases where we created an entirely new packet and request (we kept it
only to replay later).

The code in satisfyRequest is definitely ready for some refactoring
after this.

Change-Id: I201ddc7b2582eaa46fb8cff0c7ad09e02d64b0fc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12 14:11:45 +01:00
Andreas Hansson 94f94fbc55 mem: Update mostly exclusive cache policy to cover more cases
This patch changes how the mostly exclusive policy is enforced to
ensure that we drop blocks when we should. As part of this change, the
actual invalidation due to the clusivity enforcement is moved outside
the hit handling, to a separate method maintainClusivity. For the
timing mode that means we can deal with all MSHR targets before taking
any action and possibly dropping the block. The method
satisfyCpuSideRequest is also renamed satisfyRequest as part of this
change (since we only ever see requests from the cpu-side port).

Change-Id: If6f3d1e0c3e7be9a67b72a55e4fc2ec4a90fd3d2
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
2016-08-12 14:11:45 +01:00
Andreas Hansson 2509553403 mem: Add a FromCache packet attribute
This patch adds a FromCache attribute to the packet, and updates a
number of the existing request commands to reflect that the request
originates from a cache. The attribute simplifies checking if a
request came from a cache or not, and this is used by both the cache
and snoop filter in follow-on patches.

Change-Id: Ib0a7a080bbe4d6036ddd84b46fd45bc7eb41cd8f
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Reviewed-by: Steve Reinhardt <stever@gmail.com>
2016-08-12 14:11:45 +01:00
Andreas Sandberg 26dc0017d2 ruby: Implement support for functional accesses to PIO ranges
There are cases where we want to put boot ROMs on the PIO bus. Ruby
currently doesn't support functional accesses to such memories since
functional accesses are always assumed to go to physical memory. Add
the required support for routing functional accesses to the PIO bus.

Change-Id: Ia5b0fcbe87b9642bfd6ff98a55f71909d1a804e3
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
Reviewed-by: Michael LeBeane <michael.lebeane@amd.com>
2016-08-10 15:27:13 +01:00
David Guillen Fandos 0020662459 mem: Add snoop traffic statistic 2016-07-21 17:19:14 +01:00
Nikos Nikoleris f4cc3a4d20 mem: Remove stale argument from a DPRINTF in the cache code
Change-Id: I70dd11c23b45dfc606ef08233d2e50fcc0817505
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-07-11 10:39:22 +01:00
Matthew Poremba 134824e847 ruby: Fix double statistic registration in garnet
Currently garnet will not run due to double statistic registration of new
stats in ClockedObject. This occurs because a temporary array named 'cls'
is being added as a child to garnet internal and external link SimObjects.
This patch simply renames the temporary array which prevents it from
being added as a child object and avoids the assertion that a statistic
was already registered.

Committed by Jason Lowe-Power <jason@lowepower.com>
2016-07-01 10:31:37 -05:00
Matthias Jung 86e9a6ffec ext: Update DRAMPower
Sync DRAMPower to external tool

This patch syncs the DRAMPower library of gem5 to the external
one on github (https://github.com/ravenrd/DRAMPower) of which
I am a maintainer.

The version used is the commit:
902a00a1797c48a9df97ec88868f20e847680ae6
from 07 May 2016.

Committed by Jason Lowe-Power <jason@lowepower.com>
2016-07-01 10:31:36 -05:00
Abdul Mutaal Ahmad 7cb0c7bd65 mem: different HMC configuration
In this new HMC configuration we have used existing gem5 components,
mainly [SerialLink], [NoncoherentXbar] & [DRAMCtrl], to define 3 different
architectures for HMC.

Highlights

1- It explores 3 different HMC architectures

2- It creates 4 HMC crossbars and attaches 16 vault controllers to them.
This connects the vaults to the serial links

3- The HMCController with round-robin functionality from the previous
version has been removed, and all serial links are now accessible
directly from the user ports

4- The latency previously incorporated by the HMCController is now
added to the SerialLink

Committed by Jason Lowe-Power <jason@lowepower.com>
2016-07-01 09:45:21 -05:00
Nikos Nikoleris 40e4453ddc mem: Fix the snoop filter when there is a downstream addr mapper
The snoop filter handles requests in two steps which precede and
follow the call to send the packet downstream. An address mapper could
possibly change the address of the packet when it is sent downstream,
breaking the snoop filter's assumption that the address is unchanged.

Change-Id: Ib2db755e9ebef4f2f7c0169a46b1b11185ffbe79
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-06-20 15:11:18 +01:00
David Guillen Fandos 70798b1ba0 stats: Fixing regStats function for some SimObjects
Fixing an issue with regStats not calling the parent class method
for most SimObjects in gem5. This causes issues if one adds new
stats to the base class (since they are never initialized properly!).

Change-Id: Iebc5aa66f58816ef4295dc8e48a357558d76a77c
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-06-06 17:16:43 +01:00
Stephan Diestelhorst 589033c94c sim: Call regStats of base-class as well
We want to extend the stats of objects hierarchically and thus it is necessary
to register the statistics of the base-class(es), as well.  For now, these are
empty, but generic stats will be added there.

Patch originally provided by Akash Bagdia at ARM Ltd.
2016-06-06 17:16:43 +01:00
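
The pattern both regStats fixes above rely on, shown with illustrative
classes rather than the real SimObject and Stats machinery:

    #include <iostream>

    // A SimObject-like hierarchy where each regStats() must chain to its
    // base class so that stats added there are initialized as well.
    struct SimObjectLike {
        virtual ~SimObjectLike() = default;
        virtual void regStats() { std::cout << "base stats registered\n"; }
    };

    struct MyObject : SimObjectLike {
        void regStats() override {
            SimObjectLike::regStats();   // the fix: always call the parent
            std::cout << "derived stats registered\n";
        }
    };

    int main() { MyObject o; o.regStats(); }
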
Marco Elver 289a8ebdb1 ruby: Implement SwapReq support
This implements SwapReq for Ruby memory.

A SwapReq should be treated like a write, except that the response
packet contains the overwritten data.

Note that, in particular, the conditional checking for isStore/isLoad
needs to be reversed, as a SwapReq is both.
2016-06-03 16:20:08 -04:00
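
A sketch of the SwapReq semantics described above, reduced to a single word
of memory; handleSwap is an illustrative name, not the Ruby code:

    #include <cstdint>
    #include <iostream>

    // A swap behaves like a write, but the response carries the data that
    // was overwritten.
    uint32_t handleSwap(uint32_t &memoryWord, uint32_t newValue)
    {
        uint32_t oldValue = memoryWord;  // captured before the write
        memoryWord = newValue;           // the write part of the swap
        return oldValue;                 // returned in the response packet
    }

    int main()
    {
        uint32_t word = 0xdeadbeef;
        uint32_t old = handleSwap(word, 0x12345678);
        std::cout << std::hex << old << " " << word << "\n";
        // prints: deadbeef 12345678
    }
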
Andreas Hansson e3e808416f mem: Fix memory leak in handling of deferred snoops
This patch fixes a memory leak where deferred snoop packets never got
deallocated. On the call to MSHR::handleSnoop these snoops were
treated as if a response would be sent, as the MSHR was
pendingModified. Consequently, a copy of the packet was created and
added to the MSHR targets. However, a preceding target to the same
MSHR, originally from a CPU, was serviced before the snoop, and caused
the block to be invalidated. This happens for ReadExReq and
UpgradeReq.

Note that the original snoop will receive a response, just not from
the cache in question, but instead from the cache upstream that issued
the ReadExReq or UpgradeReq.

Change-Id: I4ac012fbc8a46cf693ca390fe9476105d444e6f4
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
2016-05-26 11:56:24 +01:00
Andreas Hansson 4ff4f9c531 mem: Do not set cacheResponding on MSHR snoop if not responding
This patch changes the flow control for MSHR::handleSnoop to ensure
that we only set cacheResponding on the snoop packet if we are
actually responding. This avoids situations where a responder is
stalling indefinitely on a response that never arrives.

Change-Id: I691dd01755b614b30203581aa74fc743b350eacc
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
2016-05-26 11:56:24 +01:00
Andreas Hansson 90de9be2ef mem: Fix MemChecker unique_ptr type mismatch
This patch fixes the type of the unique_ptr instances, to ensure that
the data that is allocated with new[] is also deleted with
delete[]. The issue was highlighted by ASAN.

Change-Id: I2c5510424959d862a9954d83e728d901bb18d309
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Stephan Diestelhorst <stephan.diestelhorst@arm.com>
2016-05-26 11:56:24 +01:00
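
The unique_ptr array/scalar distinction the fix above is about, shown in
isolation rather than in the MemChecker code:

    #include <cstdint>
    #include <memory>

    int main()
    {
        // Wrong: unique_ptr<uint8_t> would call delete on new[]'d memory.
        // std::unique_ptr<uint8_t> bad(new uint8_t[64]);

        // Right: the array form uses delete[] and keeps ASAN quiet.
        std::unique_ptr<uint8_t[]> good(new uint8_t[64]);
        good[0] = 42;
        return good[0] == 42 ? 0 : 1;
    }
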
Nikos Nikoleris a69a0f33cb mem: fix headers include order in the cache related classes
Change-Id: Ia57cc104978861ab342720654e408dbbfcbe4b69
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26 11:56:24 +01:00
Nikos Nikoleris f9d62b63e1 mem: remove redundant check whether the cache forwards snoops
Change-Id: I57b56771086e1e2f512977fb7248d93c171ab925
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26 11:56:24 +01:00
Nikos Nikoleris d68f3577d6 mem: change NULL to nullptr in the cache related classes
Change-Id: I5042410be54935650b7d05c84d8d9efbfcc06e70
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26 11:56:24 +01:00
Nikos Nikoleris 90bf50b4c7 mem: fix the line length in the cache related classes
Change-Id: I6d1feb164a958dde0da87a1cd2698096112c4a82
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-05-26 11:56:24 +01:00
Matthew Poremba 67e93a5846 ruby: Rename pkt to m_pkt so it may be accessed via SLICC
Allow usage of packet class in ruby for convenience purposes. This may be
used to access members of the packet/request class (e.g., via helper
functions) and/or push protocol-specific information to the packet's
SenderState without needing to modify SLICC types and protocols in multiple
locations.
2016-04-26 12:07:51 -04:00
Andreas Hansson 5a1dea51d2 mem: Include WriteLineReq in cache demand stats
Somehow the WriteLineReq was never added to the list of commands
considered demand.
2016-04-21 04:48:20 -04:00
Andreas Hansson a7c94f6e69 mem: Remove unused cache stats
Prune cache stats that are never actually used.
2016-04-21 04:48:19 -04:00
Andreas Hansson 13b9d4215d mem: Deallocate all write-queue entries when sent
This patch removes the write-queue entry tracking previously used for
uncacheable writes. The write-queue entry is now deallocated as soon
as the packet is sent. As a result we also forego the stats for
uncacheable writes. Additionally, there is no longer a need to attach
the write-queue entry to the packet.
2016-04-21 04:48:07 -04:00
Andreas Hansson 6c92ee49f1 mem: Align downstream cache packet creation in atomic and timing
This patch makes the control flow more uniform in atomic and timing,
ultimately making the code easier to understand.
2016-04-21 04:48:06 -04:00
Joel Hestness 39e10ced03 ruby: Fix block_on behavior
Ruby's controller block_on behavior aimed to block MessageBuffer requests into
SLICC controllers when a Locked_RMW was in flight. Unfortunately, this
functionality only partially works: When non-Locked_RMW memory accesses are
issued to the sequencer to an address with an in-flight Locked_RMW, the
sequencer may pass those accesses through to the controller. At the controller,
a number of incorrect activities can occur depending on the protocol. In
MOESI_hammer, for example, an intermediate IFETCH will cause an L1D to L2
transfer, which cannot be serviced, because the block_on functionality blocks
the trigger queue, resulting in a deadlock. Further, if an intermediate store
arrives (e.g. from a separate SMT thread), the sequencer allows the request
through to the controller, and the atomicity of the Locked_RMW may be broken.

To avoid these problems, disallow the Sequencer from passing any memory
accesses to the controller besides Locked_RMW_Write when a Locked_RMW is
in flight.
2016-04-15 12:34:02 -05:00
Bjoern A. Zeeb bc45e930e4 mem: FreeBSD does not provide MAP_NORESERVE either
Like OS X, FreeBSD does not support MAP_NORESERVE.
Handle accordingly and update comment.

Committed by Jason Lowe-Power <power.jg@gmail.com>
2016-04-15 10:02:58 -05:00
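
A portability sketch of the MAP_NORESERVE handling, independent of the gem5
backing-store code; the mapping size and flags here are arbitrary:

    #include <sys/mman.h>
    #include <cstdio>

    int main()
    {
        int flags = MAP_ANON | MAP_PRIVATE;
    #ifdef MAP_NORESERVE
        flags |= MAP_NORESERVE;   // Linux only: don't reserve swap space
    #endif
        void *p = mmap(nullptr, 1 << 20, PROT_READ | PROT_WRITE,
                       flags, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        munmap(p, 1 << 20);
        return 0;
    }
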
Andreas Hansson 8127c4e7bf misc: Fix issues flagged by gcc 6
A few warnings (and thus errors) pop up after gcc 6 added them to -Wall:

1. -Wmisleading-indentation

In the auto-generated code there were instances of if/else blocks that
were not indented to gcc's liking. This is addressed by adding braces.

2. -Wshift-negative-value

gcc is clever enough to consider ~0 a negative constant, and
rightfully complains. This is addressed by using mask(), which
explicitly casts to unsigned before shifting.

That is all. Porting done.
2016-04-13 12:13:44 -04:00
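
A sketch of the mask()-style fix for -Wshift-negative-value; this mask() is
a stand-in with the same intent, not the exact gem5 helper:

    #include <cstdint>
    #include <iostream>

    // ~0 is a negative int, and left-shifting a negative value is
    // undefined; building the mask in unsigned arithmetic avoids it.
    constexpr uint64_t mask(unsigned nbits)
    {
        return nbits >= 64 ? ~0ULL : (1ULL << nbits) - 1;
    }

    int main()
    {
        // Instead of something like (~0 << 4), derive it from mask().
        uint64_t low4 = mask(4);    // 0xf
        uint64_t high = ~mask(4);   // clears the low 4 bits
        std::cout << std::hex << low4 << " " << high << "\n";
    }
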
Rekai Gonzalez Alberquilla af27586fbc mem: Add priority to QueuedPrefetcher
Queued prefetcher entries now carry a priority field. The idea is to
insert packets ordered by priority and then by age.

For the existing algorithms in which priority doesn't make sense, it is set
to 0 for all deferred packets in the queue.
2016-04-07 11:32:38 -05:00
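
A sketch of the ordering rule (priority first, then age), using a
std::priority_queue rather than the prefetch queue's actual container;
DeferredPacket here is a simplified stand-in:

    #include <cstdint>
    #include <iostream>
    #include <queue>
    #include <vector>

    using Tick = uint64_t;

    struct DeferredPacket {
        int32_t priority;
        Tick    tick;     // insertion time, used as age
    };

    // Higher priority issues first; within a priority, older entries first.
    struct ByPriorityThenAge {
        bool operator()(const DeferredPacket &a,
                        const DeferredPacket &b) const {
            if (a.priority != b.priority)
                return a.priority < b.priority;
            return a.tick > b.tick;
        }
    };

    int main()
    {
        std::priority_queue<DeferredPacket, std::vector<DeferredPacket>,
                            ByPriorityThenAge> q;
        q.push({0, 100});
        q.push({2, 300});
        q.push({2, 200});
        // Pops in order: {2,200}, {2,300}, {0,100}
        while (!q.empty()) {
            std::cout << q.top().priority << "@" << q.top().tick << "\n";
            q.pop();
        }
    }
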
Rekai Gonzalez Alberquilla dad7d9277b mem: Handful extra features for BasePrefetcher
Some common functionality added to the base prefetcher, mainly dealing with
extracting the block address, page address, block index inside the page, and
some other information that can be inferred from the block address. This is
used by some prefetching algorithms, and keeping these methods, along with
the block size and other information, in the base class is the sensible
approach.
2016-04-07 11:32:38 -05:00
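
Illustrative versions of the address helpers described above; the constants
and function names are assumptions, not the BasePrefetcher interface:

    #include <cstdint>
    #include <iostream>

    using Addr = uint64_t;

    constexpr Addr blkSize  = 64;     // cache block size in bytes
    constexpr Addr pageSize = 4096;   // page size in bytes

    Addr blockAddress(Addr a)     { return a & ~(blkSize - 1); }
    Addr pageAddress(Addr a)      { return a & ~(pageSize - 1); }
    Addr pageOffset(Addr a)       { return a & (pageSize - 1); }
    Addr blockIndexInPage(Addr a) { return pageOffset(a) / blkSize; }

    int main()
    {
        Addr a = 0x12345ffe;
        std::cout << std::hex << blockAddress(a) << " " << pageAddress(a)
                  << " " << std::dec << blockIndexInPage(a) << "\n";
        // prints: 12345fc0 12345000 63
    }
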
Victor Garcia df5a811833 mem: Add Program Counter to MemTraceProbe 2016-04-07 11:32:38 -05:00
Rekai Gonzalez Alberquilla a3bf4aa6ec mem: Add unused prefetch counter in caches
Added stat to the cache to account for HardPF'ed blocks that are evicted
before being referenced (over-prefetching).
2015-05-27 13:50:01 +01:00
Mitch Hayenga c75ff71139 mem: Remove threadId from memory request class
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.

This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
2016-04-07 09:30:20 -05:00
Andreas Sandberg fd52a63e24 Revert to 74c1e6513bd0 (sim: Thermal support for Linux) 2016-04-07 10:42:07 +01:00
Andreas Sandberg be28d96510 Revert power patch sets with unexpected interactions
The following patches had unexpected interactions with the current
upstream code and have been reverted for now:

e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject

Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>

2016-04-06 19:43:31 +01:00
Mitch Hayenga 8615b27174 mem: Remove threadId from memory request class
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
2016-04-05 12:39:21 -05:00
Akash Bagdia 3ee4957b49 power: Add power states to ClockedObject
Add 4 power states to the ClockedObject and provide the necessary access
functions to check and update the power state. The default power state is
UNDEFINED; it is the responsibility of the respective simulation model to
provide the startup state and any other logic for state changes.

Add a number-of-transitions stat.
Add a distribution of time spent in the clock-gated state.
Add a power state residency stat.

Add a dump callback function to allow stats updates of the distribution and
residency stats.
2014-11-18 14:00:48 +00:00
Andreas Hansson abcbc4e51e mem: Adjust cache queue reserve to more conservative values
The cache queue reserve is there as an overflow to give us enough
headroom based on when we block the cache, and how many transactions
we may already have accepted before actually blocking. The previous
values were probably chosen to be "big enough", when we actually know
that we check the MSHRs after every single allocation, and for the
write buffers we know that we implicitly may need one entry for every
outstanding MSHR.
2016-03-17 09:51:22 -04:00
Andreas Hansson 041ea8107e mem: Create a separate class for the cache write buffer
This patch breaks out the cache write buffer into a separate class,
without affecting any stats. The goal of the patch is to avoid
encumbering the much-simpler write queue with the complex MSHR
handling. In a follow on patch this simplification allows us to
implement write combining.

The WriteQueue gets its own class, but shares a common ancestor, the
generic Queue, with the MSHRQueue.
2016-03-17 09:51:18 -04:00
Stephan Diestelhorst f703160e5a mem, cpu: Add assertions to snoop invalidation logic
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
2015-08-10 11:25:52 +01:00
Andreas Hansson 7958f34797 mem: Ensure that InvalidateReq is not forwarded as ReadExReq
This patch fixes an issue where an InvalidateReq only traversed one
level of the cache hierarchy, and was subsequently turned into a
ReadExReq because it needs a writable copy and the command was not
checked for explicitly.
2016-02-24 04:16:57 -05:00
Andreas Hansson 4619f0ee8b scons: Add missing override to appease clang
Make clang happy...again.
2016-02-23 03:27:20 -05:00
Tony Gutierrez 5a88f0931f ruby: move range change send from RubyPort to derived classes. 2016-02-18 10:50:16 -05:00
Tony Gutierrez 969babd26f ruby: send address ranges from RubyPort 2016-02-17 11:31:54 -05:00
Andreas Hansson 0d50979888 misc: Add missing overrides to appease clang
Since the last round of fixes a few new issues have snuck in. We
should consider switching the regression runs to clang.
2016-02-15 03:40:32 -05:00
Andreas Hansson 407233f5d8 mem: Avoid using invalid iterator in cache lock list traversal
Fix up issue highlighted by Valgrind and the clang Address Sanitizer.
2016-02-15 03:40:04 -05:00
Michael LeBeane b181cea364 ruby: make DMASequencer inherit from RubyPort
This patch essentially rolls back 10518:30e3715c9405 to make RubyPort the
parent class of DMASequencer.  It removes redundant code and restores some
features which were lost when directly inheriting from MemObject.  For
example,
DMASequencer can now communicate to other devices using PIO, which is useful
for memmory-mapped communication between multiple DMADevices.
2016-02-14 20:28:48 -05:00
Andreas Hansson 83a5977481 mem: Be less conservative in clearing load locks in the cache
Avoid being overly conservative in clearing load locks in the cache,
and allow writes to the line if they are from the same context. This
is in line with ALPHA and ARM.
2016-02-10 04:08:25 -05:00
Andreas Hansson 92f021cbbe mem: Move the point of coherency to the coherent crossbar
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.

In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
2016-02-10 04:08:25 -05:00
Andreas Hansson f84ee031cc mem: Align cache behaviour in atomic when upstream is responding
Adopt the same flow as in timing mode, where the caches on the path to
memory get to keep the line (if present), and we use the
responderHadWritable flag to determine if we need to forward the
(invalidating) packet or not.
2016-02-10 04:08:24 -05:00
Andreas Hansson 986214f181 mem: Align how snoops are handled when hitting writebacks
This patch unifies the snoop handling in case of hitting writebacks
with how we handle snoops hitting in the tags. As a result, we end up
using the same optimisation as the normal snoops, where we inform the
downstream cache if we encounter a line in Modified (writable and
dirty) state, which enables us to avoid sending out express snoops to
invalidate any Shared copies of the line. A few regressions
consequently change, as some transactions are sunk higher up in the
cache hierarchy.
2016-02-10 04:08:24 -05:00
Andreas Hansson fbdeb60316 mem: Deduce if cache should forward snoops
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. Less error
prone, and less parameters to worry about.

The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping by removing overrides to the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
2016-02-10 04:08:24 -05:00
Steve Reinhardt f6b828d068 style: eliminate explicit boolean comparisons
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.

Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>>        DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>>            "EXPECT_FALSE(" #expr ")", (expr) == false)<<
2016-02-06 17:21:20 -08:00
Steve Reinhardt 5592798865 style: fix missing spaces in control statements
Result of running 'hg m5style --skip-all --fix-control -a'.
2016-02-06 17:21:19 -08:00
Steve Reinhardt dc8018a5c3 style: remove trailing whitespace
Result of running 'hg m5style --skip-all --fix-white -a'.
2016-02-06 17:21:18 -08:00
Brad Beckmann dcd8eeec3b ruby: removed Write_Only AccessPermission 2016-01-22 10:42:12 -05:00
David Hashe 698866d461 ruby: split CPU and GPU latency stats 2015-07-20 09:15:18 -05:00
Tony Gutierrez 1a7d3f9fcb gpu-compute: AMD's baseline GPU model 2016-01-19 14:28:22 -05:00
Tony Gutierrez 28e353e040 mem: write combining for ruby protocols
This patch adds support for write-combining in ruby.
2016-01-19 14:05:03 -05:00
Tony Gutierrez d658b6e1cc mem: support for gpu-style RMWs in ruby

This patch adds support for GPU-style read-modify-write (RMW) operations in
ruby. Such atomic operations are traditionally executed at the memory controller
(instead of through an L1 cache using cache-line locking).

Currently, this patch works by propagating operation functors through the memory
system.
2016-01-19 13:57:50 -05:00
Blake Hechtman 34fb6b5e35 mem: misc flags for AMD gpu model
This patch adds support to mark memory requests/packets with attributes defined
in HSA, such as memory order and scope.
2015-07-20 09:15:18 -05:00
Steve Reinhardt 8406a54907 mem: fix bug in packet access endianness changes
The new Packet::setRaw() method incorrectly still contained
an htog() conversion.  As a result, calls to the old set()
method (now defined as setRaw(htog(v))) underwent two htog
conversions, which breaks things when htog() is not a no-op.

Interestingly the only test that caught this was a SPARC
boot test, where an IsaFake device with a non-zero return
value was getting swapped twice resulting in a register
getting loaded with 0x100000000000000 instead of 1.
(Good reason for keeping SPARC around, perhaps?)
2016-01-11 16:20:38 -05:00
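
A toy model of the double-conversion bug described above, with a hand-rolled
byte swap standing in for htog() on a host/guest endianness mismatch; the
function names mirror the commit message, not the Packet API:

    #include <cstdint>
    #include <iostream>

    uint64_t bswap64(uint64_t v)
    {
        uint64_t r = 0;
        for (int i = 0; i < 8; ++i)
            r = (r << 8) | ((v >> (8 * i)) & 0xff);
        return r;
    }

    // Pretend host and guest endianness differ, so htog() must byte-swap.
    uint64_t htog(uint64_t v) { return bswap64(v); }

    // The bug: setRaw() also swapped, so set(v) == setRaw(htog(v)) swapped
    // the value twice.
    uint64_t buggySetRaw(uint64_t v) { return htog(v); } // should store v as-is
    uint64_t set(uint64_t v)         { return buggySetRaw(htog(v)); }

    int main()
    {
        // The two swaps cancel: memory ends up holding host-order bytes,
        // which a big-endian guest like SPARC reads back as
        // 0x0100000000000000 instead of 1.
        std::cout << std::hex << set(1) << "\n";   // prints 1, not htog(1)
    }
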
Andreas Hansson 12eb034378 scons: Enable -Wextra by default
Make best use of the compiler, and enable -Wextra as well as
-Wall. There are a few issues that had to be resolved, but they are
all trivial.
2016-01-11 05:52:20 -05:00
Steve Reinhardt 6caa2c9b4e mem: add CacheVerbose debug flag, filter noisy DPRINTFs
Some of the DPRINTFs added to the classic cache in cset 45df88079f04,
while useful to those unfamiliar with the cache code, end up being
noise when you're familiar with the code but are trying to debug tricky
protocol issues.  (Particularly getting two messages from each cache
as it receives a snoop request then declares that there was no match.)

This patch introduces a CacheVerbose debug flag, and moves a subset of
the added DPRINTFs into that category, so that Cache by itself returns
to being a more succinct summary of cache activity.

Also added a CacheAll compound flag to turn on all the cache-related
debug flags (other than CacheTags, which you *really* have to want badly
to turn it on, IMO).
2015-12-31 09:32:09 -08:00
Andreas Hansson c153b669fd mem: Do not rely on the NeedsWritable flag for responses
This patch removes the NeedsWritable flag for all responses, as it is
really only the request that needs a writable response. The response,
on the other hand, should in these cases always provide the line in a
writable state, as indicated by the hasSharers flag not being set.

When we send requests that have NeedsWritable set, the response will
always have the hasSharers flag not set. Additionally, there are cases
where the request did not have NeedsWritable set, and we still get a
writable response with the hasSharers flag not set. This never happens
on snoops, but is used by downstream caches to pass ownership
upstream.

As part of this patch, the affected response types are updated, and
the snoop filter is similarly modified to check only the hasSharers
flag (as it should). A sanity check is also added to the packet class,
asserting that we never look at the NeedsWritable flag for responses.

No regressions are affected.
2015-12-31 09:34:18 -05:00
Andreas Hansson 7fca994d04 mem: Do not allocate space for packet data if not needed
This patch looks at the request and response command to determine if
either actually has any data payload, and if not, we do not allocate
any space for packet data.

The only tricky case is where the command type is changed as part of
the MSHR functionality. In these cases where the original packet had
no data, but the new packet does, we need to explicitly call
allocate().
2015-12-31 09:33:39 -05:00
Andreas Hansson f1ec326be5 mem: Do not alter cache block state on uncacheable snoops
This patch ensures we do not respond with a Modified (dirty and
writable) line if the request is uncacheable, and that the cache
responding retains the line without modifying the state (even if
responding).
2015-12-31 09:33:25 -05:00
Andreas Hansson 0fcb376e5f mem: Make cache terminology easier to understand
This patch changes the name of a bunch of packet flags and MSHR member
functions and variables to make the coherency protocol easier to
understand. In addition the patch adds and updates lots of
descriptions, explicitly spelling out assumptions.

The following name changes are made:

* the packet memInhibit flag is renamed to cacheResponding

* the packet sharedAsserted flag is renamed to hasSharers

* the packet NeedsExclusive attribute is renamed to NeedsWritable

* the packet isSupplyExclusive is renamed to responderHadWritable

* the MSHR pendingDirty is renamed to pendingModified

The cache states, Modified, Owned, Exclusive, Shared are also called
out in the cache and MSHR code to make it easier to understand.
2015-12-31 09:32:58 -05:00
Tony Gutierrez a317764577 ruby: slicc: have a static MachineType
This patch is imported from reviewboard patch 2551 by Nilay.
This patch moves from a dynamically defined MachineType to a statically
defined one.  This patch is needed because a dynamically defined
type prevents us from having types for which no machine definition may
exist.

The following changes have been made:
i. each machine definition now uses a type from the MachineType enumeration
instead of any random identifier.  This required changing the grammar and the
*.sm files.
ii. MachineType enumeration defined statically in RubySlicc_Exports.sm.
* * *
normal protocol fixes for nilay's parser machine type fix
2015-07-20 09:15:18 -05:00
Tony Gutierrez 3f68884c0e ruby: slicc: remove support for single machine, multiple types
This patch is imported from reviewboard patch 2550 by Nilay.
It was possible to specify multiple machine types with a single state machine.
This seems unnecessary and is being removed.
2015-07-20 09:15:18 -05:00
Andreas Hansson f5c4a45889 mem: Explicitly check MSHR snoops for cases not dealt with
Add a sanity check to make it explicit that we currently do not allow
an I/O coherent agent to directly issue writes into the coherent part
of the memory system (it has to go via a cache, and get transformed
into a read ex, upgrade or invalidation).
2015-12-28 11:14:18 -05:00
Andreas Hansson f6525ff221 mem: Remove unused cache squash functionality
This patch removes the unused squash function from the MSHR queue, and
the associated (and also unused) threadNum member from the MSHR.
2015-12-28 11:14:16 -05:00
Andreas Hansson fbf3987c7b mem: Avoid unecessary checks when creating HardPFReq in cache
The checks made before sending out a HardPFReq were unnecessarily
complex, and checked for cases that never occur. This patch
tidies it up.
2015-12-28 11:14:15 -05:00
Andreas Hansson b93a9d0d51 mem: Do not use sender state to track forwarded snoops in cache
This patch changes how the cache tracks which snoops are forwarded,
and which ones are created locally. Previously the identification was
based on an empty sender state of a specific class, but this method
fails to distinguish which cache actually attached the sender
state. Instead we use the same mechanism as the crossbar, and keep
track of the requests that have outstanding snoops.
2015-12-28 11:14:14 -05:00
Andreas Hansson 036263e280 mem: Fix cache sender state handling and add clarification
This patch addresses a bug in how the cache attached the MSHR as a
sender state. Rather than overwriting any existing sender state it now
pushes a new one. The handling of upward snoops is also clarified.
2015-12-28 11:14:10 -05:00
Andreas Hansson 97887eb6dc mem: Fix memory allocation bug in deferred snoop handling
This patch fixes a corner case in the deferred snoop handling, where
requests ended up being used by multiple packets with different
lifetimes, and inadvertently got deleted while they were still in use.
2015-12-17 17:07:11 -05:00
David Hashe f5f04c3120 mem: add request types for acquire and release
Add support for acquire and release requests.  These synchronization operations
are commonly supported by several modern instruction sets.
2015-07-20 09:15:18 -05:00
Brad Beckmann 173a786921 ruby: more flexible ruby tester support
This patch allows the ruby random tester to use ruby ports that may only
support instr or data requests.  This patch is similar to a previous changeset
(8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets.
This current patch implements the support in a more straight-forward way.
Since retries are now tested when running the ruby random tester, this patch
splits up the retry and drain check behavior so that RubyPort children, such
as the GPUCoalescer, can perform those operations correctly without having to
duplicate code.  Finally, the patch also includes better DPRINTFs for
debugging the tester.
2015-07-20 09:15:18 -05:00
Tony Gutierrez 413f3088ea mem: remove acq/rel cmds from packet and add mem fence req 2015-12-09 22:56:31 -05:00