The following patches had unexpected interactions with the current
upstream code and have been reverted for now:
e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
In general, the ThreadID parameter is unnecessary in the memory system,
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or the
ContextID offset from the base ContextID for a CPU.
Add 4 power states to the ClockedObject and provide the necessary access
functions to check and update the power state. The default power state is
UNDEFINED; it is the responsibility of the respective simulation model to
provide the startup state and any other logic for state changes.
Add a stat for the number of power-state transitions.
Add a distribution of time spent in the clock-gated state.
Add a power-state residency stat.
Add a dump callback function to allow the distribution and residency
stats to be updated on a stats dump.
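As a sketch of the resulting interface (state names follow the patch
description; the stat-updating helpers and member names are illustrative):

    // four power states in addition to the UNDEFINED default
    enum PwrState {
        UNDEFINED,       // until the model sets a real state
        ON,              // powered and clocked
        CLK_GATED,       // clock gated, state retained
        SRAM_RETENTION,  // SRAMs in retention, logic off
        OFF              // fully off
    };

    PwrState pwrState() const { return _currPwrState; }

    void
    pwrState(PwrState new_state)
    {
        // illustrative: count the transition and account the
        // residency of the state we are leaving
        numPwrStateTransitions++;
        recordResidency(_currPwrState, curTick() - _prevStateChange);
        _prevStateChange = curTick();
        _currPwrState = new_state;
    }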
mem: Adjust cache queue reserve to more conservative values
The cache queue reserve is there as an overflow to give us enough
headroom based on when we block the cache, and how many transactions
we may already have accepted before actually blocking. The previous
values were probably chosen to be "big enough", when in fact we know
that the MSHRs are checked after every single allocation, and that
the write buffers implicitly may need one entry for every
outstanding MSHR.
This patch breaks out the cache write buffer into a separate class,
without affecting any stats. The goal of the patch is to avoid
encumbering the much-simpler write queue with the complex MSHR
handling. In a follow-on patch this simplification allows us to
implement write combining.
The WriteQueue gets its own class, but shares a common ancestor, the
generic Queue, with the MSHRQueue.
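The resulting class structure is roughly the following sketch (member
details elided):

    // generic ancestor: allocation, lookup by block address, and
    // ready-time ordering shared by both queues
    template <class Entry>
    class Queue
    {
      public:
        Entry *findMatch(Addr blk_addr, bool is_secure) const;
        Entry *getNext() const;
        // ...
    };

    // retains all the complex MSHR handling: targets, deferred
    // targets, snoops, promotion
    class MSHRQueue : public Queue<MSHR> { /* ... */ };

    // much simpler: just writebacks waiting to go downstream
    class WriteQueue : public Queue<WriteQueueEntry> { /* ... */ };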
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
This patch fixes an issue where an InvalidationReq only traversed one
level of the cache hierarchy, and was subsequently turned into a
ReadExReq because it needs a writable copy and the command was not
explicitly checked for.
This patch essentially rolls back 10518:30e3715c9405 to make RubyPort the
parent class of DMASequencer. It removes redundant code and restores some
features which were lost when directly inheriting from MemObject. For
example, DMASequencer can now communicate with other devices using PIO,
which is useful for memory-mapped communication between multiple
DMADevices.
Avoid being overly conservative in clearing load locks in the cache,
and allow writes to the line if they are from the same context. This
is in line with ALPHA and ARM.
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.
In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
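A simplified sketch of the resulting decision (the actual conditions are
more involved; the helper name is illustrative):

    // can the crossbar sink (turn around) this packet rather than
    // forwarding it to the downstream memory controller?
    bool
    sinkPacket(const PacketPtr pkt) const
    {
        // a cache with ownership has already committed to respond,
        // so the memory controller must not see the request
        if (pointOfCoherency && pkt->cacheResponding())
            return true;

        // pure coherency packets (invalidations and upgrades) carry
        // no data to or from memory and are turned around here
        if (pointOfCoherency && !pkt->isRead() && !pkt->isWrite())
            return true;

        return false;
    }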
Adopt the same flow as in timing mode, where the caches on the path to
memory get to keep the line (if present), and we use the
responderHadWritable flag to determine if we need to forward the
(invalidating) packet or not.
This patch unifies the snoop handling in case of hitting writebacks
with how we handle snoops hitting in the tags. As a result, we end up
using the same optimisation as the normal snoops, where we inform the
downstream cache if we encounter a line in Modified (writable and
dirty) state, which enables us to avoid sending out express snoops to
invalidate any Shared copies of the line. A few regressions
consequently change, as some transactions are sunk higher up in the
cache hierarchy.
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. Less error
prone, and fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping by removing overrides to the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.
Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>> DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>> "EXPECT_FALSE(" #expr ")", (expr) == false)<<
mem: support for gpu-style RMWs in ruby
This patch adds support for GPU-style read-modify-write (RMW) operations in
ruby. Such atomic operations are traditionally executed at the memory controller
(instead of through an L1 cache using cache-line locking).
Currently, this patch works by propagating operation functors through the
memory system.
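The shape of such a functor, with an add as a concrete example (a sketch
of the approach, not necessarily the exact interface):

    #include <cstdint>

    // the memory controller invokes the functor on the raw bytes of
    // the addressed location, performing the read-modify-write there
    struct AtomicOpFunctor
    {
        virtual void operator()(uint8_t *p) = 0;
        virtual ~AtomicOpFunctor() {}
    };

    template <typename T>
    struct AtomicOpAdd : public AtomicOpFunctor
    {
        T a;
        AtomicOpAdd(T _a) : a(_a) {}
        void operator()(uint8_t *p) override { *(T *)p += a; }
    };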
The new Packet::setRaw() method incorrectly still contained
an htog() conversion. As a result, calls to the old set()
method (now defined as setRaw(htog(v))) underwent two htog
conversions, which breaks things when htog() is not a no-op.
Interestingly the only test that caught this was a SPARC
boot test, where an IsaFake device with a non-zero return
value was getting swapped twice resulting in a register
getting loaded with 0x100000000000000 instead of 1.
(Good reason for keeping SPARC around, perhaps?)
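Spelling the failure mode out (values as in the SPARC example above):

    // set() is now defined in terms of setRaw():
    //     set(v)  ->  setRaw(htog(v))
    // but setRaw() itself still applied htog(), so the stored bytes
    // were htog(htog(v)), i.e. the value in *host* order rather than
    // guest order. A big-endian SPARC guest reading the little-endian
    // host representation of 1 thus sees 0x100000000000000.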
Some of the DPRINTFs added to the classic cache in cset 45df88079f04,
while useful to those unfamiliar with the cache code, end up being
noise when you're familiar with the code but are trying to debug tricky
protocol issues. (Particularly getting two messages from each cache
as it receives a snoop request then declares that there was no match.)
This patch introduces a CacheVerbose debug flag, and moves a subset of
the added DPRINTFs into that category, so that Cache by itself returns
to being a more succinct summary of cache activity.
Also added a CacheAll compound flag to turn on all the cache-related
debug flags (other than CacheTags, which you *really* have to want
badly to turn on, IMO).
This patch removes the NeedsWritable flag for all responses, as it is
really only the request that needs a writable response. The response,
on the other hand, should in these cases always provide the line in a
writable state, as indicated by the hasSharers flag not being set.
When we send requests that have NeedsWritable set, the response will
always have the hasSharers flag not set. Additionally, there are cases
where the request did not have NeedsWritable set, and we still get a
writable response with the hasSharers flag not set. This never happens
on snoops, but is used by downstream caches to pass ownership
upstream.
As part of this patch, the affected response types are updated, and
the snoop filter is similarly modified to check only the hasSharers
flag (as it should). A sanity check is also added to the packet class,
asserting that we never look at the NeedsWritable flag for responses.
No regressions are affected.
This patch looks at the request and response command to determine if
either actually has any data payload, and if not, we do not allocate
any space for packet data.
The only tricky case is where the command type is changed as part of
the MSHR functionality. In these cases where the original packet had
no data, but the new packet does, we need to explicitly call
allocate().
This patch ensures we do not respond with a Modified (dirty and
writable) line if the request is uncacheable, and that the cache
responding retains the line without modifying the state (even if
responding).
This patch changes the name of a bunch of packet flags and MSHR member
functions and variables to make the coherency protocol easier to
understand. In addition the patch adds and updates lots of
descriptions, explicitly spelling out assumptions.
The following name changes are made:
* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified
The cache states, Modified, Owned, Exclusive, Shared are also called
out in the cache and MSHR code to make it easier to understand.
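In terms of the packet interface, the renames look roughly as follows
(the old method names are inferred from the flag list above, so treat
them as illustrative):

    // a responding cache marks the packet
    pkt->setCacheResponding();       // was assertMemInhibit()
    pkt->setHasSharers();            // was assertShared()

    // and the interested parties check
    bool responding = pkt->cacheResponding();      // was memInhibitAsserted()
    bool writable   = pkt->needsWritable();        // was needsExclusive()
    bool had_wr     = pkt->responderHadWritable(); // was isSupplyExclusive()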
This patch is imported from reviewboard patch 2551 by Nilay.
This patch moves from a dynamically defined MachineType to a statically
defined one. The need for this patch was felt since a dynamically defined
type prevents us from having types for which no machine definition may
exist.
The following changes have been made:
i. each machine definition now uses a type from the MachineType enumeration
instead of any random identifier. This required changing the grammar and the
*.sm files.
ii. MachineType enumeration defined statically in RubySlicc_Exports.sm.
* * *
normal protocol fixes for nilay's parser machine type fix
This patch is imported from reviewboard patch 2550 by Nilay.
It was possible to specify multiple machine types with a single state machine.
This seems unnecessary and is being removed.
Add a sanity check to make it explicit that we currently do not allow
an I/O coherent agent to directly issue writes into the coherent part
of the memory system (it has to go via a cache, and get transformed
into a read ex, upgrade or invalidation).
This patch changes how the cache tracks which snoops are forwarded,
and which ones are created locally. Previously the identification was
based on an empty sender state of a specific class, but this method
fails to distinguish which cache actually attached the sender
state. Instead we use the same mechanism as the crossbar, and keep
track of the requests that have outstanding snoops.
This patch addresses a bug in how the cache attached the MSHR as a
sender state. Rather than overwriting any existing sender state it now
pushes a new one. The handling of upward snoops is also clarified.
This patch fixes a corner case in the deferred snoop handling, where
requests ended up being used by multiple packets with different
lifetimes, and inadvertently got deleted while they were still in use.
This patch allows the ruby random tester to use ruby ports that may only
support instr or data requests. This patch is similar to a previous changeset
(8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets.
This current patch implements the support in a more straightforward way.
Since retries are now tested when running the ruby random tester, this patch
splits up the retry and drain check behavior so that RubyPort children, such
as the GPUCoalescer, can perform those operations correctly without having to
duplicate code. Finally, the patch also includes better DPRINTFs for
debugging the tester.
This patch adds support to optionally capture the virtual address and ASID
for load/store instructions in the elastic traces. If they are present in
the traces, the Trace CPU will set those fields of the request during replay.
The sanity check, while generally useful for exposing memory system bugs,
may be spurious with respect to GPU workloads, which may generate many more
requests than typical CPU workloads. The large number of requests generated
by the GPU may cause the req/resp queues to back up, thus queueing more than
100 packets.
This patch adds the necessary commands and cache functionality to
allow clean writebacks. This functionality is crucial, especially when
having exclusive (victim) caches. For example, if read-only L1
instruction caches are not sending clean writebacks, there will never
be any spills from the L1 to the L2. At the moment the cache model
defaults to not sending clean writebacks, and this should possibly be
re-evaluated.
The implementation of clean writebacks relies on a new packet command
WritebackClean, which acts much like a Writeback (renamed
WritebackDirty), and also much like a CleanEvict. On eviction of a
clean block the cache either sends a clean evict, or a clean
writeback, and if any copies are still cached upstream the clean
evict/writeback is dropped. Similarly, if a clean evict/writeback
reaches a cache where there are outstanding MSHRs for the block, the
packet is dropped. In the typical case though, the clean writeback
allocates a block in the downstream cache, and marks it writable if
the evicted block was writable.
The patch changes the O3_ARM_v7a L1 cache configuration and the
default L1 caches in config/common/Caches.py
This patch adds a parameter to control the cache clusivity, that is if
the cache is mostly inclusive or exclusive. At the moment there is no
intention to support strict policies, and thus the options are: 1)
mostly inclusive, or 2) mostly exclusive.
The choice of policy guides the behaviour on a cache fill, and a new
helper function, allocOnFill, is created to encapsulate the decision
making process. For the timing mode, the decision is annotated on the
MSHR on sending out the downstream packet, and in atomic we directly
pass the decision to handleFill. We (ab)use the tempBlock in cases
where we are not allocating on fill, leaving the rest of the cache
unaffected. Simple and effective.
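A minimal sketch of the helper, assuming a two-valued clusivity
parameter (the exact set of allocating commands here is a
simplification):

    // decide whether a fill allocates a real block, or just uses the
    // tempBlock and leaves the cache contents untouched
    bool
    allocOnFill(MemCmd cmd) const
    {
        // a mostly-inclusive cache allocates on every fill, whereas
        // a mostly-exclusive cache only allocates for requests that
        // explicitly want the line cached at this level
        return clusivity == Enums::mostly_incl ||
            cmd == MemCmd::HardPFReq || cmd == MemCmd::HardPFResp;
    }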
This patch also makes it more explicit that multiple caches are
allowed to consider a block writable (this is the case
also before this patch). That is, for a mostly inclusive cache,
multiple caches upstream may also consider the block exclusive. The
caches considering the block writable/exclusive all appear along the
same path to memory, and from a coherency protocol point of view it
works due to the fact that we always snoop upwards in zero time before
querying any downstream cache.
Note that this patch does not introduce clean writebacks. Thus, for
clean lines we are essentially removing a cache level if it is made
mostly exclusive. For example, lines from the read-only L1 instruction
cache or table-walker cache are always clean, and simply get dropped
rather than being passed to the L2. If the L2 is mostly exclusive and
does not allocate on fill it will thus never hold the line. A follow
on patch adds the clean writebacks.
The patch changes the L2 of the O3_ARM_v7a CPU configuration to be
mostly exclusive (and stats are affected accordingly).
This patch optimises the handling of writebacks and clean evictions
when using a snoop filter. Instead of snooping into the caches to
determine if the block is cached or not, simply set the status based
on the snoop-filter result.
Instead of conservatively enforcing order for all packets, which may
negatively impact the simulated-system performance, this patch updates
the packet queue such that it only applies the restriction if there
are already packets with the same address in the queue.
The basic need for the order enforcement is due to coherency
interactions where requests/responses to the same cache line must not
over-take each other. We rely on the fact that any packet that needs
order enforcement will have a block-aligned address. Thus, there is no
need for the queue to know about the cacheline size.
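A sketch of the check (transmitList being the queue's list of deferred
packets; names illustrative):

    // only enforce ordering behind packets to the same (block-
    // aligned) address; unrelated packets may still overtake
    bool
    forceOrder(Addr addr) const
    {
        for (const auto &dp : transmitList) {
            if (dp.pkt->getAddr() == addr)
                return true;
        }
        return false;
    }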
This patch enforces insertion order transmission of packets on the
response path in the cache. Note that the logic to enforce order is
already present in the packet queue, this patch simply turns it on for
queues in the response path.
Without this patch, there are corner cases where a request-response is
faster than a response-response forwarded through the cache. This
violation of queuing order causes problems in the snoop filter leaving
it with inaccurate information. This causes assert failures in the
snoop filter later on.
A follow on patch relaxes the order enforcement in the packet queue to
limit the performance impact.
This patch updates the I/O devices, bridge and simple memory to take
the packet header and payload delay into account in their latency
calculations. In all cases we add the header delay, i.e. the
accumulated pipeline delay of any crossbars, and the payload delay
needed for deserialisation of any payload.
Due to the additional unknown latency contribution, the packet queue
of the simple memory is changed to use insertion sorting based on the
time stamp. Moreover, since the memory hands out exclusive (non
shared) responses, we also need to ensure ordering for reads to the
same address.
This patch aligns how the memory-system slaves, i.e. the various
memory controllers and the bridge, identify and deal with sinking of
inhibited packets that are only useful within the coherent part of the
memory system.
In the future we could shift the onus to the crossbar, and add a
parameter "is_point_of_coherence" that would allow it to sink the
aforementioned packets.
This patch changes the CleanEvict command type to not be considered a
write. Initially it was made a zero-sized write to match the writeback
command, but as things developed it became clear that it causes more
problems than it solves. For example, the memory modules (and bridge)
should not consider the CleanEvict as a write, but instead discard
it. With this patch it will be neither a read, nor write, and as it
does not need a response the slave will simply sink it.
This patch unifies how we deal with delayed packet deletion, where the
receiving slave is responsible for deleting the packet, but the
sending agent (e.g. a cache) is still relying on the pointer until the
call to sendTimingReq completes. Previously we used a mix of a
deletion vector and a construct using unique_ptr. With this patch we
ensure all slaves use the latter approach.
The CoherentXBar currently doesn't check its queued slave ports when
receiving a functional snoop. This caused data corruption in cases
where a modified cache line is forwarded between two caches.
Add the required functional calls into the queued slave ports.
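Conceptually the fix looks like this (checkFunctional standing in for
whatever lookup the queued ports expose):

    // before snooping the caches, check whether a response queued in
    // one of the slave ports already satisfies the functional access
    for (const auto &p : slavePorts) {
        if (p->checkFunctional(pkt))
            return;
    }
    // otherwise continue with the normal snoop path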
This changeset adds a serial link model for the Hybrid Memory Cube (HMC).
SerialLink is a simple variation of the Bridge class, with the ability to
account for the latency of packet serialization. Also trySendTiming has been
modified to correctly model bandwidth.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch models a simple HMC Controller. It simply schedules the incoming
packets to HMC Serial Links using a round robin mechanism. This patch should
be applied in series with other patches modeling a complete HMC device.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch addresses the upgrading of deferred targets in the MSHR,
and makes it clearer by explicitly calling out what is happening
(deferred targets are promoted if we get exclusivity without asking
for it).
This patch adds explicit overrides as this is now required when using
"-Wall" with clang >= 3.5, the latter now part of the most recent
XCode. The patch consequently removes "virtual" for those methods
where "override" is added. The latter should be enough of an
indication.
As part of this patch, a few minor issues that clang >= 3.5 complains
about are also resolved (unused methods and variables).
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap
(and similar) abstractions, as these are no longer needed with gcc 4.7
and clang 3.1 as minimum compiler versions.
If a cache entry permission was previously set to NotPresent, but the entry was
not deleted, a following cache allocation can cause the entry to be leaked by
setting the entry pointer to a newly allocated entry. To eliminate this
possibility, check if the new entry is different from the old one, and if so,
delete the old one.
In RubyPort::ruby_eviction_callback, prior changes fixed a memory leak caused
by instantiating separate packets for each port that the eviction was forwarded
to. That change, however, left the instantiated request leaking. Allocate
it on the stack to avoid the leak.
Recent changes to memory access queuing allocate requests for packets sent to
memory controllers, but did not free the requests. Delete them to avoid leaks.
Changes to the RubyMemoryControl removed the dequeue function, which deleted
MemoryNode instances. This results in leaked MemoryNode instances. Correctly
delete these instances.
This patch fixes a use-after-delete issue in the packet probe points
by adding a PacketInfo struct to retain the key fields before passing
the packet onwards. We want to probe the packet after it is
successfully sent, but by that time the fields may be modified, and
the packet may even be deleted.
Amazingly enough the issue has gone undetected for months, and only
recently popped up in our regressions.
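The idea, as a sketch: copy the fields of interest before the packet is
handed on, and notify the probe listeners with the copy (the exact field
set is illustrative):

    // retain the key fields of a packet at the time of sending; the
    // packet itself may be modified or even deleted afterwards
    struct PacketInfo
    {
        MemCmd cmd;
        Addr addr;
        uint32_t size;

        explicit PacketInfo(const PacketPtr pkt)
            : cmd(pkt->cmd), addr(pkt->getAddr()), size(pkt->getSize())
        { }
    };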
This patch fixes issues in the interactions between deferred snoops
and WriteLineReq. More specifically, the patch addresses an issue
where deferred snoops caused assertion failures when being serviced on
the arrival of an InvalidateResp. The response packet was perceived to
be invalidating, when actually it is not for the cache that sent out
the original invalidation request.
This patch changes the tracking of ports in the snoop filter to use
local dense port IDs so that we can have 64 snooping ports (rather
than crossbar slave ports). This is achieved by adding a simple
remapping vector that translates the actual port IDs into the local
slave IDs used in the SnoopMask.
Ultimately this patch allows us to scale to much larger systems
without introducing a hierarchy of crossbars.
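A sketch of the remapping (member names illustrative):

    // translate a (sparse) crossbar port ID into a dense local ID
    // used to index bits in the SnoopMask
    PortID
    portToLocalId(PortID port_id)
    {
        if (localIds.size() <= static_cast<size_t>(port_id))
            localIds.resize(port_id + 1, InvalidPortID);
        if (localIds[port_id] == InvalidPortID)
            localIds[port_id] = nextLocalId++;
        return localIds[port_id];
    }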
This patch adds a snoop filter to the L2XBar. For now we refrain from
globally adding a snoop filter to the SystemXBar, since the latter is
also used in systems without caches. In scenarios without caches the
snoop filter will not see any writeback/clean evicts from the CPU
ports, despite the fact that they are snooping. To avoid inadvertent
use of the snoop filter in these cases we leave it out for now.
A size check is added to the snoop filter, merely to ensure it does
not grow beyond the total capacity of the caches above it. The size
has to be set manually, and a value of 8 MByte is chosen as a
suitably high default.
This patch introduces a private member storing the iterator from the
lookupRequest call, such that it can be re-used when the request
eventually finishes. The method previously called updateRequest is
renamed finishRequest to make it more clear that the two functions
must be called together.
This patch mirrors the logic in timing mode which sends up snoops to
check for cached copies before sending CleanEvicts and Writebacks down
the memory hierarchy. In case there is a copy in a cache above,
discard CleanEvicts and set the BLOCK_CACHED flag in Writebacks so
that writebacks do not reset the cache residency bit in the snoop
filter below.
This patch adds the functionality to properly track CleanEvicts and
Writebacks in the snoop filter. Previously there were no CleanEvicts, and
Writebacks did not send up snoops to ensure there were no copies in
caches above. Hence a writeback could never erase an entry from the
snoop filter.
When a CleanEvict message reaches a snoop filter, it confirms that the
BLOCK_CACHED flag is not set and resets the bits corresponding to the
CleanEvict address and port it arrived on. If none of the other peer
caches have (or have requested) the block, the snoop filter forwards
the CleanEvict to lower levels of memory. In case of a Writeback
message, the snoop filter checks if the BLOCK_CACHED flag is not set
and only then resets the bits corresponding to the Writeback
address. If any of the other peer caches have (or have requested) the
same block, the snoop filter sets the BLOCK_CACHED flag in the
Writeback before forwarding it to lower levels of the memory hierarchy.
This patch prevents the snoop filter from creating items for requests
originating from non-snooping ports. The allocation decision is thus
based both on the cacheability of the line, and the snooping status of
the source port. Ultimately we should check if the source of the
packet is caching, since also the CPU ports are snooping (but not
allocating). Thus, at the moment we rely on the snoop filter being
used together with caches.
The patch also transitions to use the Packet::getBlockAddr in
determining the line address.
This patch introduces the concept of a snoop latency. Given the
requirement to snoop and forward packets in zero time (due to the
coherency mechanism), the latency is accounted for later.
On a snoop, we establish the latency, and later add it to the header
delay of the packet. To allow multiple caches to contribute to the
snoop latency, we use a separate variable in the packet, and then take
the maximum before adding it to the header delay.
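In code the accumulation looks roughly like this (snoopDelay being the
dedicated field on the packet; the latency expression is illustrative):

    // each cache on the snoop path keeps the maximum snoop latency
    // seen so far ...
    pkt->snoopDelay = std::max<uint32_t>(pkt->snoopDelay,
                                         lookupLatency * clockPeriod());

    // ... and whoever finally queues the packet folds it into the
    // header delay and resets it
    pkt->headerDelay += pkt->snoopDelay;
    pkt->snoopDelay = 0;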
This patch ensures that the snoop-filter latency only contributes to
the packet latency, and not to the crossbar throughput/occupancy. In
essence we treat the snoop-filter lookup as pipelined.
Created the following HBM configurations:
1) HBM gen1 (x128/CH), 2Gb die, 4H stack, 1Gbps, 8 channels
2) HBM gen2 (x64/PC), 8Gb die, 4H stack, 1Gbps, 16 pseudo-channels
The configuration values are based on:
- The HBM gen1 public JEDEC spec
- Publicly released data from MemCon presentations
- Timing extrapolated from existing LPDDR configurations
Will adjust once specs become available.
Changeset 4872dbdea907 replaced Address by Addr, but did not update the
print statements, so addresses which were previously printed in hex along
with their line address were now printed in decimal. This patch adds a
function printAddress(Addr) that can be used to print an address in hex
along with its line address. This function has been put to use in some
places; at other places, the code has been changed to print just the
address in hex.
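A sketch of such a helper (the exact formatting may differ):

    #include <sstream>
    #include <string>

    // print an address in hex together with its line-aligned address
    std::string
    printAddress(Addr addr)
    {
        std::stringstream out;
        out << "[0x" << std::hex << addr
            << ", line 0x" << makeLineAddress(addr) << "]";
        return out.str();
    }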