This patch fixes an issue where an InvalidationReq only traversed one
level of the cache hierarchy and was subsequently turned into a
ReadExReq, because the packet needs a writable copy and the command was
not checked for explicitly.
Fix the check for the protobuf file frequency being different from the
global frequency.
The ASCII encoder script is also fixed, and the example trace used in
the regressions is updated.
Add a callback handler for the NoMali reset callback. This callback is
called whenever the GPU is reset using the register interface or the
NoMali API. The callback can be used to override ID registers using
the raw register API.
Refactor and cleanup the NoMaliGpu class:
* Use a std::map instead of a switch block to map the parameter enum
  describing the GPU type to a NoMali type (see the sketch after this
  list).
* Remove redundant NoMali handle from the interrupt callback.
* Make callbacks and API wrappers protected instead of private to
enable future extensions.
* Wrap remaining NoMali API calls.
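The map-based lookup could look roughly like the following sketch. The
GpuType/NoMaliGpuType enumerators and the lookupGpuType helper are
illustrative assumptions, not the actual gem5/NoMali identifiers:

    // Minimal sketch of replacing a switch block with a std::map lookup.
    #include <map>
    #include <stdexcept>

    enum class GpuType { T60x, T62x, T760 };        // parameter enum (assumed)
    enum class NoMaliGpuType { T60x, T62x, T760 };  // NoMali library type (assumed)

    static const std::map<GpuType, NoMaliGpuType> gpuTypeMap{
        { GpuType::T60x, NoMaliGpuType::T60x },
        { GpuType::T62x, NoMaliGpuType::T62x },
        { GpuType::T760, NoMaliGpuType::T760 },
    };

    NoMaliGpuType
    lookupGpuType(GpuType param)
    {
        const auto it = gpuTypeMap.find(param);
        if (it == gpuTypeMap.end())
            throw std::invalid_argument("Unknown GPU type");
        return it->second;
    }

Adding a new GPU model then only requires a new map entry rather than
another switch case.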
Ship aarch32 and aarch64 device trees with gem5. We currently ship
device trees as a part of the gem5 Linux kernel repository. This makes
tracking hard since device trees are supposed to be platform dependent
rather than kernel dependent (Linux considers device trees to be a
stable kernel ABI). It also makes code sharing between aarch32 and
aarch64 impossible.
This changeset implements a set of device trees for the new
VExpress_GEM5_V1 platform. The platform is described in a shared file
that is separate from the memory/CPU description. Due to differences
in how secondary CPUs are initialized, aarch32 and aarch64 use
different base files describing CPU nodes and the machine's
compatibility property.
Memory fences are now flagged as Global Memory only to avoid
oversubscribing resources.
Flat instructions now check whether the Shared Memory resource is busy
to avoid oversubscribing resources.
All WaitClass resources now use cycles (not ticks) to register the
number of pipe stages between Scoreboard and Execute, to be consistent
with the instruction scheduling logic, which has always used clock
cycles.
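As a rough illustration of the unit change (the Tick/Cycles aliases and
the clock period value below are assumptions, not the GPU model's
actual types or values):

    // Sketch: the Scoreboard-to-Execute depth is registered in cycles,
    // the same unit the instruction scheduler reasons in.
    #include <cassert>
    #include <cstdint>

    using Tick   = std::uint64_t;
    using Cycles = std::uint64_t;

    constexpr Tick clockPeriod = 500;   // ticks per cycle (assumed value)

    Cycles ticksToCycles(Tick t) { return t / clockPeriod; }

    int main()
    {
        const Cycles scoreboardToExecute = 4;   // pipe stages, in cycles
        // Registering the latency in ticks only agrees with the scheduler
        // after an explicit conversion back to cycles.
        assert(ticksToCycles(scoreboardToExecute * clockPeriod)
               == scoreboardToExecute);
        return 0;
    }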
This patch essentially rolls back 10518:30e3715c9405 to make RubyPort the
parent class of DMASequencer. It removes redundant code and restores some
features which were lost when directly inheriting from MemObject. For
example, the DMASequencer can now communicate with other devices using
PIO, which is useful for memory-mapped communication between multiple
DMADevices.
This patch adds a --debug-end flag to main.py so that debug output can
be stopped at a specified tick while allowing the simulation to
continue. It is useful in situations where you would like to produce a
trace for a region of interest while still collecting stats for the
entire run. This is in contrast to the existing --debug-break flag,
which terminates the simulation at that tick.
Avoid being overly conservative in clearing load locks in the cache,
and allow writes to the line if they are from the same context. This
is in line with ALPHA and ARM.
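A minimal sketch of the relaxed policy, using assumed names rather than
the actual cache code: a store clears a load-lock only when it comes
from a different context.

    #include <cstdint>
    #include <list>

    using Addr = std::uint64_t;
    using ContextID = int;

    struct LockRecord
    {
        ContextID contextId;    // context that issued the load-locked
        Addr addr;              // locked address

        // A write by ctx to a invalidates this lock only if it comes from
        // another context; the locking context may write its own line.
        bool
        clearedBy(ContextID ctx, Addr a) const
        {
            return addr == a && contextId != ctx;
        }
    };

    void
    handleWrite(std::list<LockRecord> &locks, ContextID ctx, Addr a)
    {
        locks.remove_if([&](const LockRecord &l) {
            return l.clearedBy(ctx, a);
        });
    }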
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.
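The forwarding decision can be sketched as follows; isPointOfCoherency,
cacheResponding and isCoherencyOnly are simplified stand-ins for the
actual crossbar and packet interface:

    struct Packet
    {
        bool cacheResponding;   // an upstream cache with ownership will respond
        bool isCoherencyOnly;   // e.g. an invalidation or upgrade carrying no data
    };

    // Decide whether the crossbar sends a request further downstream.
    bool
    forwardDownstream(const Packet &pkt, bool isPointOfCoherency)
    {
        if (!isPointOfCoherency)
            return true;        // previous behaviour: always forward

        if (pkt.cacheResponding)
            return false;       // an upstream cache has committed to respond

        if (pkt.isCoherencyOnly)
            return false;       // turn invalidations/upgrades around here

        return true;            // only normal reads and writes reach memory
    }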
In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
Adopt the same flow as in timing mode, where the caches on the path to
memory get to keep the line (if present), and we use the
responderHadWritable flag to determine if we need to forward the
(invalidating) packet or not.
This patch unifies the snoop handling in case of hitting writebacks
with how we handle snoops hitting in the tags. As a result, we end up
using the same optimisation as the normal snoops, where we inform the
downstream cache if we encounter a line in Modified (writable and
dirty) state, which enables us to avoid sending out express snoops to
invalidate any Shared copies of the line. A few regressions
consequently change, as some transactions are sunk higher up in the
cache hierarchy.
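A minimal sketch of the shared optimisation, with assumed flag and
member names: on a snoop hit in Modified state, the responding cache
marks the packet so the downstream cache does not issue express snoops
for Shared copies.

    struct CacheBlock
    {
        bool writable;
        bool dirty;
    };

    struct SnoopPacket
    {
        bool cacheResponding = false;   // "I will supply the data"
        bool hasSharers      = false;   // Shared copies may exist upstream
    };

    void
    handleSnoopHit(const CacheBlock &blk, SnoopPacket &snoop)
    {
        if (blk.writable && blk.dirty) {
            // Modified: this is the only valid copy, so no express snoops
            // are needed to invalidate Shared copies elsewhere.
            snoop.cacheResponding = true;
        } else {
            snoop.hasSharers = true;
        }
    }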
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. This is less
error prone and leaves fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping, by removing overrides to the snoop request
handler so that snoop requests will panic via the default MasterPort
implementation.
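In outline, the decision reduces to asking the peer port, roughly as in
this simplified sketch (the Port/isSnooping/forwardSnoops names mirror
gem5, but the classes below are stand-ins):

    class Port
    {
      public:
        virtual ~Port() = default;
        virtual bool isSnooping() const { return false; }   // default: no snooping
    };

    class SnoopingPort : public Port
    {
      public:
        bool isSnooping() const override { return true; }   // e.g. a CPU D-side port
    };

    class Cache
    {
        const Port &cpuSidePeer;   // port connected on the CPU side

      public:
        explicit Cache(const Port &peer) : cpuSidePeer(peer) {}

        // Forward memory-side snoops upwards only if the peer snoops.
        bool forwardSnoops() const { return cpuSidePeer.isSnooping(); }
    };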
Due to insufficient build dependencies, the checkpoint tags might not
get updated; this commit fixes that. Given the uncommon nature of the
build target, regenerating tags.cc is a fairly clean solution. Since
SCons hashes file contents, it won't recompile anything unless a new
checkpoint upgrader is actually added.
The previous implementation did a pair of nested RMW operations,
which isn't compatible with the way that locked RMW operations are
implemented in the cache models. It was convenient though in that
it didn't require any new micro-ops, and supported cmpxchg16b using
64-bit memory ops. It also worked in AtomicSimpleCPU where
atomicity was guaranteed by the core and not by the memory system.
It did not work with timing CPU models though.
This new implementation defines new 'split' load and store micro-ops
which allow a single memory operation to use a pair of registers as
the source or destination, then uses a single ldsplit/stsplit RMW
pair to implement cmpxchg. This patch requires support for 128-bit
memory accesses in the ISA (added via a separate patch) to support
cmpxchg16b.
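The architectural effect the ldsplit/stsplit pair implements can be
sketched as follows; this models only the cmpxchg16b semantics on a
pair of 64-bit halves, not the actual micro-op sequence:

    #include <cstdint>

    struct U128 { std::uint64_t lo, hi; };   // the two 64-bit register halves

    // Returns true (ZF set) if mem matched expected and was replaced by
    // desired; otherwise expected is updated with the memory contents.
    bool
    cmpxchg16b(U128 &mem, U128 &expected, const U128 &desired)
    {
        // In the patch, the read and write below form a single locked RMW:
        // ldsplit loads both halves, stsplit stores both halves.
        if (mem.lo == expected.lo && mem.hi == expected.hi) {
            mem = desired;
            return true;
        }
        expected = mem;
        return false;
    }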
Although the cache models support wider accesses, the ISA descriptions
assume that (for the most part) memory operands are integer types,
which makes it difficult to define instructions that do memory accesses
larger than 64 bits.
This patch adds some generic support for memory operands that are arrays
of uint64_t, and specifically a 'u2qw' operand type for x86 that is an
array of 2 uint64_ts (128 bits). This support is unused at this point,
but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE
memory accesses will also be rewritten to use this support.
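Conceptually, a 'u2qw' operand behaves like an array of two uint64_t
covering the 16 bytes of memory; a rough standalone sketch (readU2qw is
an illustrative helper, not the generated ISA code):

    #include <array>
    #include <cstdint>
    #include <cstring>

    using U2qw = std::array<std::uint64_t, 2>;   // 2 x 64 bits = 128 bits

    // Copy a 16-byte memory operand into the two-element array.
    U2qw
    readU2qw(const void *mem)
    {
        U2qw data;
        std::memcpy(data.data(), mem, sizeof(data));
        return data;
    }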
Support for 128-bit accesses could also have been added using the gcc
__int128_t extension, which would have been less disruptive. However,
although clang also supports __int128_t, it's still non-standard.
Also, more importantly, this approach creates a path to defining
256- and 512-bit operands as well, which will be useful for eventual
AVX support.
MemOperand variables were being initialized to 0
"to avoid 'uninitialized variable' errors" but these
no longer seem to be a problem (with the exception of
one use case in POWER that is arguably broken and
easily fixed here).
Getting rid of the initialization is necessary to
set up a subsequent patch which extends memory
operands to possibly not be scalars, making the
'= 0' initialization no longer feasible.
Writing 16 bytes from an 8-byte source value is a bad idea.
This doesn't appear to have broken anything, but showed up
as spurious differences when tracediffing runs.
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.
Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>> DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>> "EXPECT_FALSE(" #expr ")", (expr) == false)<<
In the process of trying to get rid of an '== false' comparison,
it became apparent that a slightly more involved solution was
needed. Split this out into its own changeset since it's not
a totally trivial local change like the others.
Added a new Verifier object to check for and fix spacing
between if/while/for and following paren.
Restructured Verifier class to make it easier to add
new subclasses, particularly by using a global list of
verifiers to auto-generate command line options and
simplify the invocation loop.
The functions in these scripts were apparently folded into style.py but the
old scripts were orphaned without being deleted. Get rid of them so their
existence is no longer confusing.
Update NoMali from external revision 9adf9d6 to f08e0a5 and bring in
the following changes:
f08e0a5 Add support for tracking address space state
f11099e Fix job slot register handling when running new jobs
b28c98e api: Add a reset callback
29ac4c3 tests: Update gitignore to cover all future test cases
1c6b893 Propagate reset calls to all job slots
8f8ec15 Remove redundant reg vector in MMU
85d90d2 tests: Fix incorrect extern declaration
mem: support for gpu-style RMWs in ruby
This patch adds support for GPU-style read-modify-write (RMW) operations in
ruby. Such atomic operations are traditionally executed at the memory controller
(instead of through an L1 cache using cache-line locking).
Currently, this patch works by propagating operation functors through
the memory system.
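The idea can be sketched as a functor carried with the request and
applied at its destination; AtomicOpFunctor matches the spirit of the
patch, while AtomicOpAdd and applyAtMemory below are illustrative
assumptions:

    #include <cstdint>
    #include <cstring>

    struct AtomicOpFunctor
    {
        virtual void operator()(std::uint8_t *mem) = 0;   // modify memory in place
        virtual ~AtomicOpFunctor() = default;
    };

    struct AtomicOpAdd : AtomicOpFunctor
    {
        std::uint32_t operand;
        explicit AtomicOpAdd(std::uint32_t v) : operand(v) {}

        void
        operator()(std::uint8_t *mem) override
        {
            std::uint32_t val;
            std::memcpy(&val, mem, sizeof(val));   // GPU-style atomic add
            val += operand;
            std::memcpy(mem, &val, sizeof(val));
        }
    };

    // At the memory controller: apply the functor carried by the request.
    void
    applyAtMemory(std::uint8_t *backingStore, std::uint64_t offset,
                  AtomicOpFunctor &op)
    {
        op(backingStore + offset);
    }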
Cset 72046b9b3323 changed the metavar for --debug-start from TIME to
TICK but didn't notice that the comment "must be in ticks" is now
redundant.
For historical reasons, the ExecContext interface had a single
function, readMem(), that did two different things depending on
whether the ExecContext supported atomic memory mode (i.e.,
AtomicSimpleCPU) or timing memory mode (all the other models).
In the former case, it actually performed a memory read; in the
latter case, it merely initiated a read access, and the read
completion did not happen until later when a response packet
arrived from the memory system.
This led to some confusing things, including timing accesses
being required to provide a pointer for the return data even
though that pointer was only used in atomic mode.
This patch splits this interface, adding a new initiateMemRead()
function to the ExecContext interface to replace the timing-mode
use of readMem().
For consistency and clarity, the readMemTiming() helper function
in the ISA definitions is renamed to initiateMemRead() as well.
For x86, where the access size is passed in explicitly, we can
also get rid of the data parameter at this level. For other ISAs,
where the access size is determined from the type of the data
parameter, we have to keep the parameter for that purpose.
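A simplified sketch of the split interface; the real ExecContext
methods also take flags and differ per ISA, so the signatures below are
stand-ins:

    #include <cstdint>

    using Addr  = std::uint64_t;
    using Fault = int;              // simplified stand-in for the Fault type

    struct ExecContext
    {
        virtual ~ExecContext() = default;

        // Atomic mode: perform the read now and fill in data.
        virtual Fault readMem(Addr addr, std::uint8_t *data, unsigned size) = 0;

        // Timing mode: only initiate the access; no data pointer is needed
        // because the result arrives later with the response packet.
        virtual Fault initiateMemRead(Addr addr, unsigned size) = 0;
    };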