Ruby's controller block_on behavior aimed to block MessageBuffer requests into
SLICC controllers when a Locked_RMW was in flight. Unfortunately, this
functionality only partially works: When non-Locked_RMW memory accesses are
issued to the sequencer to an address with an in-flight Locked_RMW, the
sequencer may pass those accesses through to the controller. At the controller,
a number of incorrect behaviors can occur depending on the protocol. In
MOESI_hammer, for example, an intermediate IFETCH causes an L1D-to-L2
transfer that cannot be serviced, because the block_on functionality blocks
the trigger queue, resulting in a deadlock. Further, if an intermediate store
arrives (e.g. from a separate SMT thread), the sequencer lets the request
through to the controller, and the atomicity of the Locked_RMW may be broken.
To avoid these problems, prevent the Sequencer from passing any memory
accesses other than Locked_RMW_Write to the controller while a Locked_RMW is
in flight.
Remove the assertion that we always add a delta larger than zero, as that
does not hold when we hit it in the 'PMU reset cycle counter to zero' case.
Committed by Jason Lowe-Power <power.jg@gmail.com>
A few warnings (and thus errors) pop up after they were added to -Wall:
1. -Wmisleading-indentation
In the auto-generated code there were instances of if/else blocks that
were not indented to gcc's liking. This is addressed by adding braces.
2. -Wshift-negative-value
gcc is clever enough to consider ~0 a negative constant, and
rightfully complains. This is addressed by using mask(), which
explicitly casts to unsigned before shifting (see the sketch below).
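A minimal sketch of the warning and the shape of the fix; the mask() below
is a simplified stand-in for gem5's helper in src/base/bitfield.hh, not the
generated code itself:

    #include <cassert>
    #include <cstdint>

    // int bad = ~0 << 4;  // ~0 is the int -1; shifting a negative value
    //                     // is what -Wshift-negative-value flags.

    constexpr uint64_t
    mask(unsigned nbits)
    {
        // All-ones over nbits bits, computed in an unsigned type so that
        // no negative value is ever shifted.
        return nbits >= 64 ? ~0ULL : (1ULL << nbits) - 1;
    }

    int
    main()
    {
        assert(mask(4) == 0xf);
        assert(mask(64) == ~0ULL);
        return 0;
    }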
That is all. Porting done.
Queued prefetcher entries now carry a priority field. The idea is to
add packets ordered by priority and then by age.
For the existing algorithms in which priority doesn't make sense, it is set
to 0 for all deferred packets in the queue.
Some common functionality is added to the base prefetcher, mainly for
extracting the block address, the page address, the index of a block within
its page, and other information that can be inferred from the block address.
This is used by several prefetching algorithms, so the base class, which
already holds the block size and related information, is the sensible place
for these methods (see the sketch below).
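A minimal sketch of the kind of helpers this refers to; the member names are
illustrative rather than the exact gem5 ones, and both sizes are assumed to
be powers of two:

    #include <cassert>
    #include <cstdint>

    using Addr = uint64_t;

    struct PrefetcherAddrHelpers
    {
        Addr blkSize;    // cache block size in bytes (power of two)
        Addr pageBytes;  // page size in bytes (power of two)

        // Address of the cache block containing addr.
        Addr blockAddress(Addr addr) const { return addr & ~(blkSize - 1); }

        // Address of the page containing addr.
        Addr pageAddress(Addr addr) const { return addr & ~(pageBytes - 1); }

        // Index of the block within its page.
        Addr pageBlockIndex(Addr addr) const
        { return (addr & (pageBytes - 1)) / blkSize; }
    };

    int
    main()
    {
        PrefetcherAddrHelpers h{64, 4096};
        assert(h.blockAddress(0x12345) == 0x12340);
        assert(h.pageAddress(0x12345) == 0x12000);
        assert(h.pageBlockIndex(0x12345) == 0xd);  // 0x345 / 64 == 13
        return 0;
    }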
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a CPU.
This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
This patch adds a configurable indirect branch predictor that can be indexed
by a combination of GHR and path history hashes. Implements the functionality
described in:
"Target prediction for indirect jumps" by Chang, Hao, and Patt
http://dl.acm.org/citation.cfm?id=264209
This is a re-spin of fb9d142 after the revert (bd1c6789).
The extant BTB code doesn't hash on the thread ID but does check the
thread ID for 'BTB hits'. This results in one thread of a multi-threaded
workload taking a BTB entry, and all other threads missing for the same
branch.
This changeset updates the dot output to bail out if it is unable to
resolve the voltage or clock domains (which will cause it to raise an
AttributeError). Additionally, the DVFS dot output is disabled by
default for speed purposes.
Minor fixup for 0aeca8f.
The following patches had unexpected interactions with the current
upstream code and have been reverted for now:
e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a CPU.
This patch adds a configurable indirect branch predictor that can be indexed
by a combination of GHR and path history hashes. Implements the functionality
described in:
"Target prediction for indirect jumps" by Chang, Hao, and Patt
http://dl.acm.org/citation.cfm?id=264209
The extant BTB code doesn't hash on the thread ID but does check the
thread ID for 'BTB hits'. This results in one thread of a multi-threaded
workload taking a BTB entry, and all other threads missing for the same
branch.
This patch adds some basic support for power models in gem5.
The power interface is defined so it can interact with thermal
models as well. It implements a simple power evaluator that
can be used for simple power models that express power in the
form of a math expression. These expressions can use stats
within the same SimObject (or down its hierarchy) and some
magic variables such as "temp" for temperature.
In future patches we will extend this functionality to allow
slightly more complex expressions.
The interface allows it to be extended with other kinds of models.
Finally, the thermal model is updated to use the power usage as input.
Add four power states to ClockedObject and provide the access functions
needed to check and update the power state. The default power state is
UNDEFINED; it is the responsibility of the respective simulation model to
provide the startup state and any other logic for state changes.
Add a stat for the number of transitions, a distribution of the time spent
in the clock-gated state, and a power-state residency stat.
Add a dump callback function so that the distribution and residency stats
can be updated when stats are dumped.
This patch enables Linux to read the temperature using hwmon infrastructure.
In order to use this in your gem5 you need to compile the kernel using the
following configs:
CONFIG_HWMON=y
CONFIG_SENSORS_VEXPRESS=y
And a proper dts file (containing an entry such as):
dcc {
    compatible = "arm,vexpress,config-bus";
    arm,vexpress,config-bridge = <&v2m_sysreg>;

    temp@0 {
        compatible = "arm,vexpress-temp";
        arm,vexpress-sysreg,func = <4 0>;
        label = "DCC";
    };
};
This patch adds basic thermal support to gem5. It models energy dissipation
through a circuit equivalent, which allows us to use RC networks.
This lays down the basic infrastructure to do so, but it does not "work" due
to the lack of power models. For now some hardcoded number is used as a PoC.
The solver is embedded in the patch.
This patch adds a secondary dot output file which shows the DVFS domains. This
has been done separately for now to avoid cluttering the already existing
diagram. Due to the way that the clock domains are assigned to components in
gem5, this output must be generated after the C++ objects have been
instantiated. This further motivates the need to generate this file separately
to the current dot output, and not to replace it entirely.
This patch adds some additional information when draining the system, which
allows the user to debug which SimObject(s) in the system are failing to
drain. It is only enabled for builds with tracing enabled and is subject to
the Drain debug flag being set at runtime.
This patch addresses an issue with the unserialization of clock
domains: the performance level was not restored due to a bug in the
code, which detected the post-unserialize update as superfluous. This
patch splits the setting of the clock domain into two parts. The
original interface of perfLevel is retained, but the actual update
takes place in signalPerfLevelUpdate, which is private to the class.
The perfLevel method checks whether the new performance level differs
from the previous one, and will only call signalPerfLevelUpdate if
there is a change. Therefore, the
performance level is only updated, and voltage domains notified, if
there is an actual change. The split functionality allows
signalPerfLevelUpdate to be called by startup() to explicitly force an
update post unserialization.
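A minimal sketch of the split, using the two method names from this
description; the real SrcClockDomain code differs in detail:

    class SrcClockDomainSketch
    {
      public:
        void
        perfLevel(unsigned perf_level)
        {
            if (perf_level == _perfLevel)
                return;  // superfluous update: skip the notification
            _perfLevel = perf_level;
            signalPerfLevelUpdate();
        }

        void
        startup()
        {
            // After unserialization the restored level equals the stored
            // one, so force the update explicitly.
            signalPerfLevelUpdate();
        }

      private:
        // Updates clock periods and notifies the voltage domain.
        void signalPerfLevelUpdate() {}

        unsigned _perfLevel = 0;
    };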
This patch adds the ability for the simulator to query the number of
instructions a CPU has executed so far per hw-thread. This can be used
to enable more flexible periodic events such as taking checkpoints
starting 1s into simulation and X instructions thereafter.
The openFlagTable and mmapFlagTables for emulated Linux
platforms are basically identical, but are specified
repetitively for every platform. Use a common file
that gets included for each platform so that we only
have one copy, making them more consistent and simplifying
changes (like adding #ifdefs).
In the process, made some minor fixes that slipped through
due to previous inconsistencies, and added more #ifdefs
to try to fix building on alternative hosts.
The style refactor change (style: Refactor the style checker as a
Python package) moved region.py from src/python/m5/util/ to
util/style/. The SConscript update accidentally got lost in that
commit. This commit removes region.py from src/python/SConscript.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Refactor the style checker into a Python module that can be reused by
command line tools that integrate with git. In particular:
* Create a style package in util
* Move style validators from style.py to style/validators.py.
* Move style verifiers from style.py to style/verifiers.py.
* Move utility functions (sort_includes, region handling,
file_types) into the style package
* Move generic code from style.py to style/style.py.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Steve Reinhardt <steve.reinhardt@amd.com>
--HG--
rename : util/style.py => util/hgstyle.py
rename : util/sort_includes.py => util/style/sort_includes.py
This changeset adds an option to force the kvm-based CPUs to always
synchronize the gem5 thread context representation on entry/exit into
the kernel. This is very useful for debugging. Unfortunately, it is
also the only way to get reliable register contents when using remote
gdb functionality. The long-term solution for the latter would be to
implement a kvm-specific thread context.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Alexandru Dutu <alexandru.dutu@amd.com>
Refactor the TLB and page table walker test interface to use a dynamic
registration mechanism. Instead of patching a couple of empty methods
to wire up a TLB tester, this change allows such testers to register
themselves using the setTestInterface() method.
Libraries are loaded into the process address space using the
mmap system call. Conveniently, this happens to be a good
time to update the process symbol table with the library's
incoming symbols so we handle the table update from within the
system call.
This works just like an application's normal symbols. The only
difference between a dynamic library and a main executable is
when the symbol table update occurs. The symbol table update for
an executable happens at program load time and is finished before
the process ever begins executing. Since dynamic linking happens
at runtime, the symbol loading happens after the library is
first loaded into the process address space. The library binary
is examined at this time for a symbol section and that section
is parsed for symbol types with specific bindings (global,
local, weak). Subsequently, these symbols are added to the table
and are available for use by gem5 for things like trace
generation.
Checkpointing should work just as it did previously. The address
space (and therefore the library) will be recorded and the symbol
table will be entirely recorded. (It's not possible to do anything
clever like checkpoint a program and then load the program back
with different libraries with LD_LIBRARY_PATH, because the
library becomes part of the address space after being loaded.)
The mmapGrowsDown() method was a static method on the OperatingSystem
class (and derived classes), which worked OK for the templated syscall
emulation methods, but made it hard to access elsewhere. This patch
moves the method to be a virtual function on the LiveProcess method,
where it can be overridden for specific platforms (for now, Alpha).
This patch also changes the value of mmapGrowsDown() from being false
by default and true only on X86Linux32 to being true by default and
false only on Alpha, which seems closer to reality (though in reality
most people use ASLR and this doesn't really matter anymore).
In the process, also got rid of the unused mmap_start field on
LiveProcess and the unused OperatingSystem mmapGrowsUp variable.
For O3, which has a stat that counts reg reads, there is an additional
reg read per mmap() call since there's an arg we no longer ignore.
Otherwise, stats should not be affected.
The structure definition only had the open system call flag set in mind when
it was named, so we rename it here with the intention of using it to define
additional tables to translate flags for other system calls in the future.
Breaks the debug output from system calls into two levels: Base and Verbose.
A macro is added specifically for system calls which allows developers to
easily add new debug messages in a consistent manner. The macro also contains
a field to print thread IDs along with the CPU ID.
The cache queue reserve is there as an overflow to give us enough
headroom based on when we block the cache, and how many transactions
we may already have accepted before actually blocking. The previous
values were probably chosen to be "big enough", when we actually know
that we check the MSHRs after every single allocation, and for the
write buffers we know that we implicitly may need one entry for every
outstanding MSHR.
This patch breaks out the cache write buffer into a separate class,
without affecting any stats. The goal of the patch is to avoid
encumbering the much-simpler write queue with the complex MSHR
handling. In a follow on patch this simplification allows us to
implement write combining.
The WriteQueue gets its own class, but shares a common ancestor, the
generic Queue, with the MSHRQueue.
It's apparently not widely known that our scons scripts allow you to
put the build directory wherever you want; not only does it not have
to be immediately under the root of your repo, it doesn't even have
to be underneath the root at all. (For example, sometimes it's useful
to build on a local disk if your repo is on a slow NFS mount.)
I point this out because this functionality has been broken for close
to two years but no one seems to have noticed yet. This patch fixes
an assumption, introduced in changeset be0e1724eb39 (May 09 2014),
that the build dir would be immediately under the top level of the
repo, preventing builds anywhere else.
Add a voltage() function which returns the current voltage
for a given clocked object. It is handy for power models and
similar components that need to retrieve the voltage. The
frequency() function is already there, so I see no reason for
not having this one too.
fu_pool and inst_queue were using -1 for "no such FU" and -2 for "all those
FUs are busy at the moment" when requesting an FU and replying. This
patch introduces the new constants NoCapableFU and NoFreeFU respectively.
In addition, the condition (idx == -2 || idx != -1) is equivalent to
(idx != -1), so this patch also simplifies that (see the sketch below).
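A minimal sketch of the change; the constant names are from this patch,
while the surrounding condition is paraphrased:

    // Named constants replacing the magic return values.
    static const int NoCapableFU = -1;  // no FU implements this op class
    static const int NoFreeFU = -2;     // capable FUs exist, all are busy

    // Before: if (idx == -2 || idx != -1) { ... }
    // The first test is redundant, since idx == -2 implies idx != -1.
    // After:  if (idx != NoCapableFU) { ... }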
We can't/shouldn't use KVM after a fork since the child and parent
probably point to the same VM. Knowing the exact effects of this is
hard, but they are likely to be messy. We also disconnect the
performance counters attached to the guest. This works around what
seems to be a kernel bug where spurious SIGIOs get delivered to the
forked child process.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
[andreas.sandberg@arm.com: Fatal if entering KVM in child process ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
This changeset adds forking capabilities to the gem5 python scripts. A fork
method is added to simulate.py. This method is responsible for forking the
simulator itself, and will direct all output files to a new output directory
based on the fork sequence number. The default name of the output directory is
the same as the parent with the suffix ".fN" added where N is the fork sequence
number. The fork method provides the option to specify if the system should be
drained prior to forking, or not. By default the system is drained to ensure
that there are no in-flight transactions.
When forking the simulator, the fork method returns the PID of the child
process, or returns 0 if running in the child. This is in line with the standard
Python forking interface.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
[andreas.sandberg@arm.com: Updated to comply with modern draining semantics ]
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
This changeset adds support for notifying the disk images that the simulator has
been forked. We need to disable the saving of the CoW disk image from the child
process, and we need to make sure that systems which use a raw disk image are
not allowed to fork to avoid two or more gem5 processes writing to the same disk
image.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
When forking a gem5 process, some objects need to clean up resources
(mainly file descriptors) shared between the child and the parent of
the fork. This changeset adds the notifyFork() method to Drainable,
which is called in the child process.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
This changeset adds support for changing the simulator output
directory. This can be useful when the simulation goes through several
stages (e.g., a warming phase, a simulation phase, and a verification
phase) since it allows the output from each stage to be located in a
different directory. Relocation is done by calling core.setOutputDir()
from Python or simout.setOutputDirectory() from C++.
This change affects several parts of the design of the gem5's output
subsystem. First, files returned by an OutputDirectory instance (e.g.,
simout) are of the type OutputStream instead of a std::ostream. This
allows us to do some more bookkeeping and control the re-opening of
files when the output directory is changed. Second, new subdirectories are
OutputDirectory instances, which should be used to create files in
that sub-directory.
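A minimal C++ usage sketch, assuming the entry points named above
(simout.setOutputDirectory() and an OutputStream wrapper around the
underlying std::ostream); the exact signatures may differ:

    // Relocate all subsequent output for the next simulation stage.
    simout.setOutputDirectory("m5out.verify");

    // Files now come back as OutputStream objects rather than raw
    // std::ostream, so the library can re-open them if the output
    // directory changes again.
    OutputStream *log = simout.create("stage.log");
    *log->stream() << "entering verification phase\n";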
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
Writes to locked memory addresses (LLSC) did not wake up the locking
CPU. This can lead to deadlocks on multi-core runs. In AtomicSimpleCPU,
recvAtomicSnoop was checking if the incoming packet was an invalidation
(isInvalidate) and only then handled a locked snoop. But, writes are
seen instead of invalidates when running without caches (fast-forward
configurations). As a simple fix, handleLockedSnoop is now also called
when the incoming snoop packet is from a write.
This was properly done for the ERET instruction in v8, but not for v7.
Many control register changes are only visible after explicit
instruction synchronization barriers or exception entry/exit.
This means mode changing instructions should squash any
younger in-flight speculative instructions.
This patch fixes an issue where an InvalidationReq only traversed one
level of the cache hierarchy, and was subsequently turned into a
ReadExReq because it needs a writable copy and the command was not
checked for explicitly.
Bug fix for the check on the protobuf file frequency being different from
the global frequency.
The ASCII encoder script is also fixed, and the example trace used in
the regressions is updated.
Add a callback handler for the NoMali reset callback. This callback is
called whenever the GPU is reset using the register interface or the
NoMali API. The callback can be used to override ID registers using
the raw register API.
Refactor and cleanup the NoMaliGpu class:
* Use a std::map instead of a switch block to map the parameter enum
describing the GPU type to a NoMali type.
* Remove redundant NoMali handle from the interrupt callback.
* Make callbacks and API wrappers protected instead of private to
enable future extensions.
* Wrap remaining NoMali API calls.
Memory fences are now flagged as Global Memory only, to avoid resource
oversubscription.
Flat instructions now check whether the Shared Memory resource is busy,
to avoid oversubscribing resources.
All WaitClass resources now use cycles (not ticks) to register the number
of pipe stages between Scoreboard and Execute, to be consistent with the
instruction scheduling logic, which has always used clock cycles.
This patch essentially rolls back 10518:30e3715c9405 to make RubyPort the
parent class of DMASequencer. It removes redundant code and restores some
features which were lost when directly inheriting from MemObject. For
example, DMASequencer can now communicate with other devices using PIO,
which is useful for memory-mapped communication between multiple
DMADevices.
This patch adds a --debug-end flag to main.py so that debug output can be
stopped at a specified tick, while allowing the simulation to continue. It
is useful in situations where you would like to produce a trace for a
region of interest while still collecting stats for the entire run. This is
in contrast to the existing --debug-break flag, which terminates the
simulation at the given tick.
Avoid being overly conservative in clearing load locks in the cache,
and allow writes to the line if they are from the same context. This
is in line with ALPHA and ARM.
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.
In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
Adopt the same flow as in timing mode, where the caches on the path to
memory get to keep the line (if present), and we use the
responderHadWritable flag to determine if we need to forward the
(invalidating) packet or not.
This patch unifies the snoop handling in case of hitting writebacks
with how we handle snoops hitting in the tags. As a result, we end up
using the same optimisation as the normal snoops, where we inform the
downstream cache if we encounter a line in Modified (writable and
dirty) state, which enables us to avoid sending out express snoops to
invalidate any Shared copies of the line. A few regressions
consequently change, as some transactions are sunk higher up in the
cache hierarchy.
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. This is less
error prone, and there are fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping by removing overrides to the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
Due to insufficient build deps, the checkpoint tags might not get
updated; this commit fixes that. Due to the uncommon nature of the
build target, regenerating tags.cc is a fairly clean solution. Since
SCons hashes file contents, it won't recompile anything unless a new
checkpoint upgrader is actually added.
The previous implementation did a pair of nested RMW operations,
which isn't compatible with the way that locked RMW operations are
implemented in the cache models. It was convenient though in that
it didn't require any new micro-ops, and supported cmpxchg16b using
64-bit memory ops. It also worked in AtomicSimpleCPU where
atomicity was guaranteed by the core and not by the memory system.
It did not work with timing CPU models though.
This new implementation defines new 'split' load and store micro-ops
which allow a single memory operation to use a pair of registers as
the source or destination, then uses a single ldsplit/stsplit RMW
pair to implement cmpxchg. This patch requires support for 128-bit
memory accesses in the ISA (added via a separate patch) to support
cmpxchg16b.
Although the cache models support wider accesses, the ISA descriptions
assume that (for the most part) memory operands are integer types,
which makes it difficult to define instructions that do memory accesses
larger than 64 bits.
This patch adds some generic support for memory operands that are arrays
of uint64_t, and specifically a 'u2qw' operand type for x86 that is an
array of 2 uint64_ts (128 bits). This support is unused at this point,
but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE
memory accesses will also be rewritten to use this support.
Support for 128-bit accesses could also have been added using the gcc
__int128_t extension, which would have been less disruptive. However,
although clang also supports __int128_t, it's still non-standard.
Also, more importantly, this approach creates a path to defining
256- and 512-bit operands as well, which will be useful for eventual
AVX support.
MemOperand variables were being initialized to 0
"to avoid 'uninitialized variable' errors" but these
no longer seem to be a problem (with the exception of
one use case in POWER that is arguably broken and
easily fixed here).
Getting rid of the initialization is necessary to
set up a subsequent patch which extends memory
operands to possibly not be scalars, making the
'= 0' initialization no longer feasible.
Writing 16 bytes from an 8-byte source value is a bad idea.
This doesn't appear to have broken anything, but showed up
as spurious differences when tracediffing runs.
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.
Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>> DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>> "EXPECT_FALSE(" #expr ")", (expr) == false)<<
In the process of trying to get rid of an '== false' comparison,
it became apparent that a slightly more involved solution was
needed. Split this out into its own changeset since it's not
a totally trivial local change like the others.
mem: support for gpu-style RMWs in ruby
This patch adds support for GPU-style read-modify-write (RMW) operations in
ruby. Such atomic operations are traditionally executed at the memory controller
(instead of through an L1 cache using cache-line locking).
Currently, this patch works by propagating operation functors through the
memory system.
Cset 72046b9b3323 just changed the metavar for --debug-start from TIME
to TICK, and didn't notice that the comment "must be in ticks" had
become redundant.
For historical reasons, the ExecContext interface had a single
function, readMem(), that did two different things depending on
whether the ExecContext supported atomic memory mode (i.e.,
AtomicSimpleCPU) or timing memory mode (all the other models).
In the former case, it actually performed a memory read; in the
latter case, it merely initiated a read access, and the read
completion did not happen until later when a response packet
arrived from the memory system.
This led to some confusing things, including timing accesses
being required to provide a pointer for the return data even
though that pointer was only used in atomic mode.
This patch splits this interface, adding a new initiateMemRead()
function to the ExecContext interface to replace the timing-mode
use of readMem().
For consistency and clarity, the readMemTiming() helper function
in the ISA definitions is renamed to initiateMemRead() as well.
For x86, where the access size is passed in explicitly, we can
also get rid of the data parameter at this level. For other ISAs,
where the access size is determined from the type of the data
parameter, we have to keep the parameter for that purpose.
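A minimal sketch of the split interface; the signatures are illustrative
(gem5's real Fault, Addr, and flag types are more elaborate):

    #include <cstdint>

    using Addr = uint64_t;
    using Fault = int;  // stand-in for gem5's Fault type

    class ExecContextSketch
    {
      public:
        // Atomic mode: perform the whole access; the data comes back
        // through the pointer.
        virtual Fault readMem(Addr addr, uint8_t *data, unsigned size,
                              unsigned flags) = 0;

        // Timing mode: only initiate the access; the data arrives later
        // in a response packet, so no data pointer is needed.
        virtual Fault initiateMemRead(Addr addr, unsigned size,
                                      unsigned flags) = 0;

        virtual ~ExecContextSketch() = default;
    };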
The read() function merely initiates a memory read operation; the
data doesn't arrive until the access completes and a response packet
is received from the memory system. Thus there's no need to provide
a data pointer; its existence is historical.
Getting this pointer out of this internal o3 interface sets the
stage for similar cleanup in the ExecContext interface. Also
found that we were pointlessly setting the contents at this pointer
on a store forward (the useful memcpy happens just a few lines
below the deleted one).
The readMemAtomic/writeMemAtomic helper functions were calling
readMemTiming/writeMemTiming respectively. This is functionally
correct, since the *Timing functions are doing the same access
initiation operation as the *Atomic functions (just that the
*Atomic versions also complete the access in line). It also
provides for some (very minimal) code reuse. Unfortunately,
it's potentially pretty confusing, since it makes it look like
the atomic accesses are somehow being converted to timing
accesses. It also gets in the way of specializing the timing
interface (as will be done in a future patch).
By ignoring SIGTRAP, using --debug-break <N> when not connected to
a debugger becomes a no-op. Apparently this was intended to be a
feature, though the rationale is not clear.
If we don't ignore SIGTRAP, then using --debug-break <N> when not
connected to a debugger causes the simulation process to terminate
at tick N. This is occasionally useful, e.g., if you just want to
collect a trace for a specific window of execution then you can combine
this with --debug-start to do exactly that.
In addition to not ignoring the signal, this patch also updates
the --debug-break help message and deletes a handful of unprotected
calls to Debug::breakpoint() that relied on the prior behavior.
Add a platform with support for both aarch32 and aarch64. This
platform implements a subset of the devices in a real Versatile
Express and extends it with some gem5-specific functionality. It is in
many ways similar to the old VExpress_EMM64 platform, but supports the
following new features:
* Automatic PCI interrupt assignment
* PCI interrupts allocated in a contiguous range.
* Automatic boot loader selection (32-bit / 64-bit)
* Cleaner memory map where gem5-specific devices live in CS5 which
isn't used by current Versatile Express platforms.
* No fake devices. Devices that were previously faked will be
removed from the device tree instead.
* Support for 510 GiB contiguous memory
Add support for automatic PCI interrupt routing using a device's ID on
the PCI bus. Our current DTBs typically tell the kernel that we do
this or something similar when declaring the PCI controller. This
changeset adds an option to make the simulator behave in the same way.
Interrupt routing can be selected by setting the int_policy parameter
in the GenericArmPciHost. The following values are supported:
* ARM_PCI_INT_STATIC: Use the old static routing policy using the
interrupt line from a device's configuration space.
* ARM_PCI_INT_DEV: Use device number on the PCI bus to map to an
interrupt in the GIC. The interrupt is computed as:
gic_int = int_base + (pci_dev % int_count)
* ARM_PCI_INT_PIN: Use the device's interrupt pin on the PCI bus to map
to an interrupt in the GIC. The PCI specification reserves pin ID 0
for devices without interrupts; the interrupt is therefore computed
as:
gic_int = int_base + ((pin - 1) % int_count)
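The two dynamic policies reduce to simple arithmetic; a minimal sketch with
illustrative function names (not the GenericArmPciHost internals):

    #include <cstdint>

    // ARM_PCI_INT_DEV: route by device number on the PCI bus.
    uint32_t
    mapIntDev(uint32_t int_base, uint32_t int_count, uint32_t pci_dev)
    {
        return int_base + (pci_dev % int_count);
    }

    // ARM_PCI_INT_PIN: pin 0 is reserved for "no interrupt", so pins
    // are numbered from 1.
    uint32_t
    mapIntPin(uint32_t int_base, uint32_t int_count, uint32_t pin)
    {
        return int_base + ((pin - 1) % int_count);
    }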
The new Packet::setRaw() method incorrectly still contained
an htog() conversion. As a result, calls to the old set()
method (now defined as setRaw(htog(v))) underwent two htog
conversions, which breaks things when htog() is not a no-op.
Interestingly the only test that caught this was a SPARC
boot test, where an IsaFake device with a non-zero return
value was getting swapped twice resulting in a register
getting loaded with 0x100000000000000 instead of 1.
(Good reason for keeping SPARC around, perhaps?)
This patch replaces the gzstream zlib wrapper with the iostream3
wrapper provided as part of zlib contributions. The main reason for
the switch is to avoid including LGPL in the default gem5
build. iostream3 is provided under a more permissive license:
The code is provided "as is", with the permission to use, copy,
modify, distribute and sell it for any purpose without fee.
Distributed gem5 (abbreviated dist-gem5) is the result of the
convergence effort between multi-gem5 and pd-gem5 (from Univ. of
Wisconsin). It relies on the base multi-gem5 infrastructure for packet
forwarding, synchronisation and checkpointing but combines those with
the elaborated network switch model from pd-gem5.
--HG--
rename : src/dev/net/multi_etherlink.cc => src/dev/net/dist_etherlink.cc
rename : src/dev/net/multi_etherlink.hh => src/dev/net/dist_etherlink.hh
rename : src/dev/net/multi_iface.cc => src/dev/net/dist_iface.cc
rename : src/dev/net/multi_iface.hh => src/dev/net/dist_iface.hh
rename : src/dev/net/multi_packet.hh => src/dev/net/dist_packet.hh
Some of the DPRINTFs added to the classic cache in cset 45df88079f04,
while useful to those unfamiliar with the cache code, end up being
noise when you're familiar with the code but are trying to debug tricky
protocol issues. (Particularly getting two messages from each cache
as it receives a snoop request then declares that there was no match.)
This patch introduces a CacheVerbose debug flag, and moves a subset of
the added DPRINTFs into that category, so that Cache by itself returns
to being a more succinct summary of cache activity.
Also added a CacheAll compound flag to turn on all the cache-related
debug flags (other than CacheTags, which you *really* have to want badly
to turn it on, IMO).
This patch removes the NeedsWritable flag for all responses, as it is
really only the request that needs a writable response. The response,
on the other hand, should in these cases always provide the line in a
writable state, as indicated by the hasSharers flag not being set.
When we send requests that have NeedsWritable set, the response will
always have the hasSharers flag not set. Additionally, there are cases
where the request did not have NeedsWritable set, and we still get a
writable response with the hasSharers flag not set. This never happens
on snoops, but is used by downstream caches to pass ownership
upstream.
As part of this patch, the affected response types are updated, and
the snoop filter is similarly modified to check only the hasSharers
flag (as it should). A sanity check is also added to the packet class,
asserting that we never look at the NeedsWritable flag for responses.
No regressions are affected.
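A minimal sketch of the sanity check described above, assuming a
Packet-like class where the attribute lives on the command (details are
illustrative):

    // NeedsWritable is a request-only attribute; responses convey
    // writability through !hasSharers instead, so looking at this
    // flag on a response is a bug.
    bool
    needsWritable() const
    {
        assert(isRequest());
        return cmd.needsWritable();
    }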
This patch looks at the request and response command to determine if
either actually has any data payload, and if not, we do not allocate
any space for packet data.
The only tricky case is where the command type is changed as part of
the MSHR functionality. In these cases where the original packet had
no data, but the new packet does, we need to explicitly call
allocate().
This patch ensures we do not respond with a Modified (dirty and
writable) line if the request is uncacheable, and that the cache
responding retains the line without modifying the state (even if
responding).
This patch changes the name of a bunch of packet flags and MSHR member
functions and variables to make the coherency protocol easier to
understand. In addition the patch adds and updates lots of
descriptions, explicitly spelling out assumptions.
The following name changes are made:
* the packet memInhibit flag is renamed to cacheResponding
* the packet sharedAsserted flag is renamed to hasSharers
* the packet NeedsExclusive attribute is renamed to NeedsWritable
* the packet isSupplyExclusive is renamed responderHadWritable
* the MSHR pendingDirty is renamed to pendingModified
The cache states, Modified, Owned, Exclusive, Shared are also called
out in the cache and MSHR code to make it easier to understand.
This patch is imported from reviewboard patch 2551 by Nilay.
This patch moves from a dynamically defined MachineType to a statically
defined one. The need for this patch was felt since a dynamically defined
type prevents us from having types for which no machine definition may
exist.
The following changes have been made:
i. each machine definition now uses a type from the MachineType enumeration
instead of any random identifier. This required changing the grammar and the
*.sm files.
ii. MachineType enumeration defined statically in RubySlicc_Exports.sm.
* * *
normal protocol fixes for nilay's parser machine type fix
This patch is imported from reviewboard patch 2550 by Nilay.
It was possible to specify multiple machine types with a single state machine.
This seems unnecessary and is being removed.
Add a sanity check to make it explicit that we currently do not allow
an I/O coherent agent to directly issue writes into the coherent part
of the memory system (it has to go via a cache, and get transformed
into a read ex, upgrade or invalidation).
This patch changes how the cache tracks which snoops are forwarded,
and which ones are created locally. Previously the identification was
based on an empty sender state of a specific class, but this method
fails to distinguish which cache actually attached the sender
state. Instead we use the same mechanism as the crossbar, and keep
track of the requests that have outstanding snoops.
This patch addresses a bug in how the cache attached the MSHR as a
sender state. Rather than overwriting any existing sender state it now
pushes a new one. The handling of upward snoops is also clarified.
Currently, the wire format of register values in g- and G-packets is
modelled using a union of uint8/16/32/64 arrays. The offset positions
of each register are expressed as a "register count" scaled according
to the width of the register in question. This results in counter-
intuitive and error-prone "register count arithmetic", and some
formats would even be altogether unrepresentable in such model, e.g.
a 64-bit register following a 32-bit one would have a fractional index
in the regs64 array.
Another difficulty is that the array is allocated before the actual
architecture of the workload is known (and therefore before the correct
size for the array can be calculated).
With this patch I propose a simpler mechanism for expressing the
register set structure. In the new code, GdbRegCache is an abstract
class; its subclasses contain straightforward structs reflecting the
register representation. The determination whether to use e.g. the
AArch32 vs. AArch64 register set (or SPARCv8 vs SPARCv9, etc.) is made
by polymorphically dispatching getregs() to the concrete subclass.
The subclass is not instantiated until it is needed for actual
g-/G-packet processing, when the mode is already known.
This patch is not meant to be merged in on its own, because it changes
the contract between src/base/remote_gdb.* and src/arch/*/remote_gdb.*,
so as it stands right now, it would break the other architectures.
In this patch only the base and the ARM code are provided for review;
once we agree on the structure, I will provide src/arch/*/remote_gdb.*
for the other architectures; those patches could then be merged in
together.
Review Request: http://reviews.gem5.org/r/3207/
Pushed by Joel Hestness <jthestness@gmail.com>
When adding an option to forward work items to the Python environment,
the new behavior was accidentally enabled by default. Set the value of
exit_on_work_items to False by default to revert to the old behavior
unless the simulation script explicitly requests work item
forwarding.
This patch fixes a corner case in the deferred snoop handling, where
requests ended up being used by multiple packets with different
lifetimes, and inadvertently got deleted while they were still in use.
There are cases where we want the Python world to handle work items
instead of the C++ world. However, that's currently not possible. This
changeset adds the forward_work_items option to the System class. When
it is set to True, work items will generate workbegin/workend
simulation exits with the work item ID as the exit code, and the old
C++ handling is completely bypassed.
This patch allows the ruby random tester to use ruby ports that may only
support instr or data requests. This patch is similar to a previous changeset
(8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets.
This current patch implements the support in a more straightforward way.
Since retries are now tested when running the ruby random tester, this patch
splits up the retry and drain check behavior so that RubyPort children, such
as the GPUCoalescer, can perform those operations correctly without having to
duplicate code. Finally, the patch also includes better DPRINTFs for
debugging the tester.
Move pcidev.(hh|cc) to src/dev/pci/device.(hh|cc) and update existing
devices to use the new header location. This also renames the PCIDEV
debug flag to have a capitalization that is consistent with the PCI
host and other devices.
--HG--
rename : src/dev/Pci.py => src/dev/pci/PciDevice.py
rename : src/dev/pcidev.cc => src/dev/pci/device.cc
rename : src/dev/pcidev.hh => src/dev/pci/device.hh
rename : src/dev/pcireg.h => src/dev/pci/pcireg.h
The writefile pseudo instruction uses OutputDirectory::create and
OutputDirectory::openFile to create the output files. However, by
default these will check the file extension for .gz, and create a
gzip-compressed stream if the file ending matches. When writing out files,
we want to write them out exactly as they are in the guest simulation,
and never want to compress them with gzio. Additionally, this causes
m5 writefile to fail when checking the error flags of the output
stream.
With this patch we add an additional no_gz argument to
OutputDirectory::create and OutputDirectory::openFile which allows us
to override the gzip compression. Therefore, for m5 writefile we
disable the filename check, and always create a standard ostream.
Previous ARM-based simulations were limited to 8 cores due to
limitations in GICv2 and earlier. This changeset adds a set of
gem5-specific extensions that enable support for up to 256 cores.
When the gem5 extensions are enabled, the GIC uses CPU IDs instead of
a CPU bitmask in the GIC's register interface. The OS can enable the
extensions by setting bit 0x200 in ICDICTR.
This changeset is based on previous work by Matt Evans.
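From the guest's point of view this is a single register write; a minimal
sketch (the constant name is illustrative, and the location of ICDICTR
depends on where the platform maps the GIC distributor):

    #include <cstdint>

    // Bit 0x200 in ICDICTR switches the register interface from a CPU
    // bitmask to CPU IDs, enabling support for more than 8 cores.
    constexpr uint32_t GEM5_GIC_EXTENSIONS = 0x200;

    void
    enableGem5GicExtensions(volatile uint32_t *icdictr)
    {
        *icdictr |= GEM5_GIC_EXTENSIONS;
    }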
There's a well-meaning check in Process::allocFD() to return an invalid
target fd (-1) if the incoming host fd is -1. However, this means that
emulated drivers, which want to allocate a target fd that doesn't
correspond to a host fd, can't use -1 to indicate an intentionally
invalid host fd.
It turns out the allocFD() check is redundant, as callers always test
the host fd for validity before calling. Also, callers never test the
return value of allocFD() for validity, so even if the test failed,
it would likely have the undesirable result of returning -1 to the
target app as a file descriptor without setting errno.
Thus the check is pointless and is now getting in the way, so it seems
we should just get rid of it.
This patch adds support to optionally capture the virtual address and asid
for load/store instructions in the elastic traces. If they are present in
the traces, Trace CPU will set those fields of the request during replay.
This patch replaces the booleans that specified the elastic trace record
type with an enum type. The source of change is the proto message for
elastic trace where the enum is introduced. The struct definitions in the
elastic trace probe listener as well as the Trace CPU replace the booleans
with the proto message enum.
The patch does not impact functionality, but traces are not compatible with
the previous version. This is in preparation for adding new types of
records in subsequent patches.
This patch defines a TraceCPU that replays trace generated using the elastic
trace probe attached to the O3 CPU model. The elastic trace is an execution
trace with data dependencies and ordering dependencies annotated to it. It also
replays fixed timestamp instruction fetch trace that is also generated by the
elastic trace probe.
The TraceCPU inherits from BaseCPU as a result of which some methods need
to be defined. It has two port subclasses inherited from MasterPort for
instruction and data ports. It issues the memory requests deducing the
timing from the trace and without performing real execution of micro-ops.
As soon as the last dependency for an instruction is complete,
its computational delay, also provided in the input trace is added. The
dependency-free nodes are maintained in a list, called 'ReadyList',
ordered by ready time. Instructions which depend on a load stall until the
responses for the read requests are received, thus achieving elastic replay. If
the dependency is not found when adding a new node, it is assumed complete.
Thus, if this node is found to be completely dependency-free its issue time is
calculated and it is added to the ready list immediately. This is encapsulated
in the subclass ElasticDataGen.
If ready nodes are issued in an unconstrained way there can be more nodes
outstanding which results in divergence in timing compared to the O3CPU.
Therefore, the Trace CPU also models hardware resources. A sub-class to model
hardware resources is added which contains the maximum sizes of load buffer,
store buffer and ROB. If resources are not available, the node is not issued.
The 'depFreeQueue' structure holds nodes that are pending issue.
Modeling the ROB size in the Trace CPU as a resource limitation is arguably
the most important of all the resource parameters. The ROB occupancy is
estimated using the newly added field 'robNum'. We need to use the ROB
number, as the sequence number is at times much higher due to squashing, and
trace replay is focused on correct-path modeling.
A map called 'inFlightNodes' is added to track nodes that are not only in
the readyList but also load nodes that are executed (and thus removed from
readyList) but are not complete. ReadyList handles what and when to execute
next node while the inFlightNodes is used for resource modelling. The oldest
ROB number is updated when any node occupies the ROB or when an entry in the
ROB is released. The ROB occupancy is equal to the difference in the ROB number
of the newly dependency-free node and the oldest ROB number in flight.
If no node depends on a non load/store node then there is no reason to track
it in the dependency graph. We filter out such nodes but count them and add a
weight field to the subsequent node that we do include in the trace. The weight
field is used to model ROB occupancy during replay.
The depFreeQueue is chosen to be FIFO so that child nodes which are in
program order get pushed into it in that order and thus issued in the in
program order, like in the O3CPU. This is also why the dependents container
was changed from a std::set to a std::vector. We only check the head of the
depFreeQueue as nodes are issued in order and blocking on head models that
better than looping the entire queue. An alternative choice would be to inspect
top N pending nodes where N is the issue-width. This is left for future as the
timing correlation looks good as it is.
At the start of an execution event, first we attempt to issue such pending
nodes by checking if appropriate resources have become available. If yes, we
compute the execute tick with respect to the time then. Then we proceed to
complete nodes from the readyList.
When a read response is received, sometimes a dependency on it that was
supposed to be released when it was issued is still not released. This occurs
because the dependent gets added to the graph after the read was sent. So the
check is made less strict and the dependency is marked complete on read
response instead of insisting that it should have been removed on read sent.
There is a check for requests spanning two cache lines as this condition
triggers an assert fail in the L1 cache. If it does then truncate the size
to access only until the end of that line and ignore the remainder.
Strictly-ordered requests are skipped and the dependencies on such requests
are handled by simply marking them complete immediately.
The simulated seconds can be calculated as the difference between the
final_tick stat and the tickOffset stat. A CountedExitEvent that contains
a static int belonging to the Trace CPU class as a down counter is used to
implement multi Trace CPU simulation exit.
The elastic trace is a type of probe listener and listens to probe points
in multiple stages of the O3CPU. The notify method is called on a probe
point typically when an instruction successfully progresses through that
stage.
As different listener methods mapped to the different probe points execute,
relevant information about the instruction, e.g. timestamps and register
accesses, are captured and stored in temporary InstExecInfo class objects.
When the instruction progresses through the commit stage, the timing and the
dependency information about the instruction is finalised and encapsulated in
a struct called TraceInfo. TraceInfo objects are collected in a list instead
of being written out to the trace file one at a time. This is required as the
trace is processed in chunks to evaluate order dependencies and computational
delay in case an instruction does not have any register dependencies. By this
we achieve a simpler algorithm during replay because every record in the
trace can be hooked onto a record in its past. The instruction dependency
trace is written out as a protobuf format file. A second trace containing
fetch requests at absolute timestamps is written to a separate protobuf
format file.
If the instruction is not executed then it is not added to the trace.
The code checks if the instruction had a fault, if it predicated
false and thus previous register values were restored or if it was a
load/store that did not have a request (e.g. when the size of the
request is zero). In all these cases the instruction is set as
executed by the Execute stage and is picked up by the commit probe
listener. But a request is not issued and registers are not written.
So practically, skipping these should not hurt the dependency modelling.
If squashing results in squashing younger instructions, it may happen that
the squash probe discards the inst and removes it from the temporary
store but execute stage deals with the instruction in the next cycle which
results in the execute probe seeing this inst as 'new' inst. A sequence
number of the last processed trace record is used to trap these cases and
not add to the temporary store.
The elastic instruction trace and fetch request trace can be read in and
played back by the TraceCPU.
This patch adds probe points in Fetch, IEW, Rename and Commit stages as follows.
A probe point is added in the Fetch stage for probing when a fetch request is
sent. Notify is fired on the probe point when a request is sent successfully
on the first attempt as well as on a retry attempt.
Probe points are added in the IEW stage when an instruction begins to
execute and when execution is complete. These points can be used for
monitoring the execution time of an instruction.
Probe points are added in the Rename stage to probe the renaming of source
and destination registers and to probe squashing. These probe points can be
used to track register dependencies and remove them when there is squashing.
A probe point for squashing is added in Commit to probe squashed instructions.
The gem5's current PCI host functionality is very ad hoc. The current
implementations require PCI devices to be hooked up to the
configuration space via a separate configuration port. Devices query
the platform to get their config-space address range. Un-mapped parts
of the config space are intercepted using the XBar's default port
mechanism and a magic catch-all device (PciConfigAll).
This changeset redesigns the PCI host functionality to improve code
reuse and make config-space and interrupt mapping more
transparent. Existing platform code has been updated to use the new
PCI host and configured to stay backwards compatible (i.e., no
guest-side visible changes). The current implementation does not
expose any new functionality, but it can easily be extended with
features such as automatic interrupt mapping.
PCI devices now register themselves with a PCI host controller. The
host controller interface is defined in the abstract base class
PciHost. Registration is done by PciHost::registerDevice() which takes
the device, its bus position (bus/dev/func tuple), and its interrupt
pin (INTA-INTC) as a parameter. The registration interface returns a
PciHost::DeviceInterface that the PCI device can use to query memory
mappings and signal interrupts.
The host device manages the entire PCI configuration space. Accesses
are decoded into the device's bus position and then forwarded to the
correct device.
Basic PCI host functionality is implemented in the GenericPciHost base
class. Most platforms can use this class as a basic PCI controller. It
provides the following functionality:
* Configurable configuration space decoding. The number of bits
dedicated to a device is a parameter, making it possible to support
CAM, ECAM, and legacy mappings.
* Basic interrupt mapping using the interruptLine value from a
device's configuration space. This behavior is the same as in the
old implementation. More advanced controllers can override the
interrupt mapping method to dynamically assign host interrupts to
PCI devices.
* Simple (base + addr) remapping from the PCI bus's address space to
physical addresses for PIO, memory, and DMA.
The assert in lsq_unit_impl.hh line 963 needs pktPending to be initialized to
NULL (I got the assertion failure several times without the fix).
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
The last SimObject using the legacy serialize API with non-const
methods has now been transitioned to the new API. This changeset
removes the serializeOld() methods from the serialization base class
as they are no longer used.
Add support for automatically discovering available platforms. The
Python side uses functionality similar to what we use when
auto-detecting available CPU models. The machine IDs have been updated
to match the platform configurations. If there isn't a matching
machine ID, the configuration scripts default to -1, which Linux uses
for device-tree-only platforms.
The HDLCD model implements a workaround that swaps the red and blue
channels. This works around an issue in certain old kernels. The new
driver doesn't seem to have this behavior, so disable the workaround
by default and enable it in the affected platforms.
Devices behind the Versatile Express configuration controllers are
currently all lumped into one SimObject. This will make DTB generation
challenging since the DTB assumes them to be in different parts of the
hierarchy. It also makes it hard to model other CoreTiles without also
replicating devices from the motherboard.
This changeset splits the VExpressCoreTileCtrl into two subsystems:
VExpressMCC for all motherboard-related devices and CoreTile2A15DCC
for Core Tile specific devices.
Add functionality to generate a back trace if gem5 crashes (SIGABRT or
SIGSEGV). The current implementation uses glibc's stack traversal
support if available and stubs out the call to print_backtrace()
otherwise.
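A minimal sketch of the approach, assuming glibc's <execinfo.h> is
available (handler and function layout are illustrative; gem5's actual
implementation differs in detail):

    #include <execinfo.h>
    #include <csignal>
    #include <cstdlib>
    #include <unistd.h>

    // Dump a raw back trace. backtrace_symbols_fd() is used rather than
    // backtrace_symbols() because it does not allocate memory, which
    // matters inside a signal handler.
    static void print_backtrace()
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
    }

    static void abortHandler(int sig)
    {
        print_backtrace();
        // Restore the default action and re-raise so the core dump
        // still happens.
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int main()
    {
        signal(SIGABRT, abortHandler);
        signal(SIGSEGV, abortHandler);
        abort();  // demonstration: prints a back trace, then dumps core
    }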
Add support for automatically selecting a boot loader that matches the
guest system's kernel. Instead of accepting a single boot loader, the
ArmSystem class now accepts a vector of boot loaders. When
initializing a system, we now look for the first boot loader with
an architecture that matches the kernel.
This changeset makes it possible to use the same system for both
64-bit and 32-bit kernels.
The MaltaPChip class is currently unused and identical (except for the
class name) to the TsunamiPChip. If someone decides to implement PCI
for Malta, they should make sure to share code with the Tsunami
implementation if they are similar.
The gem5 option '--list-sim-objects' is supposed to list all available
SimObjects and their parameters. It currently chokes on SimObjects
with parameters that have an object instance as their default
value. This is caused by __str__ in SimObject trying to resolve its
complete path. When the path resolution method reaches the parent
object (a MetaSimObject since it hasn't been instantiated), it dies
with a Python exception.
This changeset adds a guard to stop path resolution if the parent
object is a MetaSimObject.
Added the missing types EthernetAddr and Current to the JSON/INI file
reader example configs/example/read_config.py.
Also added __str__ to EthernetAddr to make values appear in the same form
in JSON and INI files.
The flash model has typos in its serialization code for
unknownPages, locationTable, blockValidEntries, and blockEmptyEntries
arrays where it would save each entry in the array under the same
name in the checkpoint. This patch fixes these typos.
As per the x86 architecture specification, matching TLB entries need to be
invalidated on a page fault. For instance, after a page fault due to inadequate
protection bits on a TLB hit, the TLB entry needs to be invalidated. This
behavior is clearly specified in the x86 architecture manuals from both AMD and
Intel. This invalidation is currently missing in gem5, due to which Linux
kernel versions 3.8 and up cannot be simulated efficiently. This is exposed by
a Linux optimisation in commit e4a1cc56e4d728eb87072c71c07581524e5160b1, which
removes a TLB flush on updating page table entries in x86.
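A minimal sketch of the fix (the TLB structure and method name here are
illustrative; gem5's x86 TLB differs in detail):

    #include <cstdint>
    #include <unordered_map>

    using Addr = uint64_t;
    struct TlbEntry { Addr paddr; bool writable; };

    struct TLB {
        // Keyed by page-aligned virtual address (4 KiB pages assumed).
        std::unordered_map<Addr, TlbEntry> entries;
        void demapPage(Addr vaddr) { entries.erase(vaddr & ~Addr(0xfff)); }
    };

    // On a page fault, invalidate any matching entry so that the retried
    // access rewalks the page tables and sees the updated PTE, instead
    // of faulting again on the stale entry.
    void onPageFault(TLB &tlb, Addr faultVaddr)
    {
        tlb.demapPage(faultVaddr);
    }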
Testing: Linux kernel versions 3.8 onwards were booting very slowly in FS
mode, due to repeated page faults (~300000 before the first print statement in
a bash file). Ensured that the page fault rate drops drastically and observed
a reduction in boot time from the order of hours to minutes for Linux kernels
v3.8 and v3.11.
doCpuid() has two identical warn messages about unimplemented functions. Add
the family to the log message to make them distinguishable.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Remove the SPARC V8 TBR register from the list of registers since it is not
part of SPARC V9. This brings the number of registers in sync with what gdb
expects. Without this patch gdb complains about a received packet being too
long; with this patch gdb is able to work properly with gem5 for remote
debugging.
Note: gdb is version 7.8
Note: gdb is configured with --target=sparc64-sun-solaris2.8
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
The sanity check, while generally useful for exposing memory system bugs,
may be spurious with respect to GPU workloads, which may generate many more
requests than typical CPU workloads. The large number of requests generated
by the GPU may cause the req/resp queues to back up, thus queueing more than
100 packets.
The IICRPR register in the GIC is currently not being initialized when
the GIC is instantiated. Initialize it to the value mandated by the
architecture specification.
This patch adds very basic checkpoint support for the VirtIO9PProxy
device. Previously, attempts to checkpoint gem5 with a present 9P
device caused gem5 to fatal as none of the state is tracked. We still
do not track any state, but we replace the fatal with a warning which
is triggered if the device has been used by the guest system. In the
event that it has not been used, we assume that no state is lost
during checkpointing. The warning is triggered on both a serialize and
an unserialize to ensure maximum visibility for the user.
Cleanup PCI devices to avoid using the PciDevice::platform pointer
directly. The PCI-specific functionality provided by the Platform
should be accessed through the wrappers in PciDevice.
This patch adds the necessary commands and cache functionality to
allow clean writebacks. This functionality is crucial, especially when
having exclusive (victim) caches. For example, if read-only L1
instruction caches are not sending clean writebacks, there will never
be any spills from the L1 to the L2. At the moment the cache model
defaults to not sending clean writebacks, and this should possibly be
re-evaluated.
The implementation of clean writebacks relies on a new packet command
WritebackClean, which acts much like a Writeback (renamed
WritebackDirty), and also much like a CleanEvict. On eviction of a
clean block the cache either sends a clean evict, or a clean
writeback, and if any copies are still cached upstream the clean
evict/writeback is dropped. Similarly, if a clean evict/writeback
reaches a cache where there are outstanding MSHRs for the block, the
packet is dropped. In the typical case though, the clean writeback
allocates a block in the downstream cache, and marks it writable if
the evicted block was writable.
The patch changes the O3_ARM_v7a L1 cache configuration and the
default L1 caches in configs/common/Caches.py.
This patch adds a parameter to control the cache clusivity, that is if
the cache is mostly inclusive or exclusive. At the moment there is no
intention to support strict policies, and thus the options are: 1)
mostly inclusive, or 2) mostly exclusive.
The choice of policy guides the behaviour on a cache fill, and a new
helper function, allocOnFill, is created to encapsulate the decision
making process. For the timing mode, the decision is annotated on the
MSHR on sending out the downstream packet, and in atomic we directly
pass the decision to handleFill. We (ab)use the tempBlock in cases
where we are not allocating on fill, leaving the rest of the cache
unaffected. Simple and effective.
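A minimal sketch of the fill decision (the enum, field, and exact
conditions are illustrative; the real decision in gem5 also depends on the
packet command):

    enum class Clusivity { MostlyInclusive, MostlyExclusive };

    struct CacheConfig {
        Clusivity clusivity;

        // A mostly-inclusive cache always allocates on fill; a
        // mostly-exclusive cache only keeps blocks explicitly written
        // back into it, and otherwise services the fill via the
        // tempBlock.
        bool allocOnFill(bool isWriteback) const
        {
            return clusivity == Clusivity::MostlyInclusive || isWriteback;
        }
    };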
This patch also makes it more explicit that multiple caches are
allowed to consider a block writable (this is the case
also before this patch). That is, for a mostly inclusive cache,
multiple caches upstream may also consider the block exclusive. The
caches considering the block writable/exclusive all appear along the
same path to memory, and from a coherency protocol point of view it
works due to the fact that we always snoop upwards in zero time before
querying any downstream cache.
Note that this patch does not introduce clean writebacks. Thus, for
clean lines we are essentially removing a cache level if it is made
mostly exclusive. For example, lines from the read-only L1 instruction
cache or table-walker cache are always clean, and simply get dropped
rather than being passed to the L2. If the L2 is mostly exclusive and
does not allocate on fill it will thus never hold the line. A follow
on patch adds the clean writebacks.
The patch changes the L2 of the O3_ARM_v7a CPU configuration to be
mostly exclusive (and stats are affected accordingly).
This patch optimises the handling of writebacks and clean evictions
when using a snoop filter. Instead of snooping into the caches to
determine if the block is cached or not, simply set the status based
on the snoop-filter result.
Instead of conservatively enforcing order for all packets, which may
negatively impact the simulated-system performance, this patch updates
the packet queue such that it only applies the restriction if there
are already packets with the same address in the queue.
The basic need for the order enforcement is due to coherency
interactions where requests/responses to the same cache line must not
over-take each other. We rely on the fact that any packet that needs
order enforcement will have a block-aligned address. Thus, there is no
need for the queue to know about the cacheline size.
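A sketch of the relaxed rule (types and names assumed): a new packet is
only ordered behind queued packets with a matching block-aligned address,
and is otherwise scheduled purely by its own ready time:

    #include <algorithm>
    #include <cstdint>
    #include <list>

    using Tick = uint64_t;
    using Addr = uint64_t;

    struct DeferredPacket { Tick tick; Addr addr; };

    // Scan from the back: the last queued packet with the same address
    // (if any) imposes the ordering constraint.
    Tick earliestReadyTime(const std::list<DeferredPacket> &q,
                           Addr addr, Tick when)
    {
        for (auto it = q.rbegin(); it != q.rend(); ++it) {
            if (it->addr == addr)
                return std::max(when, it->tick);
        }
        return when;  // no same-address packet queued: no constraint
    }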
This patch enforces insertion order transmission of packets on the
response path in the cache. Note that the logic to enforce order is
already present in the packet queue, this patch simply turns it on for
queues in the response path.
Without this patch, there are corner cases where a request-response is
faster than a response-response forwarded through the cache. This
violation of queuing order causes problems in the snoop filter leaving
it with inaccurate information. This causes assert failures in the
snoop filter later on.
A follow on patch relaxes the order enforcement in the packet queue to
limit the performance impact.
This patch updates the I/O devices, bridge and simple memory to take
the packet header and payload delay into account in their latency
calculations. In all cases we add the header delay, i.e. the
accumulated pipeline delay of any crossbars, and the payload delay
needed for deserialisation of any payload.
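The calculation follows roughly this pattern (a sketch with assumed names;
headerDelay and payloadDelay are the packet fields referred to above):

    #include <cstdint>

    using Tick = uint64_t;
    struct Packet { Tick headerDelay = 0; Tick payloadDelay = 0; };

    // A slave folds the accumulated crossbar header delay and the
    // payload deserialisation delay into its response time, then zeroes
    // the fields so they are not charged again downstream.
    Tick responseTime(Packet &pkt, Tick now, Tick staticLatency)
    {
        const Tick receiveDelay = pkt.headerDelay + pkt.payloadDelay;
        pkt.headerDelay = pkt.payloadDelay = 0;
        return now + staticLatency + receiveDelay;
    }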
Due to the additional unknown latency contribution, the packet queue
of the simple memory is changed to use insertion sorting based on the
time stamp. Moreover, since the memory hands out exclusive (non
shared) responses, we also need to ensure ordering for reads to the
same address.
This patch aligns how the memory-system slaves, i.e. the various
memory controllers and the bridge, identify and deal with sinking of
inhibited packets that are only useful within the coherent part of the
memory system.
In the future we could shift the onus to the crossbar, and add a
parameter "is_point_of_coherence" that would allow it to sink the
aforementioned packets.
This patch changes the CleanEvict command type to not be considered a
write. Initially it was made a zero-sized write to match the writeback
command, but as things developed it became clear that it causes more
problems than it solves. For example, the memory modules (and bridge)
should not consider the CleanEvict as a write, but instead discard
it. With this patch it will be neither a read, nor write, and as it
does not need a response the slave will simply sink it.
This patch unifies how we deal with delayed packet deletion, where the
receiving slave is responsible for deleting the packet, but the
sending agent (e.g. a cache) is still relying on the pointer until the
call to sendTimingReq completes. Previously we used a mix of a
deletion vector and a construct using unique_ptr. With this patch we
ensure all slaves use the latter approach.
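The unique_ptr construct amounts to the following idiom (a sketch; the
member name pendingDelete matches the pattern described, the rest is
illustrative):

    #include <memory>

    struct Packet { /* payload elided */ };

    struct Slave {
        std::unique_ptr<Packet> pendingDelete;

        bool recvTimingReq(Packet *pkt)
        {
            // ... service the request; no response is needed, so the
            // packet is sunk here. Deleting pkt immediately would leave
            // the sender with a dangling pointer while sendTimingReq()
            // is still on its stack, so park it instead; reset() frees
            // the previously parked packet.
            pendingDelete.reset(pkt);
            return true;
        }
    };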
The CoherentXBar currently doesn't check its queued slave ports when
receiving a functional snoop. This caused data corruption in cases
where a modified cache line was forwarded between two caches.
Add the required functional calls into the queued slave ports.
This changeset adds a serial link model for the Hybrid Memory Cube (HMC).
SerialLink is a simple variation of the Bridge class, with the ability to
account for the latency of packet serialization. Also trySendTiming has been
modified to correctly model bandwidth.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch models a simple HMC Controller. It simply schedules the incoming
packets to HMC Serial Links using a round robin mechanism. This patch should
be applied in series with other patches modeling a complete HMC device.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Fix a bug in which the flash device would write out of bounds and
could either trigger a segfault or corrupt the memory of other
objects. This was caused by using pageSize in place of
pagesPerBlock when running the garbage collector.
Also, added an assert to flag this condition in the future.
This patch fixes the drain logic for the UFSHostDevice and the
FlashDevice. In the case of the FlashDevice, the logic for CheckDrain
needed to be reversed, whilst in the case of the UFSHostDevice check
drain was never being called. In both cases the system would never
complete draining if the initial attempt to drain failed.
This patch addresses the upgrading of deferred targets in the MSHR,
and makes it clearer by explicitly calling out what is happening
(deferred targets are promoted if we get exclusivity without asking
for it).
This patch adds explicit overrides as this is now required when using
"-Wall" with clang >= 3.5, the latter now part of the most recent
XCode. The patch consequently removes "virtual" for those methods
where "override" is added. The latter should be enough of an
indication.
As part of this patch, a few minor issues that clang >= 3.5 complains
about are also resolved (unused methods and variables).
This patch moves away from using M5_ATTR_OVERRIDE and the m5::hashmap
(and similar) abstractions, as these are no longer needed with gcc 4.7
and clang 3.1 as minimum compiler versions.
ARM uses UDelayEvents to emulate kernel __*udelay functions and speed up
simulation. UDelayEvents call Pseudoinst::quiesceNs to quiesce the system for
a specified delay. Changeset 10341:0b4d10f53c2d introduced the requirement
that any quiesce process that is started must also be completed by scheduling
an EndQuiesceEvent. This change causes the CPU to hang if an IsQuiesce
instruction is executed, but the corresponding EndQuiesceEvent is not
scheduled.
Changeset 11058:d0934b57735a introduces a fix for uses of PseudoInst::quiesce*
that would conditionally execute the EndQuiesceEvent. ARM UDelayEvents specify
quiesce period of 0 ns (src/arch/arm/linux/system.cc), so changeset 11058
causes these events to now execute full quiesce processes, greatly increasing
the total instructions executed in kernel delay loops and slowing simulation.
This patch updates the UDelayEvent to conditionally execute
PseudoInst::quiesceNs (a quiesce operation) only if the specified
delay is >0 ns. The result is that ARM delay loops no longer execute instructions
for quiesce handling, and regression time returns to normal.
The decoder is responsible for splitting instructions in micro
operations (uops). Given that different micro architectures may split
operations differently, this patch allows specifying which micro
architecture each ISA implements, so different cores in the system can
split instructions differently, also decoupling uop splitting
(microArch) from the ISA (Arch). This is done by making the decode
calls templates that receive a type 'DecoderFlavour' that maps the
name of the operation to the class that implements it. This way there
is only one selection point (converting the command line enum to the
appropriate DecodeFeatures object). In addition, there is no explicit
code replication: template instantiation hides that, and the compiler
should be able to resolve a number of things at compile-time.
Although some decent error messages were getting generated inside
isa_parser.py, they weren't always getting printed because of the
screwy way we were handling exceptions. (Basically an inner
exception would get hidden by an outer exception, and the more
informative inner error message would not get printed.)
Also line numbers were messed up, since they were taken from the
lexer, which is typically a token (or more) ahead of the grammar
rule that's being matched. Using the 'lineno' attribute that
PLY associates with the grammar production is more accurate.
The new LineTracker class extends lineno to track filenames as
well as line numbers.
This information is useful if you have a bunch of simulations running
and want to know which one to kill, but you've neglected to take
advantage of the previous patch and embed the pid in your output path.
These are packed single-precision approximate reciprocal operations,
vector and scalar versions, respectively.
This code was basically developed by copying the code for
sqrtps and sqrtss. The mrcp micro-op was simplified relative to
msqrt since there are no double-precision versions of this operation.
fild loads an integer value into the x87 top of stack register.
fucomi/fucomip compare two x87 register values (the latter
also doing a stack pop).
These instructions are used by some versions of GNU libstdc++.
The DTRACE() macro tests both Trace::enabled and the specific flag. This
change uses the same administrative interface for enabling/disabling
tracing, but masks the SimpleFlags settings directly. This eliminates a
load for every DTRACE() test, e.g. DPRINTF.
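Roughly, the change amounts to the following (a sketch with illustrative
names, not the exact trace infrastructure):

    // Before: every DTRACE() loads two globals and tests both.
    //   #define DTRACE(x) (Trace::enabled && Debug::x)
    //
    // After: the global enable is masked into each flag's status up
    // front, so the macro compiles down to a single load.
    struct SimpleFlag {
        bool _globalEnable = false;  // administrative on/off switch
        bool _status = false;        // this flag requested by the user
        bool _tracing = false;       // _globalEnable && _status, cached

        void sync() { _tracing = _globalEnable && _status; }
        bool tracing() const { return _tracing; }
    };

    #define DTRACE(flag) ((flag).tracing())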
In ARM, certain variables are only updated when a necessary change is
detected. Having 2 SMT threads share a TLB resulted in these not being
updated as required. This patch adds a thread context identifier to
assist in the invalidation of these variables.
Adds SMT support to the "simple" CPU models so that they can be
used with other SMT-supported CPUs. Example usage: this enables
the TimingSimpleCPU to be used to warmup caches before swapping to
detailed mode with the in-order or out-of-order based CPU models.
Trying to run an SE system with varying threads per core (SMT cores + Non-SMT
cores) caused failures due to the CPU id assignment logic. The comment
about thread assignment (worrying about core 0 not having tid 0) seems
not to be valid given that our configuration scripts initialize them in
order.
This removes that constraint so a heterogeneously threaded system can work.
If a cache entry permission was previously set to NotPresent, but the entry was
not deleted, a following cache allocation can cause the entry to be leaked by
setting the entry pointer to a newly allocated entry. To eliminate this
possibility, check if the new entry is different from the old one, and if so,
delete the old one.
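The fix reduces to the following pattern (types and the setter are
illustrative):

    struct CacheEntry { /* tag, data, permission, ... */ };

    // Only replace the slot if the new entry really is a different
    // object, and free the stale (e.g. NotPresent) entry first.
    void setEntry(CacheEntry *&slot, CacheEntry *entry)
    {
        if (slot != entry) {
            delete slot;
            slot = entry;
        }
    }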
IntDevice::recvResponse is called from two places in current mainline: (1) the
short circuit path of X86ISA::IntDevice::IntMasterPort::sendMessage for atomic
mode, and (2) the full request->response path to and from the x86 interrupts
device (finally called from MessageMasterPort::recvTimingResp). In the former
case, the packet was deleted correctly, but in the latter case, the packet and
request leak. To fix the leak, move request and packet deletion into IntDevice
inherited class implementations of recvResponse.
In RubyPort::ruby_eviction_callback, prior changes fixed a memory leak caused
by instantiating separate packets for each port that the eviction was forwarded
to. That change, however, left the instantiated request to also leak. Allocate
it on the stack to avoid the leak.
Recent changes to memory access queuing allocate requests for packets sent to
memory controllers, but did not free the requests. Delete them to avoid leaks.
Changes to the RubyMemoryControl removed the dequeue function, which deleted
MemoryNode instances. This results in leaked MemoryNode instances. Correctly
delete these instances.
The recent changeset to readlink() to handle reading the /proc/self/exe link
introduces a number of problems. This patch fixes two:
1) Because readlink() called on /proc/self/exe now uses LiveProcess::progName()
to find the binary path, it will only get the zeroth parameter of the simulated
system command line. However, if a config script also specifies the process'
executable, the executable parameter is used to create the LiveProcess rather
than the zeroth command line parameter. Thus, the zeroth command line parameter
is not necessarily the correct path to the binary executing in the simulated
system. To fix this, add a LiveProcess data member, 'executable', which is
correctly set during instantiation and returned from progName().
2) If a config script allows a user to pass a relative path as the zeroth
simulated system command line parameter or process executable, readlink() will
incorrectly return a relative path when called on '/proc/self/exe'.
/proc/self/exe is always set to a full path, so running benchmarks can fail if
a relative path is returned. To fix this, clean up the handling of
LiveProcess::progName() within readlink() to get the full binary path.
NOTE: This patch still leaves the potential problem that the host's full path
to the binary bleeds into the simulated system, potentially causing the
appearance of non-deterministic simulated system execution.
This patch fixes a use-after-delete issue in the packet probe points
by adding a PacketInfo struct to retain the key fields before passing
the packet onwards. We want to probe the packet after it is
successfully sent, but by that time the fields may be modified, and
the packet may even be deleted.
Amazingly enough the issue has gone undetected for months, and only
recently popped up in our regressions.
This patch fixes issues in the interactions between deferred snoops
and WriteLineReq. More specifically, the patch addresses an issue
where deferred snoops caused assertion failures when being serviced on
the arrival of an InvalidateResp. The response packet was perceived to
be invalidating, when actually it is not for the cache that sent out
the original invalidation request.
This patch changes the tracking of ports in the snoop filter to use
local dense port IDs so that we can have 64 snooping ports (rather
than crossbar slave ports). This is achieved by adding a simple
remapping vector that translates the actual port IDs into the local
slave IDs used in the SnoopMask.
Ultimately this patch allows us to scale to much larger systems
without introducing a hierarchy of crossbars.
This patch adds a snoop filter to the L2XBar. For now we refrain from
globally adding a snoop filter to the SystemXBar, since the latter is
also used in systems without caches. In scenarios without caches the
snoop filter will not see any writeback/clean evicts from the CPU
ports, despite the fact that they are snooping. To avoid inadvertent
use of the snoop filter in these cases we leave it out for now.
A size check is added to the snoop filter, merely to ensure it does
not grow beyond the total capacity of the caches above it. The size
has to be set manually, and a value of 8 MByte is chosen as a suitably
high default.
This patch introduces a private member storing the iterator from the
lookupRequest call, such that it can be re-used when the request
eventually finishes. The method previously called updateRequest is
renamed finishRequest to make it more clear that the two functions
must be called together.
This patch mirrors the logic in timing mode which sends up snoops to
check for cached copies before sending CleanEvicts and Writebacks down
the memory hierarchy. In case there is a copy in a cache above,
discard CleanEvicts and set the BLOCK_CACHED flag in Writebacks so
that writebacks do not reset the cache residency bit in the snoop
filter below.
This patch adds the functionality to properly track CleanEvicts and
Writebacks in the snoop filter. Previously there were no CleanEvicts, and
Writebacks did not send up snoops to ensure there were no copies in
caches above. Hence a writeback could never erase an entry from the
snoop filter.
When a CleanEvict message reaches a snoop filter, it confirms that the
BLOCK_CACHED flag is not set and resets the bits corresponding to the
CleanEvict address and port it arrived on. If none of the other peer
caches have (or have requested) the block, the snoop filter forwards
the CleanEvict to lower levels of memory. In case of a Writeback
message, the snoop filter checks if the BLOCK_CACHED flag is not set
and only then resets the bits corresponding to the Writeback
address. If any of the other peer caches have (or have requested) the
same block, the snoop filter sets the BLOCK_CACHED flag in the
Writeback before forwarding it to lower levels of the memory hierarchy.
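The rules amount to the following per-entry logic (a sketch; the SnoopMask
type matches later patches in this series, the helper API is assumed):

    #include <bitset>
    #include <cassert>
    #include <cstddef>

    enum class Cmd { CleanEvict, WritebackDirty };
    struct Packet { Cmd cmd; bool blockCached = false; };
    using SnoopMask = std::bitset<64>;   // one bit per snooping port

    void forwardDown(Packet &) { /* to the next memory level */ }

    void handleEviction(Packet &pkt, SnoopMask &holders, size_t port)
    {
        if (pkt.cmd == Cmd::CleanEvict) {
            assert(!pkt.blockCached);   // upward snoops found no copy
            holders.reset(port);
            if (holders.none())
                forwardDown(pkt);       // otherwise drop the evict here
        } else {  // WritebackDirty
            if (!pkt.blockCached)
                holders.reset(port);
            if (holders.any())
                pkt.blockCached = true; // a peer still caches the line
            forwardDown(pkt);           // writebacks always continue down
        }
    }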
This patch prevents the snoop filter from creating items for requests
originating from non-snooping ports. The allocation decision is thus
based both on the cacheability of the line, and the snooping status of
the source port. Ultimately we should check if the source of the
packet is caching, since also the CPU ports are snooping (but not
allocating). Thus, at the moment we rely on the snoop filter being
used together with caches.
The patch also transitions to using Packet::getBlockAddr in
determining the line address.
This patch introduces the concept of a snoop latency. Given the
requirement to snoop and forward packets in zero time (due to the
coherency mechanism), the latency is accounted for later.
On a snoop, we establish the latency, and later add it to the header
delay of the packet. To allow multiple caches to contribute to the
snoop latency, we use a separate variable in the packet, and then take
the maximum before adding it to the header delay.
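In code, the accumulation pattern looks roughly like this (field names
follow the description; the surrounding classes are elided):

    #include <algorithm>
    #include <cstdint>

    using Tick = uint64_t;
    struct Packet { Tick headerDelay = 0; Tick snoopDelay = 0; };

    // Each snooped cache records its lookup latency; caches snoop in
    // parallel, so the maximum is kept rather than the sum.
    void recordSnoopLatency(Packet &pkt, Tick lookupLatency)
    {
        pkt.snoopDelay = std::max(pkt.snoopDelay, lookupLatency);
    }

    // The crossbar later folds the collected snoop delay into the
    // header delay, paying it exactly once.
    void accountSnoopLatency(Packet &pkt)
    {
        pkt.headerDelay += pkt.snoopDelay;
        pkt.snoopDelay = 0;
    }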
This patch ensures that the snoop-filter latency only contributes to
the packet latency, and not to the crossbar throughput/occupancy. In
essence we treat the snoop-filter lookup as pipelined.
Created the following HBM configurations:
1) HBM gen1 (x128/CH), 2Gb die, 4H stack, 1Gbps, 8 channels
2) HBM gen2 (x64/PC), 8Gb die, 4H stack, 1Gbps, 16 pseudo-channels
The configuration values are based on:
- The HBM gen1 public JEDEC spec
- Publicly released data from MemCon presentations
- Timing extrapolated from existing LPDDR configurations
Will adjust once specs become available.
Changeset 4872dbdea907 replaced Address by Addr, but did not make changes to
print statements. So the addresses which were being printed in hex earlier,
along with their line address, were now being printed in decimal. This patch
adds a function printAddress(Addr) that can be used to print the address in
hex along with the line address. This function has been put to use in some
places; elsewhere, the code has been changed to print just the address in
hex.
The DataMember class in Type.py was being derived from PairContainer. A
separate Var object was also created for the DataMember. This meant some
duplication across the members of these two classes (Var and DataMember).
This patch derives DataMember from Var instead. There is no obvious reason to
derive from PairContainer which can only hold pairs, something that Var class
already supports. The only thing that DataMember has over Var is init_code,
which is being retained. This change would later on help in having pointers
in DataMembers.
Some blocks in MOESI hammer were not getting deallocated when they were set to
an idle state (e.g. by invalidate or other_getx/s messages). While
functionally correct, this caused some bad effects on performance, such as
blocks in I in the L1s getting sent to the L2 upon eviction, in turn evicting
valid blocks. Also, if a valid block was in LRU, that block could be evicted
rather than a block in I. This patch adds in the missing deallocations.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
The recent changes to make MessageBuffers SimObjects required them to be
initialized in a particular order, which could break some protocols. Fix this
by calling initNetQueues on the external nodes of each external link in the
constructor of Network.
This patch also refactors the duplicated code for checking network allocation
and setting net queues (which are called by initNetQueues) from the simple and
garnet networks to be in Network.
This patch changes MessageBuffer and TimerTable, two structures used for
buffering messages by components in ruby. These structures would no longer
maintain pointers to clock objects. Functions in these structures have been
changed to take as input current time in Tick. Similarly, these structures
will not operate on Cycle valued latencies for different operations. The
corresponding functions would need to be provided with these latencies by
components invoking the relevant functions. These latencies should also be
in Ticks.
I felt the need for these changes while trying to speed up ruby. The ultimate
aim is to eliminate Consumer class and replace it with an EventManager object in
the MessageBuffer and TimerTable classes. This object would be used for
scheduling events. The event itself would contain information on the object and
function to be invoked.
In hindsight, it seems I should have done this while I was moving away from use
of a single global clock in the memory system. That change led to introduction
of clock objects that replaced the global clock object. It never crossed my
mind that having clock object pointers is not a good design. And now I really
don't like the fact that we have separate consumer, receiver and sender
pointers in message buffers.
The eventual aim of this change is to pass RubySystem pointers through to
objects generated from the SLICC protocol code.
Because some of these objects need to dereference their RubySystem pointers,
they need access to the System.hh header file.
In src/mem/ruby/SConscript, the MakeInclude function creates single-line header
files in the build directory that do nothing except include the corresponding
header file from the source tree.
However, SLICC also generates a list of header files from its symbol table, and
writes it to mem/protocol/Types.hh in the build directory. This code assumes
that the header file name is the same as the class name.
The end result of this is that many of the generated SLICC files try to
include RubySystem.hh, when the file they really need is System.hh. The path
of least resistance is just to rename System.hh to RubySystem.hh.
--HG--
rename : src/mem/ruby/system/System.cc => src/mem/ruby/system/RubySystem.cc
rename : src/mem/ruby/system/System.hh => src/mem/ruby/system/RubySystem.hh
This register is writable according to UA2005.
Tried to boot NetBSD, which starts the kernel by writing to the tick_cmpr
register. Without the patch gem5 crashes with a panic. With the patch NetBSD
starts to boot normally (although sun4v support in NetBSD is not complete yet).
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Handle a bad IDE disk image of size 0. When the image size is 0, gem5 crashes
with a 'Floating point exception (core dumped)'.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
When a branch gets squashed, its speculative branch predictor state should get
rolled back in squash(). However, only the globalHistory state was being
rolled back. This patch adds (at least some) support for rolling back the
local predictor state also.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch enables instructions in LSQ to track two physical addresses for
corresponding two split requests. Later, the information is used in
checksnoop() to search for/invalidate the corresponding LD instructions.
The current implementation has kept track of only the physical address that is
referenced by the first split request. Thus, for checksnoop(), the line
accessed by the second request has not been considered, causing potential
correctness issues.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Refactored the code in operateVnet(), moved partly to a new function
operateMessageBuffer(). This is required since a later patch moves to having a
wakeup event per MessageBuffer instead of one event for the entire Switch.
There are two reasons for doing so:
a. To provide a source of clock to the PerfectSwitch. A follow-on patch
removes the sender and receiver pointers from MessageBuffer, which means that
the object owning the buffer should have some way of providing timing info.
b. To schedule events. A follow-on patch removes the Consumer class, so the
PerfectSwitch needs some EventManager object to schedule events on its own.
Add a stat that counts buffer underruns in the HDLCD controller. The
stat counts at most one underrun per frame since the controller aborts
the current frame if it underruns.
Rewrite the HDLCD controller to use the new DMA engine and pixel
pump. This fixes several bugs and limitations in the current implementation:
* Broken/missing interrupt support (VSync, underrun, DMA end)
* Fragile resolution changes (changing resolutions used
to cause assertion errors).
* Support for resolutions with a width that isn't divisible by 32.
* The pixel clock can now be set dynamically.
This breaks checkpoint compatibility. Checkpoints can be upgraded with
the checkpoint conversion script. However, upgraded checkpoints won't
contain the state of the current frame. That means that HDLCD
controllers restoring from a converted checkpoint immediately start
drawing a new frame (i.e., expect timing differences).
Currently the sequencer calls the function setMRU that updates the replacement
policy structures with the first level caches. While functionally this is
correct, the problem is that this requires calling findTagInSet() which is an
expensive function. This patch removes the calls to setMRU from the sequencer.
All controllers should now update the replacement policy on their own.
The set and the way index for a given cache entry can be found within the
AbstractCacheEntry structure. Use these indices to update the replacement
policy structures.
The current Set data structure is slow and therefore is being reimplemented
using std::bitset. A maximum limit of 64 is being set on the number of
controllers of each type. This means that for simulating a system with more
controllers of a given type, one would need to change the value of the variable
NUMBER_BITS_PER_SET.
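A sketch of the reimplemented structure (method names are illustrative;
the fixed bitset width is what imposes the 64-controller ceiling):

    #include <bitset>
    #include <cstddef>

    constexpr std::size_t NUMBER_BITS_PER_SET = 64;

    class Set {
        std::bitset<NUMBER_BITS_PER_SET> bits;
      public:
        void add(std::size_t id)              { bits.set(id); }
        void remove(std::size_t id)           { bits.reset(id); }
        bool isElement(std::size_t id) const  { return bits.test(id); }
        std::size_t count() const             { return bits.count(); }
    };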
MessageBuffer is a SimObject now. There were protocols that still declared
some of the message buffers as variables of the controller, but not as input
parameters. Special handling was required for these variables in the SLICC
compiler. This patch changes this. Now all message buffers are declared as
input parameters.
In cases where a newly added target does not have any upstream MSHR to
mark as downstreamPending, remember that nothing is marked. This
allows us to avoid attempting to find the MSHR as part of the clearing
of downstreamPending.
This commit addresses gem5 checkpoints' linear versioning bottleneck.
Since development is distributed across many private trees, there exists
a sort of 'race' for checkpoint version numbers: internally a checkpoint
version may be used but then resynchronizing with the external tree causes
a conflict on that version. This change replaces the linear version number
with a set of unique strings called tags. Now the only conflicts that can
arise are of tag names, where collisions are much easier to avoid.
The checkpoint upgrader (util/cpt_upgrader.py) upgrades the version
representation, as one would expect. Each tag version implements its
upgrader code in a python file in the util/cpt_upgraders directory
rather than adding a function to the upgrader script itself.
The version tags are stored in the 'Globals' section rather than 'root'
(as the version was previously) because 'Globals' gets unserialized
first and can provide a warning before any other unserialization errors
can occur.
This is in support of tag-based checkpoint versioning. It should be
possible to examine an optional parameter in a checkpoint during
unserialization and not have it throw a warning.
We no longer use the C library based random number generator random().
Instead we use the RNG provided by the C++ library. So setting the random
seed for the RubySystem class has no effect. Hence the variable and the
corresponding option are being dropped.
Event auto-serialization is no longer in use and has been broken ever
since the introduction of PDES support almost two years
ago. Additionally, serializing the individual event queues is
undesirable since it exposes the thread structure of the
simulator. What this means in practice is that the number of threads
in the simulator must be the same when taking a checkpoint and when
loading the checkpoint.
This changeset removes support for the AutoSerialize event flag and
the associated serialization code.
EtherLink currently uses a fire-and-forget link delay event that
delays sending of packets by a fixed number of ticks. In order to
serialize this event, it relies on the event queue's auto
serialization support. However, support for event auto serialization
has been broken for more than two years, which means that checkpoints
of multi-system setups are likely to drop in-flight packets.
This changeset rewrites this part of the EtherLink to use
a packet queue instead. The queue contains (tick, packet) tuples, where the
tick indicates when the packet will be ready. Instead of relying on
event autoserialization, we now explicitly serialize the packet queue
in the EtherLink::Link class.
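The replacement structure is roughly the following (a sketch; gem5's
actual Link class and serialization hooks are elided):

    #include <cstdint>
    #include <deque>
    #include <memory>
    #include <utility>

    using Tick = uint64_t;
    struct EthPacket { /* frame bytes elided */ };
    using EthPacketPtr = std::shared_ptr<EthPacket>;

    struct Link {
        // In-flight packets wait in an explicit queue of
        // (ready tick, packet) pairs that can be walked and serialized,
        // instead of living inside scheduled events.
        std::deque<std::pair<Tick, EthPacketPtr>> txQueue;

        void transmit(EthPacketPtr pkt, Tick now, Tick linkDelay)
        {
            txQueue.emplace_back(now + linkDelay, pkt);
        }

        void processQueue(Tick now)
        {
            while (!txQueue.empty() && txQueue.front().first <= now) {
                // deliver txQueue.front().second to the peer interface
                txQueue.pop_front();
            }
        }
    };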
Note that this changeset changes the way in-flight packets are
serialized. Old checkpoints will still load, but in-flight packets
will be dropped (just as before). There has been no attempt to upgrade
checkpoints since this would actually change the behavior of existing
checkpoints.
This changeset removes the support for the autoserialize parameter in
GlobalSimLoopExitEvent (including exitSimLoop()) and
LocalSimLoopExitEvent.
Auto-serialization of the LocalSimLoopExitEvent was never used, so
this is not expected to affect anything. However, it was sometimes
used for GlobalSimLoopExitEvent. Unfortunately, serialization of
global events has never been supported, so checkpoints with such
events will currently cause simulation panics.
The serialize parameter to exitSimLoop() has been left in-place to
maintain API compatibility (removing it would affect m5ops). Instead
of just dropping it, we now print a warning if the parameter is set
and the exit event is scheduled in the future (i.e., not at the
current tick).
The object resolver isn't serialization specific and shouldn't live in
serialize.hh. Move it to sim_object.hh since it queries the
SimObject hierarchy.
This member indicates whether or not a particular virtual network is in use.
Instead of having a big default value for the number of virtual networks and
then checking whether a virtual network is in use, the next patch removes the
default value; the protocol configuration file will now specify the
number of virtual networks it requires.
Additionally, the patch refactors some of the code used for computing the
next virtual channel in the round-robin order.
Both FuncCallExprAST and MethodCallExprAST had code for checking the arguments
with which a function is being called. The patch does away with this
duplication. Now the code for checking function call arguments resides in the
Func class.
The new serialization code (kudos to Tim Jones) moves all of the state
mangling in RubySystem to memWriteback. This makes it possible to use
the new const serialization interface.
This changeset moves the cache recorder cleanup from the checkpoint()
method to drainResume() to make checkpointing truly constant and
updates the checkpointing code to use the new interface.