The Topology class in Ruby does not need to inherit from the SimObject class.
This patch turns it into a regular class. The topology object is now created
in the constructor of the Network class. All the parameters for the topology
class have been moved to the network class.
The default cache configuration script currently imports the O3_ARM_v7a
model configuration, which depends on the O3 CPU. This breaks if gem5
has been compiled without O3 support. This changeset removes the
dependency by only importing the model if it is requested by the
user. As a bonus, it actually removes some code duplication in the
configuration scripts.
CPU switching consists of the following steps:
1. Drain the system
2. Switch out old CPUs (cpu.switchOut())
3. Change the system timing mode to the mode the new CPUs require
4. Flush caches if switching to hardware virtualization
5. Inform new CPUs of the handover (cpu.takeOverFrom())
6. Resume the system
m5.switchCpus() previously performed only steps 2 and 5. Since
information about the new processors' memory system requirements is
now exposed, all of the steps above are now performed.
This patch adds automatic memory system switching and flush (if
needed) to switchCpus(). Additionally, it adds optional draining to
switchCpus(). This has the following implications:
* changeToTiming and changeToAtomic are no longer needed, so they have
been removed.
* changeMemoryMode is only used internally, so it has been renamed
to be private.
* switchCpus requires a reference to the system containing the CPUs as
its first parameter.
WARNING: This changeset breaks compatibility with existing
configuration scripts since it changes the signature of
m5.switchCpus().
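For illustration, a configuration script might now call it roughly as
follows (a sketch; the pairing of old and new CPUs and the do_drain
keyword are assumptions based on the description above):

    import m5

    # Pair each outgoing CPU with its replacement (hypothetical names).
    switch_pairs = list(zip(system.cpu, system.switch_cpus))

    # New signature: the system containing the CPUs comes first;
    # draining is now optional and handled internally.
    m5.switchCpus(system, switch_pairs, do_drain=True)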
The CPUs supported by the configuration scripts used to be
hard-coded. This was not ideal for several reasons. For example, the
configuration scripts depend on all CPU models even though only a
subset might have been compiled.
This changeset adds a new module to the configuration scripts that
automatically discovers the available CPU models from the compiled
SimObjects. As a nice bonus, the use of introspection allows us to
automatically generate a list of available CPU models suitable for
printing. This list is augmented with the Python doc string from the
underlying class if available.
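A minimal sketch of the kind of introspection such a module could
perform (the helper names here are hypothetical, not the actual module
API):

    import inspect
    import m5.objects
    from m5.objects import BaseCPU

    def available_cpu_models():
        """Discover compiled-in CPU models by inspecting m5.objects."""
        return {name: cls
                for name, cls in inspect.getmembers(m5.objects,
                                                    inspect.isclass)
                if issubclass(cls, BaseCPU) and cls is not BaseCPU}

    def cpu_model_list():
        """Printable list of models, augmented with their doc strings."""
        return "\n".join("%-20s %s" % (name, inspect.getdoc(cls) or "")
                         for name, cls
                         in sorted(available_cpu_models().items()))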
The configuration scripts currently hard-code the requirements of each
CPU. This is clearly not optimal as it makes writing new configuration
scripts painful and adding new CPU models requires existing scripts to
be updated. This patch adds the following class methods to the base
CPU and all relevant CPUs:
* memory_mode -- Return a string describing the current memory mode
(invalid/atomic/timing).
* require_caches -- Does the CPU model require caches?
* support_take_over -- Does the CPU support CPU handover?
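As an illustration, the methods might look roughly like this for an
atomic CPU model (a sketch of the interface, not the exact
implementation):

    from m5.objects import BaseCPU

    class AtomicSimpleCPU(BaseCPU):
        @classmethod
        def memory_mode(cls):
            # Atomic CPUs require an atomic memory system.
            return 'atomic'

        @classmethod
        def require_caches(cls):
            # Atomic accesses bypass the caches.
            return False

        @classmethod
        def support_take_over(cls):
            return True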
The run() method in Simulation.py used to call sys.exit() when the
simulator exits. This is undesirable when the user has requested the
simulator to be run in interactive mode since it causes the simulator
to exit rather than entering the interactive Python environment.
This patch moves the default DRAM parameters from the SimpleDRAM class
to two different subclasses, one for DDR3 and one for LPDDR2. More can
be added as we go forward.
The regressions that previously used the SimpleDRAM are now using
SimpleDDR3 as this is the most similar configuration.
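The pattern is plain subclassing in the Python configuration; a
sketch, with illustrative rather than authoritative timing values:

    class SimpleDDR3(SimpleDRAM):
        # DDR3-like timings (illustrative values).
        tRCD = '13.75ns'
        tCL  = '13.75ns'
        tRP  = '13.75ns'

    class SimpleLPDDR2(SimpleDRAM):
        # LPDDR2-like timings (illustrative values).
        tRCD = '15ns'
        tCL  = '15ns'
        tRP  = '15ns'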
This patch moves the branch predictor files in the o3 and inorder directories
to src/cpu/pred. This allows sharing the branch predictor across different
cpu models.
This patch was originally posted by Timothy Jones in July 2010
but never made it to the repository.
--HG--
rename : src/cpu/o3/bpred_unit.cc => src/cpu/pred/bpred_unit.cc
rename : src/cpu/o3/bpred_unit.hh => src/cpu/pred/bpred_unit.hh
rename : src/cpu/o3/bpred_unit_impl.hh => src/cpu/pred/bpred_unit_impl.hh
rename : src/cpu/o3/sat_counter.hh => src/cpu/pred/sat_counter.hh
This patch moves the controllers to be children of the ruby_system instead of
'system' under the python object hierarchy. This is so that these objects
can inherit some of the ruby_system's parameter values without resorting to
calling a global system pointer during run-time.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Used as a command in full-system scripts, m5 fail helps the user ensure
that the benchmarks have finished successfully.
For example, one can use:
/path/to/benchmark args || /sbin/m5 fail 1
and thus ensure gem5 will exit with an error if the benchmark fails.
The defer_registration parameter is used to prevent a CPU from
initializing at startup, leaving it in the "switched out" mode. The
name of this parameter (and the help string) is confusing. This patch
renames it to switched_out, which should be more descriptive.
This patch generalises the address range resolution for the I/O cache
and I/O bridge such that they do not assume a single memory. The patch
involves adding a parameter to the system which is then defined based
on the memories that are to be visible from the I/O subsystem, whether
behind a cache or a bridge.
The change is needed to allow interleaved memory controllers in the
system.
The ISA class stores the contents of ID registers on many
architectures. In order to make reset values of such registers
configurable, we make the class inherit from SimObject, which allows
us to use the normal generated parameter headers.
This patch introduces a Python helper method, BaseCPU.createThreads(),
which creates a set of ISAs for each of the threads in an SMT
system. Although it is currently only needed when creating
multi-threaded CPUs, it should always be called before instantiating
the system as this is an obvious place to configure ID registers
identifying a thread/CPU.
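Typical use in a configuration script (sketch):

    # Create the per-thread ISAs before instantiating the system.
    for cpu in system.cpu:
        cpu.createThreads()
    m5.instantiate()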
The directed tester used to support generating either only read or only
write accesses. This patch modifies the tester to support streams that
contain both read and write accesses.
globalHistoryBits, globalPredictorSize, and choicePredictorSize are decoupled.
globalHistoryBits controls how much history is kept, global and choice
predictor sizes control how much of that history is used when accessing
predictor tables. This way, global and choice predictors can actually be
different sizes, and it is no longer possible to walk off the predictor arrays
and cause a seg fault.
There are now individual thresholds for choice, global, and local saturating
counters, so that taken/not taken decisions are correct even when the
predictors' counters' sizes are different.
The interface for localPredictorSize has been removed from TournamentBP because
the value can be calculated from localHistoryBits.
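Conceptually, the decoupling means each index is masked to its own
table's size, for example (a Python illustration of the C++ logic, not
the actual code):

    global_history_bits   = 13
    global_predictor_size = 8192  # entries; independent of history bits
    choice_predictor_size = 4096  # may differ from the global size

    history_mask   = (1 << global_history_bits) - 1
    global_history = 0x1abc & history_mask  # example history register

    # Mask each index to its own table size, so lookups can never
    # walk off the end of the predictor arrays.
    global_idx = global_history & (global_predictor_size - 1)
    choice_idx = global_history & (choice_predictor_size - 1)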
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
There is no point in exporting the old drain() method in
Simulate.py. It should only be used internally by doDrain(). This
patch moves the old drain() method into doDrain() and renames
doDrain() to drain().
Changeset 4f54b0f229b5 removed the call to doDrain in changeToTiming
based on the assumption that the system does not need draining when
running in atomic mode. This is a false assumption since at least the
System class requires the system to be drained before it allows
switching of memory modes. This patch reverts that part of the
changeset.
This patch unifies the L1 and L2 caches used throughout the
regressions instead of declaring different, but very similar,
configurations in the different scripts.
The patch also changes the default L2 configuration to match what it
used to be for the fs and se scripts (until the last patch that
updated the regressions to also make use of the cache config). The
MSHRs and targets per MSHR are now set to a more realistic default of
20 and 12, respectively.
As a result of both the aforementioned changes, many of the regression
stats are changed. A follow-on patch will bump the stats.
This patch favours using SimpleDRAM with the default timing instead of
SimpleMemory for all regressions that involve the o3 or inorder CPU,
or are full system (in other words, where the actual performance of
the memory is important for the overall performance).
Moving forward, the solution for FSConfig and the users of fs.py and
se.py is probably something similar to what we use to choose the CPU
type. I envision a few pre-set configurations, SimpleLPDDR2,
SimpleDDR3, etc., that can be chosen by a dram_type option. Feedback on
this part is welcome.
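A hypothetical sketch of what such an option could look like (the
dram_type option and the class lookup are assumptions, mirroring how
the CPU type is chosen):

    # Map the option value to a memory class (names are hypothetical).
    dram_classes = {
        'simple_ddr3':   SimpleDDR3,
        'simple_lpddr2': SimpleLPDDR2,
    }
    system.physmem = dram_classes[options.dram_type](
        range=AddrRange('512MB'))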
This patch changes plenty of stats and adds all the DRAM controller
related stats. A follow-on patch updates the relevant statistics. The
total run-time for the entire regression goes up by ~5% with this
patch due to the added complexity of the SimpleDRAM model. This is a
conscious trade-off to ensure that the model is properly tested.
This patch uses the common L1, L2 and IOCache configuration for the
regressions that all share the same cache parameters. There are a few
regressions that use a slightly different configuration (memtest,
o3-timing-mp, simple-atomic-mp and simple-timing-mp), and these
are not changed in this patch. They will be updated in a future patch.
The common cache configurations are changed to match the ones used in
the regressions, and are slightly changed with respect to what they
were. Hopefully this means we can converge on a common base
configuration, used both in the normal user configurations and
regressions.
As only regressions that already shared the same cache configuration
are updated, no regression statistics are affected.
This patch adds support to different entities in the ruby memory system
for more reliable functional read/write accesses. Only the simple network
has been augmented as of now. Later on Garnet will also support functional
accesses.
The patch adds functional access code to all the different types of messages
that protocols can send around. These messages are functionally accessed
by going through the buffers maintained by the network entities.
The patch also rectifies some of the bugs found in coherence protocols while
testing the patch.
With this patch applied, functional writes always succeed. But functional
reads can still fail.
This patch changes the cache-related latencies from an absolute time
expressed in Ticks, to a number of cycles that can be scaled with the
clock period of the caches. Ultimately this patch serves to enable
future work that involves dynamic frequency scaling. As an immediate
benefit it also makes it more convenient to specify cache performance
without implicitly assuming a specific CPU core operating frequency.
The stat blocked_cycles, which actually counted in ticks, is now
updated to count in cycles.
As the timing is now rounded to the clock edges of the cache, there
are some regressions that change. Plenty of them have very minor
changes, whereas some regressions with a short run-time are perturbed
quite significantly. A follow-on patch updates all the statistics for
the regressions.
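In the configuration scripts the change looks roughly like this (a
sketch; the parameter name is an assumption):

    # Old style: latency as an absolute time, implicitly assuming a
    # particular core frequency.
    #   system.l1 = BaseCache(size='32kB', assoc=2, latency='1ns')

    # New style: latency in cycles, scaled by the cache's own clock.
    system.l1 = BaseCache(size='32kB', assoc=2,
                          hit_latency=2)  # cycles, not ticks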
This patch changes the CoherentBus between the L1s and L2 to use the
CPU clock and also four times the width compared to the default
bus. The parameters are not intended to fit every single scenario,
but rather serve as a better starting point than what we previously
had.
Note that the scripts that do not use the addTwoLevelCacheHierarchy are
not affected by this change.
A separate patch will update the stats.
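Sketched in configuration terms (the port name and the default width
are assumptions):

    # Bus between the L1s and the L2: runs at the CPU clock and is
    # four times as wide as the default bus.
    cpu.toL2Bus = CoherentBus(clock=cpu.clock,
                              width=32)  # 4x an assumed 8-byte default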
The memtest.py script used to connect the system port directly to the
SimpleMemory, but the latter is now single-ported. Since the system
port is not used for anything in this particular example, a quick fix
is to attach it to the functional bus instead.
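The quick fix, sketched (the port names are assumptions based on the
script's functional bus):

    # Attach the otherwise unused system port to the functional bus
    # instead of directly to the single-ported SimpleMemory.
    system.system_port = system.funcbus.slave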
In the current caches the hit latency is paid twice on a miss. This
patch lets a configurable response latency be set on the cache for the
backward path.
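In configuration terms (a sketch; the parameter names are
assumptions):

    # The forward path pays the hit latency; the backward (response)
    # path now pays a separately configurable response latency.
    system.l2 = BaseCache(size='1MB', assoc=8,
                          hit_latency=12,
                          response_latency=12)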
This patch simplifies the Range object hierarchy in preparation for an
address range class that also allows striping (e.g. selecting a few
bits as matching in addition to the range).
To extend the AddrRange class to an AddrRegion, the first step is to
simplify the hierarchy such that we can make it as lean as possible
before adding the new functionality. The only class using Range and
MetaRange is AddrRange, and the three classes are now collapsed into
one.
In order to ensure correct functionality when switching CPUs, the TLB
walker ports must be connected to the Ruby system in x86 simulations.
This fixes x86 assertion failures complaining that the TLB walker
ports are not connected during the CPU switching process.
When switching from an atomic CPU to any of the timing CPUs, a drain is
unnecessary since no events are scheduled in atomic mode. However, when
trying to switch CPUs starting with a timing CPU, there may be events
scheduled. This change ensures that all events are drained from the system
by calling m5.drain before switching CPUs.
This patch allows for specifying multiple programs via the command
line. It also adds an option for specifying whether to use SMT.
However, SMT does not work with the o3 CPU as of now.
This patch removes the NACKing in the bridge, as the split
request/response busses now ensure that protocol deadlocks do not
occur, i.e. the message-dependency chain is broken by always allowing
responses to make progress without being stalled by requests. The
NACKs had limited support in the system with most components ignoring
their use (with a suitable call to panic), and as the NACKs are no
longer needed to avoid protocol deadlocks, the cleanest way is to
simply remove them.
The bridge is the starting point as this is the only place where the
NACKs are created. A follow-up patch will remove the code that deals
with NACKs in the endpoints, e.g. the X86 table walker and DMA
port. Ultimately the type of packet can be completely removed (until
someone sees a need for modelling more complex protocols, which can
now be done in parts of the system since the port and interface is
split).
As a consequence of the NACK removal, the bridge now has to send a
retry to a master if the request or response queue was full on the
first attempt. This change also makes the bridge ports very similar to
QueuedPorts, and a later patch will change the bridge to use these. A
first step in this direction is taken by aligning the name of the
member functions, as done by this patch.
A bit of tidying up has also been done as part of the simplifications.
Surprisingly, this patch has no impact on any of the
regressions. Hence, no NACKs were ever issued. In a follow-up
patch I would suggest changing the size of the bridge buffers set in
FSConfig.py to also test the situation where the bridge fills up.
This patch fixes the checkpointing by ensuring that the directory is
passed to the scriptCheckpoints function, and that num_checkpoints
is not used before it is initialised.