Names of DRAM configurations were updated to reflect both
the channel and device data width.
The previous naming format was:
<DEVICE_TYPE>_<DATA_RATE>_<CHANNEL_WIDTH>
The following nomenclature is now used:
<DEVICE_TYPE>_<DATA_RATE>_<n>x<w>
where n = the number of devices per rank on the channel
      w = the device width
The total channel width can be calculated as n*w.
Example:
A 64-bit DDR4-2400 channel consisting of 4-bit devices:
n = 16
w = 4
The resulting configuration name is:
DDR4_2400_16x4
Updated scripts to match the new naming convention.
Added unique DDR4 configurations for:
1) 16x4
2) 8x8
3) 4x16
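As a small worked illustration in Python (values taken from the example above):
  n, w = 16, 4                       # devices per rank, device width
  channel_width = n * w              # 16 * 4 = 64-bit channel
  name = "DDR4_2400_%dx%d" % (n, w)  # -> "DDR4_2400_16x4"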
Change-Id: Ibd7f763b7248835c624309143cb9fc29d56a69d1
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
This patch adds the ability for an application to request dist-gem5 to begin/
end synchronization using an m5 op. When toggling on sync, all nodes agree
on the next sync point based on the maximum of all nodes' ticks. CPUs are
suspended until the sync point to avoid sending network messages until sync has
been enabled. Toggling off sync acts like a global execution barrier, where
all CPUs are disabled until every node reaches the toggle off point. This
avoids tricky situations such as one node hitting a toggle off followed by a
toggle on before the other nodes hit the first toggle off.
This patch breaks out the most basic configuration options into a set
of base options, so that they can also be used by scripts that do not
involve any ISA, and thus no actual CPUs or devices.
The patch also fixes a few modules so that they can be imported in a
NULL build, and avoids dragging in FSConfig every time Options is
imported.
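A minimal sketch of the intended use for such a NULL-build script (the helper
name here is an assumption, not taken from this description):
  import optparse
  import Options

  parser = optparse.OptionParser()
  Options.addNoISAOptions(parser)    # base options only; no CPUs or devices
  (options, args) = parser.parse_args()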
This patch adds changes to the configuration scripts to support elastic
tracing and replay.
The patch adds a command line option to enable elastic tracing in SE mode
and FS mode. When enabled, the Elastic Trace CPU probe is attached to the
O3CPU and a few O3 CPU parameters are tuned. The Elastic Trace probe writes
out both instruction fetch and data dependency traces. The patch also enables
configuring the TraceCPU to replay traces using the SE and FS scripts.
The replay run is designed to resume from a checkpoint using the atomic CPU
to restore state, keeping it consistent with the FS run flow. It then
switches to the TraceCPU to replay the input traces.
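An example invocation might look as follows (the exact option and file names
are assumptions, not given by this description):
  se.py -c <workload> --cpu-type=detailed --elastic-trace-en \
        --inst-trace-file=fetchtrace.proto.gz \
        --data-trace-file=deptrace.proto.gz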
Add support for automatically discovering available platforms. The
Python side uses functionality similar to what we use when
auto-detecting available CPU models. The machine IDs have been updated
to match the platform configurations. If there isn't a matching
machine ID, the configuration scripts default to -1, which Linux uses
for device-tree-only platforms.
Transaction Level Modeling (TLM2.0) is widely used in industry for creating
virtual platforms (IEEE 1666 SystemC). This patch contains a standard-compliant
implementation of an external gem5 port that enables the usage of gem5 as a
TLM initiator component in SystemC-based virtual platforms. Both TLM coding
paradigms, loosely timed (b_transport) and approximately timed (nb_transport),
are supported.
Compared to the original patch, a TLM memory manager was added. Furthermore,
the transaction object was removed, and for each TLM payload a PacketPointer
that points to the original gem5 packet is added as a TLM extension. For event
handling, single events are now created.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch adds an example configuration in ext/sst/tests/ that allows
an SST/gem5 instance to simulate a 4-core AArch64 system with SST's
memHierarchy components providing all the caches and memories.
When using gem5 as a slave simulator, it will not advance the
clock on its own and depends on the master simulator calling
simulate(). This new option lets us use the Python scripts
to do all the configuration while stopping short of actually
simulating anything.
This patch enables users to specify --os-type on the command
line. This option is used to take specific actions for an OS type,
such as changing the kernel command line. This patch is part of the
Android KitKat enablement.
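For example (the value name here is an assumption based on the KitKat
enablement mentioned above):
  fs.py --os-type=android-kitkat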
This patch gives the user direct influence over the number of DRAM
ranks to make it easier to tune the memory density without affecting
the bandwidth (previously the only means of scaling the device count
was through the number of channels).
The patch also adds some basic sanity checks to ensure that the number
of ranks is a power of two (since we rely on bit slices in the address
decoding).
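A minimal sketch of such a check (the option name is an assumption; fatal() as
used elsewhere in the config scripts):
  from m5.util import fatal

  def is_power_of_two(n):
      return n > 0 and (n & (n - 1)) == 0

  if not is_power_of_two(options.mem_ranks):
      fatal("Number of ranks (%d) must be a power of two", options.mem_ranks)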
This patch adds the --memchecker option, to denote that a MemChecker
should be instantiated for the system. The exact usage of the MemChecker
depends on the system configuration.
For now CacheConfig.py makes use of the option, adding MemCheckerMonitor
instances between CPUs and D-Caches.
Note, however, that currently this only provides limited checking on a
running system; other parts of the system, such as I/O devices, are not
monitored and may cause warnings to be issued by the monitor.
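A rough sketch of the interposition, assuming the monitor sits between each
CPU's data port and its D-cache (object and parameter names are assumptions):
  if options.memchecker:
      system.memchecker = MemChecker()
      for cpu in system.cpu:
          cpu.dcache_mon = MemCheckerMonitor(warn_only=True,
                                             memchecker=system.memchecker)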
More documentation at http://gem5.org/Simpoints
Steps to profile, generate, and use SimPoints with gem5:
1. To profile the workload and generate the SimPoint BBV file, use the
following options:
--simpoint-profile --simpoint-interval <interval length>
Requires a single atomic CPU and fastmem (see the example invocation
after this list). <interval length> is in number of instructions.
2. Generate SimPoint analysis using SimPoint 3.2 from UCSD.
(SimPoint 3.2 not included with this flow.)
3. To take gem5 checkpoints based on SimPoint analysis, use the
following option:
--take-simpoint-checkpoint=<simpoint file path>,<weight file
path>,<interval length>,<warmup length>
<simpoint file> and <weight file> are generated by the SimPoint analysis
tool from UCSD; the SimPoint 3.2 format is expected. <interval length> and
<warmup length> are in number of instructions.
4. To resume from gem5 SimPoint checkpoints, use the following option:
--restore-simpoint-checkpoint -r <N> --checkpoint-dir <simpoint
checkpoint path>
<N> is (SimPoint index + 1). E.g., "-r 1" will resume from SimPoint
#0.
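A concrete profiling invocation for step 1 might look like this (the interval
value is illustrative):
  se.py -c <workload> --cpu-type=atomic --fastmem \
        --simpoint-profile --simpoint-interval 10000000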
Both options accept a template which will, through Python string formatting,
have "mem", "disk", and "script" values substituted in from the mdesc.
Additional values can be used on a case-by-case basis by passing them as
keyword arguments to the fillInCmdLine function. That makes it possible to
have specialized parameters for a particular ISA, for instance.
The first option lets you specify the template directly, and the other lets
you specify a file which has the template in it.
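For instance, a template using those substitution keys could look like this
(the contents are purely illustrative, not a default from the scripts):
  console=ttyAMA0 mem=%(mem)s root=%(disk)s script=%(script)s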
This changes the default ARM system to a Versatile Express-like system that
supports 2GB of memory and PCI devices, and updates the default
kernels/file-systems for AArch64 (64-bit) ARM systems to support up to 32GB
of memory and PCI devices. Some platforms that are no longer supported have
been pruned from the configuration files.
In addition, a set of 64-bit ARM regressions has been added to the
regression system.
Adds the parameter --num-work-ids to Options.py and reads the parameter
into the System params in Simulation.py. This parameter enables setting
the number of possible work items to a value other than 16. Support for this
parameter already exists in src/sim/System.py, so this changeset only
affects the Python config files.
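The change in Simulation.py roughly amounts to (attribute name as in
src/sim/System.py):
  if options.num_work_ids:
      system.num_work_ids = options.num_work_ids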
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Make the default memory type DDR3-1600 x64, and use the open-adaptive
page policy. This change aims to ensure that users get a realistic
memory system by default.
Note: AArch64 and AArch32 interworking is not supported. If you use an AArch64
kernel you are restricted to AArch64 user-mode binaries. This will be addressed
in a later patch.
Note: Virtualization is only supported in AArch32 mode. This will also be fixed
in a later patch.
Contributors:
Giacomo Gabrielli (TrustZone, LPAE, system-level AArch64, AArch64 NEON, validation)
Thomas Grocutt (AArch32 Virtualization, AArch64 FP, validation)
Mbou Eyole (AArch64 NEON, validation)
Ali Saidi (AArch64 Linux support, code integration, validation)
Edmund Grimley-Evans (AArch64 FP)
William Wang (AArch64 Linux support)
Rene De Jong (AArch64 Linux support, performance opt.)
Matt Horsnell (AArch64 MP, validation)
Matt Evans (device models, code integration, validation)
Chris Adeniyi-Jones (AArch64 syscall-emulation)
Prakash Ramrakhyani (validation)
Dam Sunwoo (validation)
Chander Sudanthi (validation)
Stephan Diestelhorst (validation)
Andreas Hansson (code integration, performance opt.)
Eric Van Hensbergen (performance opt.)
Gabe Black
This Python script generates an ARM DS-5 Streamline .apc project based
on a gem5 run. To successfully convert, the gem5 run needs to be done
with the context-switch-based stats dump option enabled. (The guest
kernel also needs to be patched to allow gem5 to interrogate its task
information.) See the help for more information.
A couple of recent changesets added/deleted/edited some variables
that are needed for running the example ruby scripts. This changeset
edits these scripts to bring them to a working state.
This patch adds support for specifying multi-channel memory
configurations on the command line, e.g. 'se/fs.py
--mem-type=ddr3_1600_x64 --mem-channels=4'. To enable this, it
enhances the functionality of MemConfig and moves the existing
makeMultiChannel class method from SimpleDRAM to the support scripts.
The se/fs.py example scripts are updated to make use of the new
feature.
This patch adds the notion of voltage domains, and groups clock
domains that operate under the same voltage (i.e. power supply) into
voltage domains. Each clock domain is required to be associated with a
voltage domain, and the latter requires the voltage to be explicitly set.
A voltage domain is an independently controllable voltage supply being
provided to a section of the design. Thus, if you wish to perform
dynamic voltage scaling on a CPU, its clock domain should be
associated with a separate voltage domain.
The current implementation of the voltage domain does not take into
consideration cases where there are derived voltage domains running at
a ratio of native voltage domains, as is the case where there is
on-chip buck/boost (charge pump) voltage regulation logic.
The regression and configuration scripts are updated with a generic
voltage domain for the system, and one for the CPUs.
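A minimal configuration sketch of the association described above (voltage and
clock values are illustrative):
  system.voltage_domain = VoltageDomain(voltage='1.0V')
  system.clk_domain = SrcClockDomain(clock='1GHz',
                                     voltage_domain=system.voltage_domain)

  system.cpu_voltage_domain = VoltageDomain(voltage='1.0V')
  system.cpu_clk_domain = SrcClockDomain(clock='2GHz',
                                         voltage_domain=system.cpu_voltage_domain)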
This patch contains three fixes to max tick options handling in Options.py and
Simulation.py:
1) Since the global simulator frequency isn't bound until m5.instantiate()
is called, the maxtick resolution needs to happen after this call, since
changes to the global frequency will cause m5.simulate() to misinterpret the
maxtick value. Shuffling this also requires tweaking the checkpoint directory
handling to signal the checkpoint restore tick back to run(). Fixing this
completely and correctly will require storing the simulation frequency into
checkpoints, which is beyond the scope of this patch.
2) The maxtick option in Options.py defaulted to MaxTick, so the old code
would always skip over the maxtime part of the conditionals at the beginning
of run(). Change the maxtick default to None, and set the maxtick local
variable in run() appropriately.
3) To clarify whether max ticks settings are relative or absolute, split the
maxtick option into separate options, for relative and absolute. Ensure that
these two options and the maxtime option are handled appropriately to set the
maxtick variable in Simulation.py.
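A sketch of the resulting ordering in Simulation.py (the relative/absolute
option names follow the split in (3) and are assumptions here):
  m5.instantiate(checkpoint_dir)

  if options.abs_max_tick:
      maxtick = options.abs_max_tick
  elif options.rel_max_tick:
      maxtick = options.rel_max_tick + checkpoint_restore_tick
  elif options.maxtime:
      maxtick = m5.ticks.fromSeconds(options.maxtime)
  else:
      maxtick = m5.MaxTick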
This patch adds a 'sys_clock' command-line option and uses it to assign
clocks to the system during instantiation.
As part of this change, the default clock in the System class is
removed and whenever a system is instantiated a system clock value
must be set. A default value is provided for the command-line option.
The configs and tests are updated accordingly.
This patch adds a 'cpu_clock' command-line option and uses the value
to assign clocks to components running at the CPU speed (L1 and L2
including the L2-bus). The configuration scripts are updated
accordingly.
The 'clock' option is left unchanged in this patch as it is still used
by a number of components. In follow-on patches the latter will be
disambiguated further.
The --restore-with-cpu option didn't use CpuConfig.cpu_names() to
determine which CPU names are valid; instead it used a static list of
known CPU names. This changeset makes the option parsing code use the
CPU list from the CpuConfig module instead.
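A sketch of the parsing change (optparse style, as used by the existing
scripts; the import path is an assumption):
  import CpuConfig

  parser.add_option("--restore-with-cpu", action="store", type="choice",
                    default="atomic", choices=CpuConfig.cpu_names(),
                    help="CPU type to restore the checkpoint with")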
This patch enables selection of the memory controller class through a
mem-type command-line option. Behind the scenes, this option is
treated much like the cpu-type, and a similar framework is used to
resolve the valid options, and translate the short-hand description to
a valid class.
The regression scripts are updated with a hardcoded memory class for
the moment. The best solution going forward is probably to get the
memory out of the makeSystem functions, but Ruby complicates things as
it does not connect the memory controller to the membus.
--HG--
rename : configs/common/CpuConfig.py => configs/common/MemConfig.py
This patch is based on http://reviews.m5sim.org/r/1474/ originally written by
Mitch Hayenga. Basic block vectors are generated (simpoint.bb.gz in simout
folder) based on start and end addresses of basic blocks.
Some comments on the original patch are addressed and hooks are added to create
and resume from checkpoints based on instruction counts dictated by external
SimPoint analysis tools.
SimPoint creation/resuming options will be implemented as a separate patch.
The CPUs supported by the configuration scripts used to be
hard-coded. This was not ideal for several reasons. For example, the
configuration scripts depend on all CPU models even though only a
subset might have been compiled.
This changeset adds a new module to the configuration scripts that
automatically discovers the available CPU models from the compiled
SimObjects. As a nice bonus, the use of introspection allows us to
automatically generate a list of available CPU models suitable for
printing. This list is augmented with the Python doc string from the
underlying class if available.
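The discovery idea, roughly (the filtering predicate is an assumption; the
real module may apply further checks):
  import inspect
  import m5.objects

  def is_cpu_class(cls):
      # Treat any subclass of BaseCPU among the compiled SimObjects
      # as a selectable CPU model.
      return inspect.isclass(cls) and issubclass(cls, m5.objects.BaseCPU)

  _cpu_classes = dict((name, cls)
                      for name, cls in inspect.getmembers(m5.objects)
                      if is_cpu_class(cls))

  def cpu_names():
      return _cpu_classes.keys()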
This patch allows for specifying multiple programs via the command line. It
also adds an option for specifying whether to use SMT. However, SMT does not
currently work with the O3 CPU.
This patch adds a --repeat-switch option that will enable repeated core
switching at a user-defined period (set with the --switch-freq option).
Currently, a switch can only occur between like CPU types. InOrder CPU
switching is not supported.
*note*
This patch simply allows a config that will perform repeat switching; it
does not fix drain/switchout functionality. If you run with repeat switching
you will hit assertion failures and/or your workload will hang or die.
Added the options to Options.py for FS mode, with backward compatibility. It
is useful to be able to specify the disk image and the memory size from the
command line, since many disk images are created to support different
benchmark suites as well as per-user needs, and a change in program also
changes the memory requirements. These options provide an interface for
specifying both the disk image and the memory size on the command line, and
give more flexibility.
I am not too happy with the way options are currently added in the files
se.py and fs.py. This patch moves all the options to the file Options.py,
functions from which are called when required.
Enables the CheckerCPU to be selected at runtime with the --checker option
from the configs/example/fs.py and configs/example/se.py configuration
files. Also merges with the SE/FS changes.
Currently there is an assumption that restoration from a checkpoint will
happen by first restoring to an atomic CPU and then switching to a timing
CPU. This patch adds support for directly restoring to a timing CPU. It
adds a new option '--restore-with-cpu' which is used to specify the type
of CPU to which the checkpoint should be restored. It defaults to
'atomic', which was the case before.
This patch adds a new option for the CPU type. This option is of type
'choice', which is similar to a C++ enum, except that it takes string values
as possible choices. The following options are being removed: detailed,
timing, inorder.
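A sketch of what a 'choice'-type option looks like with optparse (the list of
values is illustrative):
  parser.add_option("--cpu-type", type="choice", default="atomic",
                    choices=["atomic", "timing", "detailed", "inorder"],
                    help="type of cpu to run with")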
--HG--
extra : rebase_source : 58885e2e8a88b6af8e6ff884a5922059dbb1a6cb