This patch adds deadlock detection to the garnet network by
monitoring network interfaces. If a network interface continuously
fails to allocate virtual channels for a message, a possible
deadlock is detected.
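An illustrative sketch of the heuristic (not the actual C++ network
interface code; the counter name and threshold value are assumptions):

    # Count consecutive failed VC allocations for a stalled message and
    # flag a possible deadlock once a threshold is exceeded.
    DEADLOCK_THRESHOLD = 50000  # illustrative value

    class StalledMessage:
        def __init__(self):
            self.failed_vc_allocs = 0

        def on_vc_alloc_failure(self):
            self.failed_vc_allocs += 1
            if self.failed_vc_allocs >= DEADLOCK_THRESHOLD:
                print("warn: possible network deadlock detected")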
This patch adds an IOCache to the example bigLITTLE
configuration. An IOCache is required for correct DMA
transfers when we have caches in the system.
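A minimal sketch of the wiring (following the common CacheConfig idiom
of this era; port attribute names are assumptions and may differ
between gem5 versions):

    # Place an IOCache between the IO bus and the memory bus so DMA
    # traffic stays coherent with the CPU caches.
    system.iocache = IOCache(addr_ranges=system.mem_ranges)
    system.iocache.cpu_side = system.iobus.master
    system.iocache.mem_side = system.membus.slave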
Change-Id: Ifeddc1b360aacbb16b1393f361dd98873c834012
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
This change adds the option to use the memcheck with random memory
hierarchies, at the moment limited to a maximum depth of 3, allowing
testing with uncommon topologies.
Change-Id: Id2c2fe82a8175d9a67eb4cd7f3d2e2720a809b60
Reviewed-by: Andreas Hansson <andreas.hansson@arm.com>
If the cache access mode is parallel, i.e. the "sequential_access"
parameter is set to "False", tags and data are accessed in parallel.
Therefore, the hit_latency is the maximum of tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. the "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency and data_latency.
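A sketch of the resulting rule:

    def hit_latency(tag_latency, data_latency, sequential_access):
        if sequential_access:
            # Tags first, then data: the latencies add up.
            return tag_latency + data_latency
        # Tags and data probed in parallel: the slower path dominates.
        return max(tag_latency, data_latency)

    assert hit_latency(2, 3, sequential_access=False) == 3
    assert hit_latency(2, 3, sequential_access=True) == 5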
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
This patch adds the ability for an application to request dist-gem5 to begin/
end synchronization using an m5 op. When toggling on sync, all nodes agree
on the next sync point based on the maximum of all nodes' ticks. CPUs are
suspended until the sync point to avoid sending network messages until sync has
been enabled. Toggling off sync acts like a global execution barrier, where
all CPUs are disabled until every node reaches the toggle off point. This
avoids tricky situations such as one node hitting a toggle off followed by a
toggle on before the other nodes hit the first toggle off.
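An illustrative sketch of the agreement rule (not the actual dist-gem5
code): on toggle-on, every node proposes its current tick and all
adopt the maximum as the next sync point.

    def next_sync_point(node_ticks):
        return max(node_ticks)

    assert next_sync_point([1000, 1200, 900]) == 1200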
This patch adds an ordered response buffer to the GM pipeline
to ensure in-order data delivery. The buffer is implemented as
an STL ordered map, which sorts the requests in program order by
their sequence IDs. When requests return to the GM pipeline
they are marked as done. Only the oldest request may be serviced
from the ordered buffer, and only if it is marked as done.
The FIFO response buffers are kept and used in OoO delivery mode.
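An illustrative Python sketch of the delivery rule (the real buffer is
a C++ std::map in the GM pipeline):

    # Entries stay sorted by sequence ID; only the oldest may drain,
    # and only once it has been marked done.
    class OrderedResponseBuffer:
        def __init__(self):
            self.entries = {}  # seq_id -> done flag

        def insert(self, seq_id):
            self.entries[seq_id] = False

        def mark_done(self, seq_id):
            self.entries[seq_id] = True

        def pop_if_ready(self):
            if not self.entries:
                return None
            oldest = min(self.entries)   # lowest sequence ID
            if not self.entries[oldest]:
                return None              # head-of-line not done yet
            del self.entries[oldest]
            return oldest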
This patch breaks out the most basic configuration options into a set
of base options, to allow them to be used also by scripts that do not
involve any ISA, and thus no actual CPUs or devices.
The patch also fixes a few modules so that they can be imported in a
NULL build, and avoid dragging in FSConfig every time Options is
imported.
Modify the opClass assigned to AArch64 FP instructions from SimdFloat*
to Float*. Also create the FloatMemRead and FloatMemWrite opClasses,
which distinguish memory accesses to the FP register bank from those
to the INT bank.
Change the latency of (Simd)FloatMultAcc to 5, based on the Cortex-A72,
where the "latency" of FMADD is 3 if the next instruction is an FMADD
with only the augend-to-destination dependency, and 7 cycles otherwise.
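A sketch of how such a latency appears in gem5's functional-unit
configuration (FUDesc/OpDesc idiom; the class name and count here are
illustrative assumptions):

    from m5.objects import FUDesc, OpDesc

    class FP_MultAcc(FUDesc):
        opList = [OpDesc(opClass='FloatMultAcc', opLat=5),
                  OpDesc(opClass='SimdFloatMultAcc', opLat=5)]
        count = 2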
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Continue along the same line as the recent patch that made the
Ruby-related config scripts Python packages and make also the
configs/common directory a package.
All affected config scripts are updated (hopefully).
Note that this change makes it apparent that the current organisation
and naming of the config directory and its subdirectories is rather
chaotic. We mix scripts that are directly invoked with scripts that
merely contain convenience functions. While it is not addressed in
this patch we should follow up with a re-organisation of the
config structure, and renaming of some of the packages.
This patch moves the addition of network options into the Ruby module
to avoid the regressions all having to add it explicitly. Doing this
exposes an issue in our current config system though, namely the fact
that addToPath is relative to the Python script being executed. Since
both example and regression scripts use the Ruby module we would end
up with two different (relative) paths being added. Instead we take a
first step at turning the config modules into Python packages, simply
by adding a __init__.py in the configs/ruby, configs/topologies and
configs/network subdirectories.
As a result, we can now add the top-level configs directory to the
Python search path, and then use the package names in the various
modules. The example scripts are also updated, and the messy
path-deducing variations in the scripts are unified.
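The resulting usage pattern looks roughly like this (module names
follow the packages created here):

    # Put the top-level configs/ directory on the search path once,
    # then import by package name.
    from m5.util import addToPath
    addToPath('../')       # top-level configs/ directory

    from ruby import Ruby  # resolved via the new configs/ruby/__init__.py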
This patch adds port direction names to the links during topology
creation, which can be used for better printed names for the links
or for users to code up their own adaptive routing algorithms.
It also adds support for every router to have an independent latency
value to support heterogeneous topologies with the subsequent
garnet2.0 patch.
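A rough sketch of both additions in the topology-script idiom (keyword
names are assumptions based on this and the garnet2.0 patch):

    # Named port directions on an internal link, plus per-router latency.
    routers = [Router(router_id=i, latency=1) for i in range(2)]
    link = IntLink(link_id=0,
                   src_node=routers[0], dst_node=routers[1],
                   src_outport="East", dst_inport="West")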
This patch makes the internal links within the network topology
unidirectional, thus allowing any deadlock-free routing algorithms to
be specified from the topology itself using weights.
This patch also renames Mesh.py and MeshDirCorners.py to
Mesh_XY.py and MeshDirCorners_XY.py (Mesh with XY routing).
It also adds a Mesh_westfirst.py and CrossbarGarnet.py topologies.
networktest is essentially a collection of synthetic traffic patterns
for the network. Because the protocol and the tester shared the same
name, there were multiple Python configuration files with the same
name, which was confusing. This patch renames networktest to
garnet_synthetic_traffic, and also adds more synthetic traffic
patterns.
Over the past 6 years, we have realized that the protocol is
essentially used to run the garnet network in a standalone manner,
feeding standard synthetic traffic patterns through it.
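A typical standalone invocation might look as follows (script and
option names are illustrative and may vary between versions):

    ./build/NULL/gem5.opt configs/example/garnet_synth_traffic.py \
        --num-cpus=16 --num-dirs=16 --network=garnet2.0 \
        --topology=Mesh_XY --mesh-rows=4 --sim-cycles=10000 \
        --synthetic=uniform_random --injectionrate=0.02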
Adjust the traffic generator time-out so that the script works out of
the box
Change-Id: I6b3b6b11f98b094ae3acdbe09488c26e4aeb0ab4
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
This patch refactors the configuration file to use a more
object-oriented design.
Change-Id: I44ac2d063c2b5901f385544fb6ce3f259459cb05
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Gabor Dozsa <gabor.dozsa@arm.com>
Add support for using KVM to accelerate APU simulations. The intended use
case is to fast-forward through runtime initialization until the first
kernel launch.
This patch changes the default behaviour of the SystemXBar, adding a
snoop filter. With the recent updates to the snoop filter allocation
behaviour this change no longer causes problems for the regressions
without caches.
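A minimal sketch of the new default in the Python SimObject idiom (the
latency value is illustrative):

    from m5.objects import CoherentXBar, SnoopFilter

    class SystemXBar(CoherentXBar):
        # A snoop filter is now instantiated by default.
        snoop_filter = SnoopFilter(lookup_latency=0)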
Change-Id: Ibe0cd437b71b2ede9002384126553679acc69cc1
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Tony Gutierrez <anthony.gutierrez@amd.com>
Ruby on ARM is currently very experimental. Fail with a fatal error
that explains this to make sure users are aware of the limitations (it
doesn't actually work yet!).
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Add initial support for creating an ARM system with a Ruby-based
memory system. This support is currently experimental and limited to
the new VExpress_GEM5_V1 platform.
Change-Id: I36baeb68b0d891e34ea46aafe17b5e55217b4bfa
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
There are cases where we want to put boot ROMs on the PIO bus. Ruby
currently doesn't support functional accesses to such memories since
functional accesses are always assumed to go to physical memory. Add
the required support for routing functional accesses to the PIO bus.
Change-Id: Ia5b0fcbe87b9642bfd6ff98a55f71909d1a804e3
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Jason Lowe-Power <jason@lowepower.com>
Reviewed-by: Brad Beckmann <brad.beckmann@amd.com>
Reviewed-by: Michael LeBeane <michael.lebeane@amd.com>
An ARM big.LITTLE system consists of two CPU clusters: the big
CPUs are typically complex out-of-order cores and the little
CPUs are simpler in-order ones. The fs_bigLITTLE.py script
can run a full system simulation with varying numbers of big
and little cores and a configurable cache hierarchy. The commit
also includes two example device tree files for booting Linux
on the bigLITTLE system.
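An example invocation might look as follows (option names are
illustrative):

    ./build/ARM/gem5.opt configs/example/arm/fs_bigLITTLE.py \
        --big-cpus 2 --little-cpus 4 \
        --kernel vmlinux --disk linaro-minimal.img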
Change-Id: I6396fb3b2d8f27049ccae49d8666d643b66c088b
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
At the moment the SPARC FS machine configuration comes with a hardcoded
value for using the Solaris 10 disk image from the OpenSPARC tarball. The
--disk-image option is completely ignored for SPARC. This simple patch
modifies the behavior so that the --disk-image option is both taken
into account and required. This makes it possible to easily change SPARC FS
images without having to modify the configuration files.
This patch provides an example test script to configure different HMC
architectures and run traffic through the traffic generator.
Committed by Jason Lowe-Power <jason@lowepower.com>
This new HMC configuration uses existing gem5 components, mainly
SerialLink, NoncoherentXbar, and DRAMCtrl, to define 3 different
architectures for the HMC.
Highlights
1- It explores 3 different HMC architectures.
2- It creates 4 HMC crossbars and attaches the 16 vault controllers
to them. This connects the vaults to the serial links.
3- Compared to the previous version, the HMCController with
round-robin functionality has been removed and all the serial links
are directly accessible from the user ports.
4- The latency incorporated by the HMCController (in the previous
version) has been added to the SerialLink.
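A rough sketch of the wiring of one such architecture (object and
parameter names are assumptions):

    # A serial link feeds a crossbar that fans out to DRAMCtrl-based
    # vault controllers.
    xbar = NoncoherentXBar(width=32, frontend_latency=1,
                           forward_latency=1, response_latency=1)
    link = SerialLink(delay='4ns', ranges=system.mem_ranges)
    link.master = xbar.slave          # serial link drives the crossbar
    for vault in vault_ctrls:         # e.g. 16 DRAMCtrl vaults in total
        vault.port = xbar.master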
Committed by Jason Lowe-Power <jason@lowepower.com>
This patch ensures a walker cache is instantiated if specified.
Change-Id: I2c6b4bf3454d56bb19558c73b406e1875acbd986
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Mitch Hayenga <mitch.hayenga@arm.com>
Eliminate the VSZ constant that defined the Wavefront size (in number
of work items); replace it with a parameter in the GPU.py
configuration script. Change all data structures dependent on the
Wavefront size to be dynamically sized. Legal values of the Wavefront
size are 16, 32, and 64 for now, and are checked at initialization
time.
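A minimal sketch of the parameter in SimObject terms (class and
parameter names are assumptions based on this description):

    from m5.params import Param
    from m5.SimObject import SimObject

    class Wavefront(SimObject):
        # Was the compile-time VSZ constant; now a checked parameter.
        wf_size = Param.Int(64, "Wavefront size in work items; "
                                "16, 32, and 64 are accepted")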
Disable the default snoop filter in the SystemXBar so that the
typical membus does not have a snoop filter by default. Instead,
add the snoop filter only when there are caches added to the system
(with the caches / l2cache options).
The underlying problem is that the snoop filter grows without
bounds (for now) if there are no caches to tell it that lines have
been evicted. This causes slow regression runs for all the atomic
regressions. This patch fixes this behaviour.
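A sketch of the resulting configuration behaviour (option names follow
the common config scripts; the capacity is illustrative):

    # A bare membus gets no snoop filter; one is added only when caches
    # exist to evict lines from it.
    system.membus = SystemXBar()
    if options.caches or options.l2cache:
        system.membus.snoop_filter = SnoopFilter(max_capacity='8MB')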
According to the Intel Multi Processor Specification rev 1.4 (-006) (*),
section 4.3.2 Bus Entries, Bus type strings are >>6-character ASCII
(blank-filled) strings<<.
This patch properly pads the entries with the missing spaces at the end.
(*) http://www.intel.com/design/pentium/datashts/24201606.pdf
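A minimal sketch of the fix:

    # Blank-fill bus type strings to exactly six ASCII characters,
    # per the MP spec.
    def bus_type_string(name):
        return name.ljust(6)[:6]

    assert bus_type_string("ISA") == "ISA   "
    assert bus_type_string("PCI") == "PCI   "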
Committed by Jason Lowe-Power <power.jg@gmail.com>
Distributed gem5 is the result of the convergence effort between
multi-gem5 and pd-gem5. It relies on the base multi-gem5 infrastructure
for packet forwarding, synchronisation and checkpointing but combines
those with the elaborated network switch model from pd-gem5.
This patch adds a config script that broadly replicates the behaviour
of lat_mem_rd. The test is based on traffic generators, and as such we
simply randomise addresses in increasingly large ranges, and play them
back using the trace functionality of the traffic generator.
The test script is accompanied by a post-processing and visualisation
script. At the moment no configurability is added to tweak the memory
hierarchy, but a follow on patch could easily extend the
functionality.
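An illustrative sketch of the address-generation idea (not the actual
script):

    # Draw random block-aligned addresses from increasingly large
    # ranges so that each range exercises one level of the hierarchy.
    import random

    def random_addresses(range_bytes, block_size=64, count=1000):
        blocks = range_bytes // block_size
        return [random.randrange(blocks) * block_size
                for _ in range(count)]

    for size in (16 * 1024, 256 * 1024, 4 * 1024 * 1024):
        trace = random_addresses(size)  # played back via the trace player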
This patch introduces the ability of making the coherent crossbar the
point of coherency. If so, the crossbar does not forward packets where
a cache with ownership has already committed to responding, and also
does not forward any coherency-related packets that are not intended
for a downstream memory controller. Thus, invalidations and upgrades
are turned around in the crossbar, and the memory controller only sees
normal reads and writes.
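A minimal sketch in the config idiom (the parameter name is an
assumption based on this description):

    # Designate the system crossbar as the point of coherency, so
    # invalidations and upgrades turn around at the crossbar instead of
    # reaching the memory controller.
    system.membus = SystemXBar(point_of_coherency=True)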
In addition this patch moves the express snoop promotion of a packet
to the crossbar, thus allowing the downstream cache to check the
express snoop flag (as it should) for bypassing any blocking, rather
than relying on whether a cache is responding or not.
This patch changes how the cache determines if snoops should be
forwarded from the memory side to the CPU side. Instead of having a
parameter, the cache now looks at the port connected on the CPU side,
and if it is a snooping port, then snoops are forwarded. Less error
prone, and fewer parameters to worry about.
The patch also tidies up the CPU classes to ensure that their I-side
port is not snooping by removing overrides to the snoop request
handler, such that snoop requests will panic via the default
MasterPort implementation.
Add a platform with support for both aarch32 and aarch64. This
platform implements a subset of the devices in a real Versatile
Express and extends it with some gem5-specific functionality. It is in
many ways similar to the old VExpress_EMM64 platform, but supports the
following new features:
* Automatic PCI interrupt assignment
* PCI interrupts allocated in a contiguous range.
* Automatic boot loader selection (32-bit / 64-bit)
* Cleaner memory map where gem5-specific devices live in CS5 which
isn't used by current Versatile Express platforms.
* No fake devices. Devices that were previously faked will be
removed from the device tree instead.
* Support for 510 GiB contiguous memory
This patch allows the ruby random tester to use ruby ports that may only
support instr or data requests. This patch is similar to a previous changeset
(8932:1b2c17565ac8) that was unfortunately broken by subsequent changesets.
This patch implements the support in a more straightforward way.
Since retries are now tested when running the ruby random tester, this patch
splits up the retry and drain check behavior so that RubyPort children, such
as the GPUCoalescer, can perform those operations correctly without having to
duplicate code. Finally, the patch also includes better DPRINTFs for
debugging the tester.
This patch adds changes to the configuration scripts to support elastic
tracing and replay.
The patch adds a command line option to enable elastic tracing in SE
mode and FS mode. When enabled, the Elastic Trace CPU probe is
attached to the O3CPU and a few O3 CPU parameters are tuned. The
Elastic Trace probe writes out both instruction fetch and data
dependency traces. The patch also enables configuring the TraceCPU to
replay traces using the SE and FS scripts.
The replay run is designed to resume from a checkpoint using the
atomic CPU to restore state, keeping it consistent with the FS run
flow. It then switches to the TraceCPU to replay the input traces.
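An SE-mode tracing run might then look as follows (the elastic-trace
options are the ones this patch adds; the rest of the invocation is
illustrative):

    ./build/ARM/gem5.opt configs/example/se.py --cpu-type=detailed \
        --caches --elastic-trace-en \
        --inst-trace-file=fetchtrace.proto.gz \
        --data-trace-file=deptrace.proto.gz -c hello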
gem5's current PCI host functionality is very ad hoc. The current
implementations require PCI devices to be hooked up to the
configuration space via a separate configuration port. Devices query
the platform to get their config-space address range. Un-mapped parts
of the config space are intercepted using the XBar's default port
mechanism and a magic catch-all device (PciConfigAll).
This changeset redesigns the PCI host functionality to improve code
reuse and make config-space and interrupt mapping more
transparent. Existing platform code has been updated to use the new
PCI host and configured to stay backwards compatible (i.e., no
guest-side visible changes). The current implementation does not
expose any new functionality, but it can easily be extended with
features such as automatic interrupt mapping.
PCI devices now register themselves with a PCI host controller. The
host controller interface is defined in the abstract base class
PciHost. Registration is done by PciHost::registerDevice() which takes
the device, its bus position (bus/dev/func tuple), and its interrupt
pin (INTA-INTC) as parameters. The registration interface returns a
PciHost::DeviceInterface that the PCI device can use to query memory
mappings and signal interrupts.
The host device manages the entire PCI configuration space. Accesses
are decoded into the target device's bus position and then forwarded
to the correct device.
Basic PCI host functionality is implemented in the GenericPciHost base
class. Most platforms can use this class as a basic PCI controller. It
provides the following functionality:
* Configurable configuration space decoding. The number of bits
dedicated to a device is a parameter, making it possible to support
CAM, ECAM, and legacy mappings.
* Basic interrupt mapping using the interruptLine value from a
device's configuration space. This behavior is the same as in the
old implementation. More advanced controllers can override the
interrupt mapping method to dynamically assign host interrupts to
PCI devices.
* Simple (base + addr) remapping from the PCI bus's address space to
physical addresses for PIO, memory, and DMA.
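A minimal config sketch of a platform using such a generic host
(parameter names and values are assumptions based on the description
above):

    pci_host = GenericPciHost(
        conf_base=0x30000000, conf_size='256MB',
        conf_device_bits=12,          # ECAM-style: 12 bits per function
        pci_pio_base=0x2f000000,      # base + addr remapping for PIO
        pci_mem_base=0x40000000)      # and for memory space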
Add support for automatically discovering available platforms. The
Python side uses functionality similar to what we use when
auto-detecting available CPU models. The machine IDs have been updated
to match the platform configurations. If there isn't a matching
machine ID, the configuration scripts default to -1, which Linux uses
for device-tree-only platforms.