This patch enables the use of the generator behaviours outside the
TrafficGen module. This is useful e.g. to allow packet replay modes
for other devices in the system without having to replace them with a
TrafficGen in the configuration files.
This change also allows more specialised behaviours to be composed as
dedicated modules, e.g. a BaseBandModem can use a number of generators
and expose application-specific parameters built around a particular
set of generators.
This patch enables selection of the memory controller class through a
mem-type command-line option. Behind the scenes, this option is
treated much like the cpu-type, and a similar framework is used to
resolve the valid options, and translate the short-hand description to
a valid class.
The regression scripts are updated with a hardcoded memory class for
the moment. The best solution going forward is probably to get the
memory out of the makeSystem functions, but Ruby complicates things as
it does not connect the memory controller to the membus.
--HG--
rename : configs/common/CpuConfig.py => configs/common/MemConfig.py
This patch adds a WideIO 200 MHz configuration that can be used as a
baseline to compare with DDRx and LPDDRx. Note that it is a single
channel and that it should be replicated 4 times. It is based on
publicly available information and attempts to capture an envisioned
8 Gbit single-die part (i.e. without TSVs).
This patch provides useful printouts throughout the memory system. This
includes pretty-printed cache tags and function call messages
(call-stack like).
This patch changes the SimpleTimingPort and RubyPort to panic on
inhibited requests as this should never happen in either of the
cases. The SimpleTimingPort is only used for the I/O devices' PIO port
and the DMA devices' config port and should thus never see an inhibited
request. Similarly, the SimpleTimingPort is also used for the
MessagePort in x86, and there should also not be any cases where the
port sees an inhibited request.
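As a rough sketch of what the check amounts to (the method and flag names
here are assumptions for illustration, not copied from the gem5 sources):

    bool
    SimpleTimingPort::recvTimingReq(PacketPtr pkt)
    {
        // An inhibited request means another component has already taken
        // responsibility for the response; a PIO, config or message port
        // should never observe this.
        if (pkt->memInhibitAsserted())
            panic("SimpleTimingPort should never see an inhibited request");

        // ... normal handling ...
        return true;
    }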
This changeset adds support for m5 pseudo-ops when running in
kvm-mode. Unfortunately, we can't trap the normal gem5 co-processor
entry in KVM (it doesn't seem to be possible to trap accesses to
non-existing co-processors). We therefore use BZJ instructions to
cause a trap from virtualized mode into gem5. The BZJ instruction
becomes a normal branch to the gem5 fallback code when running in
simulated mode, which means that this patch does not need to change
the ARM ISA-specific code.
Note: This requires a patched host kernel.
All architectures execute m5 pseudo instructions by setting up
arguments according to the ABI and executing a magic instruction that
contains an operation number. Handling of such instructions is
currently spread across the different ISA implementations. This
changeset introduces the PseudoInst::pseudoInst function which handles
most of this in an architecture-independent way. This function is
mainly intended to be used from KVM, but can also be used from the
simulated CPUs.
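A rough sketch of the idea (the signature and the operation encodings shown
here are illustrative assumptions, not the final interface):

    namespace PseudoInst {

    // Decode the magic-instruction operation number and dispatch to the
    // existing architecture-independent pseudo-op implementations. The
    // caller (KVM or a simulated CPU) extracts func/subfunc and the ABI
    // arguments from the thread context before calling this.
    uint64_t
    pseudoInst(ThreadContext *tc, uint8_t func, uint8_t subfunc)
    {
        switch (func) {
          case 0x21: // exit (encoding shown for illustration only)
            m5exit(tc, 0);
            return 0;
          case 0x40: // reset stats (illustrative)
            resetstats(tc, 0, 0);
            return 0;
          default:
            warn("Unhandled pseudo-instruction 0x%x\n", func);
            return 0;
        }
    }

    } // namespace PseudoInst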
Architecture specific limitations:
* LPAE is currently not supported by gem5. We therefore panic if LPAE
is enabled when returning to gem5.
* The co-processor based interface to the architected timer is
unsupported. We can't support this due to limitations in the KVM
API on ARM.
* M5 ops are currently not supported. This requires either a kernel
hack or a memory mapped device that handles the guest<->m5
interface.
Add the method checkRaw to ArmISA::Interrupts. This method can be used
to query the raw state (ignoring CPSR masks) of an interrupt. It is
primarily intended for hardware virtualized CPUs.
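A minimal sketch of the intended semantics (field and type names are
assumptions):

    bool
    ArmISA::Interrupts::checkRaw(InterruptTypes interrupt) const
    {
        // Unlike check(), ignore the CPSR I/F mask bits and simply report
        // whether the interrupt line is asserted. Hardware virtualized
        // CPUs use this to forward pending IRQs/FIQs to KVM.
        assert(interrupt < NumInterruptTypes);
        return interrupts[interrupt];
    }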
Add support for using the CPU cycle counter instead of a normal POSIX
timer to generate timed exits to gem5. This should, in theory, provide
better resolution when requesting timer signals.
The perf-based timer requires a fairly recent kernel since it requires
a working PERF_EVENT_IOC_PERIOD ioctl. This ioctl has existed in the
kernel for a long time, but it used to be completely broken due to an
inverted check when the kernel copied the new period value from user
space. Additionally, the ioctl does not change the sample period
correctly on all kernel versions which implement it. It is currently
only known to work reliably on kernel version 3.7 and above on ARM.
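The host-side mechanism in rough outline (a sketch of the Linux perf API
usage, not the gem5 implementation; error handling omitted):

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>

    // Open a counter for CPU cycles and request an overflow signal after
    // 'period' cycles; the signal is then used to force an exit from KVM
    // back into gem5.
    int
    setupCycleTimer(uint64_t period)
    {
        struct perf_event_attr attr = {};
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.sample_period = period;

        int fd = syscall(__NR_perf_event_open, &attr, 0 /* this thread */,
                         -1 /* any CPU */, -1 /* no group */, 0);

        // Updating the period later relies on PERF_EVENT_IOC_PERIOD
        // working correctly, which is why a recent kernel is required.
        uint64_t new_period = period;
        ioctl(fd, PERF_EVENT_IOC_PERIOD, &new_period);
        return fd;
    }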
This changeset adds support for initializing a KVM VM in the
BaseSystem test class and adds the following methods in run.py:
require_file -- Test if a file exists and abort/skip if not.
require_kvm -- Test if KVM support has been compiled into gem5 (i.e.,
BaseKvmCPU exists) and the KVM device exists on the
host.
KVM-based CPUs need a KVM VM object in the system to manage
system-global KVM stuff (VM creation, interrupt delivery, memory
management, etc.). This changeset adds a VM to the system if KVM has
been enabled at compile time (the BaseKvmCPU object exists) and a
KVM-based CPU has been selected at runtime.
Reduce the number of KVM->TC synchronizations by overloading the
getContext() method and only request an update when the TC is
requested as opposed to every time KVM returns to gem5.
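In outline, the overload looks something like this (a sketch;
syncThreadContext() is an assumed helper that pulls register state from
KVM):

    ThreadContext *
    BaseKvmCPU::getContext(int tn)
    {
        // Only synchronize the gem5 thread context with the KVM register
        // state when somebody actually asks for the context, instead of
        // after every return from KVM.
        assert(tn == 0);
        syncThreadContext();
        return BaseCPU::getContext(tn);
    }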
This changeset introduces the architecture independent parts required
to support KVM-accelerated CPUs. It introduces two new simulation
objects:
KvmVM -- The KVM VM is a component shared between all CPUs in a shared
memory domain. It is typically instantiated as a child of the
system object in the simulation hierarchy. It provides access
to KVM VM specific interfaces.
BaseKvmCPU -- Abstract base class for all KVM-based CPUs. Architecture
dependent CPU implementations inherit from this class
and implement the following methods:
* updateKvmState() -- Update the
architecture-dependent KVM state from the gem5
thread context associated with the CPU.
* updateThreadContext() -- Update the thread context
from the architecture-dependent KVM state.
* dump() -- Dump the KVM state (optional).
In order to deliver interrupts to the guest, CPU
implementations typically override the tick() method and
check for, and deliver, interrupts prior to entering
KVM.
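A skeletal architecture-specific implementation might look as follows
(illustrative only; class and helper names beyond the methods listed above
are assumptions):

    class ArmKvmCPU : public BaseKvmCPU
    {
      public:
        void updateKvmState();       // gem5 thread context -> KVM state
        void updateThreadContext();  // KVM state -> gem5 thread context

        void
        tick()
        {
            // Deliver any pending interrupts to the virtual CPU before
            // handing control to KVM.
            if (interruptPending())
                deliverInterrupts();
            BaseKvmCPU::tick();
        }

      private:
        bool interruptPending() const;   // assumed helpers
        void deliverInterrupts();
    };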
Hardware-virtualized CPUs currently have the following limitations:
* SE mode is not supported.
* PC events are not supported.
* Timing statistics are currently very limited. The current approach
simply scales the host cycles with a user-configurable factor.
* The simulated system must not contain any caches.
* Since cycle counts are approximate, there is no way to request an
exact number of cycles (or instructions) to be executed by the CPU.
* Hardware virtualized CPUs and gem5 CPUs must not execute at the
same time in the same simulator instance.
* Only single-CPU systems can be simulated.
* Remote GDB connections to the guest system are not supported.
Additionally, m5 ops require an architecture-specific interface and
might not be supported.
Add the options 'panic_on_panic' and 'panic_on_oops' to the
LinuxArmSystem SimObject. When these options are enabled, the simulator
panics when the guest kernel panics or oopses. Enable panic on panic
and panic on oops in ARM-based test cases.
Previously, nextCycle() could return the *current* cycle if the current tick was
already aligned with the clock edge. This behavior is not only confusing (not
quite what the function name implies), but also caused problems in the
drainResume() function. When exiting/re-entering the sim loop (e.g., to take
checkpoints), the CPUs will drain and resume. Due to the previous behavior of
nextCycle(), the CPU tick events were being rescheduled in the same ticks that
were already processed before draining. This caused divergence from runs that
did not exit/re-entered the sim loop. (Initially a cycle difference, but a
significant impact later on.)
This patch separates out the two behaviors (nextCycle() and clockEdge()),
uses nextCycle() in drainResume, and uses clockEdge() everywhere else.
Nothing (other than name) should change except for the drainResume timing.
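The distinction, roughly (a self-contained sketch of the semantics, ignoring
clock phase; not the literal implementation):

    #include <cstdint>
    typedef uint64_t Tick;

    // clockEdge(): the nearest clock edge at or after 'now'; may equal
    // 'now' if we are already aligned to an edge.
    Tick
    clockEdge(Tick now, Tick period)
    {
        return ((now + period - 1) / period) * period;
    }

    // nextCycle(): the first clock edge strictly after 'now', so events
    // rescheduled in drainResume() can never land in a tick that has
    // already been processed.
    Tick
    nextCycle(Tick now, Tick period)
    {
        Tick edge = clockEdge(now, period);
        return edge > now ? edge : edge + period;
    }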
This patch is based on http://reviews.m5sim.org/r/1474/ originally written by
Mitch Hayenga. Basic block vectors are generated (simpoint.bb.gz in simout
folder) based on start and end addresses of basic blocks.
Some comments to the original patch are addressed and hooks are added to create
and resume from checkpoints based on instruction counts dictated by external
SimPoint analysis tools.
SimPoint creation/resuming options will be implemented as a separate patch.
Newer core tiles / daughterboards for the Versatile Express platform have an
HDLCD controller that supports HD-quality output. This patch adds an
implementation of the controller.
This changeset adds support for forwarding arguments to the PC
event constructors to the following methods:
addKernelFuncEvent
addFuncEvent
Additionally, this changeset adds the following helper methods to the
System base class:
addFuncEventOrPanic - Hook a PCEvent to a symbol, panic on failure.
addKernelFuncEventOrPanic - Hook a PCEvent to a kernel symbol, panic
on failure.
System implementations have been updated to use the new functionality
where appropriate.
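In rough form (a sketch; the exact template parameters and panic message are
assumptions):

    template <class T>
    T *
    System::addFuncEventOrPanic(const SymbolTable *symtab, const char *lbl)
    {
        // Same as addFuncEvent(), but treat a missing symbol as fatal so
        // callers do not have to check the return value themselves.
        T *event = addFuncEvent<T>(symtab, lbl);
        if (!event)
            panic("Failed to find symbol '%s'", lbl);
        return event;
    }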
This change fixes the switcheroo test that broke earlier this month. The code
that was checking for the pipeline being blocked wasn't checking for a pending
translation, only for an icache access.
Without loading weak symbols into gem5, some function names cannot be
correctly matched to a given PC, because the binding attributes of symbols
in an ELF file are not only STB_GLOBAL or STB_LOCAL, but also STB_WEAK. This
patch adds a function for loading weak symbols.
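Conceptually, the new pass walks the symbol table and keeps the weak entries
that the existing global/local passes skip (an illustrative sketch;
SymbolTable::insert() and the surrounding plumbing are assumptions):

    #include <elf.h>
    #include <cstddef>

    void
    loadWeakSymbols(const Elf64_Sym *syms, size_t count, const char *strtab,
                    SymbolTable *symtab)
    {
        for (size_t i = 0; i < count; ++i) {
            // STB_WEAK symbols were previously ignored, so PCs inside
            // weak functions could not be mapped back to a name.
            if (ELF64_ST_BIND(syms[i].st_info) == STB_WEAK)
                symtab->insert(syms[i].st_value, strtab + syms[i].st_name);
        }
    }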
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch adds a missing flag to the ldr_ret_uop microop instruction.
The flag is added when the instruction is used, not directly in the
constructor of the instruction.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>"
This patch fixes two instances of incorrect use of the seekp/seekg
stream member functions. These two functions return a stream reference
(*this), and should not be compared to an integer value.
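For reference, the broken and corrected patterns (illustrative, not the exact
offending lines):

    #include <cassert>
    #include <fstream>

    void
    seekTo(std::fstream &f, std::streampos offset)
    {
        // Wrong (old code): seekg() returns a stream reference (*this),
        // so comparing the result with an integer never tests success:
        //     if (f.seekg(offset) < 0) { ... }

        // Right: perform the seek, then check the stream state.
        f.seekg(offset);
        assert(f.good());
    }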
The new changeset that can reorder Ruby profilers will cause the ruby.stats
files to contain reordered statistics (the point of the patch). Update the references
to ensure that these changes are reflected in regressions.
In Simulation.py, calls to m5.simulate(num_ticks) will run the simulated system
for num_ticks after the current tick. Fix calls to m5.simulate in
scriptCheckpoints() and benchCheckpoints() to appropriately handle the maxticks
variable.
When using the o3 or inorder CPUs with many Ruby protocols, the caches may
need to forward invalidations to the CPUs. The RubyPort was instantiating a
packet to be sent to the CPUs to signal the eviction, but the packets were
not being freed by the CPUs. Consistent with the classic memory model, stack
allocate the packet and heap allocate the request, so that when
ruby_eviction_callback() completes the packet destructor is called and
deletes the request. (Note: stack allocating the request instead would cause
a double deletion, since it would also be deleted in the packet destructor.)
This gives the fewest memory allocations without memory errors.
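In sketch form (the command name and constructor arguments are assumptions):

    void
    RubyPort::ruby_eviction_callback(const Address &address)
    {
        // Heap-allocate the request and stack-allocate the packet: when
        // the packet goes out of scope at the end of this function, its
        // destructor deletes the request (a request packet that needs no
        // response), so nothing leaks and nothing is deleted twice.
        Request *req = new Request(address.getAddress(), 0, 0, 0);
        Packet pkt(req, MemCmd::InvalidateReq);
        // ... forward pkt to the CPU-side port(s) ...
    }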
When warming up caches in Ruby, the CacheRecorder sends fetch requests into
Ruby Sequencers with packet types that require responses. Since responses are
never generated for these CacheRecorder requests, the requests are not deleted
in the packet destructor called from the Ruby hit callback. Free the request.
This allows you to have, for instance, an L2 cache that is not named "L2Cache"
but is still a GenericMachineType_L2Cache. This is particularly
helpful if the protocol has multiple L2 controllers.
When Ruby stats are printed for events and transitions, they include stats
for all of the controllers of the same type, but they are not necessarily
printed in order of the controller ID "version", because of the way the
profilers were added to the profiler vector. This patch fixes the push order
problem so that the stats are printed in ascending order 0->(# controllers),
so statistics parsers may correctly assume the controller to which the stats
belong.
When connecting message buffers between Ruby controllers, it is
easy to mistakenly connect multiple controllers to the same message
buffer. This patch prints a more descriptive fatal message than the
previous assert statement in order to facilitate easier debugging.
The cache trace variables are array-allocated uint8_t* buffers in the RubySystem and
the Ruby CacheRecorder, but the code used delete to free the memory, resulting
in Valgrind memory errors. Change these deletes to delete [] to get rid of the
errors.
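The fix, in essence (illustrative):

    uint8_t *trace = new uint8_t[traceSize];   // array allocation
    // ...
    delete [] trace;   // was 'delete trace;', which is undefined
                       // behaviour for arrays and is what Valgrind flagged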
Currently the commit stage keeps a local copy of the interrupt object.
Since the interrupt is usually handled several cycles after the commit
stage becomes aware of it, it is possible that the local copy of the
interrupt object may not be the interrupt that is actually handled.
It is possible that another interrupt occurred in the
interval between interrupt detection and interrupt handling.
This patch creates a fresh copy of the interrupt just before the interrupt
is handled; the earlier local copy is ignored.
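Schematically (names assumed; not the o3 commit stage code verbatim):

    void
    DefaultCommit::handleInterrupt()
    {
        // Re-read the pending interrupt at the point where it is actually
        // handled, instead of using the copy taken several cycles earlier
        // when the interrupt was first detected.
        Fault interrupt = cpu->getInterrupts();
        if (interrupt != NoFault)
            cpu->processInterrupts(interrupt);
    }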
It is possible that the operating system wants to shut down the
lapic timer by writing the timer's initial count as 0. This patch
adds a check so that the timer event is only scheduled if the
count is not 0.
The patch also converts a few of the keyboard-related panics
to warnings, since we are in any case not interested in simulating the
keyboard.
As of now, we mark the top 1MB of memory space as unusable. Part of
it is actually usable and is required to be marked so by some of the
newer versions of the Linux kernel. This patch marks the top 639KB as usable.
This value was chosen by looking at QEMU's output for bios memory map.
Fixes a latency calculation bug for accesses during a cache line fill.
Under a cache miss, before the line is filled, accesses to the cache are
associated with an MSHR and marked as targets. Once the line fill completes,
MSHR target packets pay an additional latency of
"responseLatency + busSerializationLatency". However, the "whenReady"
field of the cache line is only set to an additional delay of
"busSerializationLatency". This lacks the responseLatency component of
the fill. It is possible for accesses that occur on the cycle of
(or briefly after) the line fill to respond without properly paying the
responseLatency. This also creates the situation where two accesses to the
same address may be serviced in an order opposite of how they were received
by the cache. For stores to the same address, this means that although the
cache performs the stores in the order they were received, acknowledgements
may be sent in a different order.
Adding the responseLatency component to the whenReady field preserves the
penalty that should be paid and prevents these ordering issues.
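Schematically, using the names from the description above (a sketch, not the
literal code):

    // Old: the block became ready after only the serialization delay.
    //     blk->whenReady = curTick() + busSerializationLatency;
    // New: charge the response latency as well, so the block's ready time
    // agrees with the latency paid by the queued MSHR targets.
    blk->whenReady = curTick() + responseLatency + busSerializationLatency;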
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
There's not much to do about it other than disable the offending
warning, so it's not worth terminating the build over.
Also suppress uninitialized variable warnings on gcc (happens
at least with gcc 4.4 and swig 1.3.40).
This patch adds a simple Python script that reads the protobuf-encoded
packet traces (not gzipped), and prints them to an ASCII trace file.
The script can also be used as a template for other packet output
formats.