This patch addresses a number of smaller issues identified by the code
inspection utility cppcheck. There are a number of identified leaks in
arm/linux/system.cc (although the function only gets called once, so
it is not a major problem), a few deletes in dev/x86/i8042.cc that
were not array deletes, and sprintf calls where the character array was
one element smaller than needed. In the IIC tags there was a function
allocating an array of longs that is in fact never used.
This patch updates the stats to reflect the change in how cache
latencies are expressed. In addition, the latencies are now rounded to
multiples of the clock period, thus also affecting other stats.
This patch changes the cache-related latencies from an absolute time
expressed in Ticks, to a number of cycles that can be scaled with the
clock period of the caches. Ultimately this patch serves to enable
future work that involves dynamic frequency scaling. As an immediate
benefit it also makes it more convenient to specify cache performance
without implicitly assuming a specific CPU core operating frequency.
The stat blocked_cycles, which actually counted in ticks, is now
updated to count in cycles.
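As a hedged illustration of what this means for the config scripts
(object and parameter names are indicative, not taken from this
patch), the same cycle count now corresponds to different absolute
latencies depending on the clock the cache inherits:

  # illustrative config excerpt
  system.cpu.clock = '2GHz'
  # 2 cycles correspond to 1ns with a 2GHz cache clock, but would be
  # 2ns if the cache were instead clocked at 1GHz
  system.cpu.dcache.hit_latency = 2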
As the timing is now rounded to the clock edges of the cache, a number
of regressions change. Most of them show only very minor differences,
whereas some regressions with a short run-time are perturbed quite
significantly. A follow-on patch updates all the statistics for the
regressions.
This patch changes the memtest clock from 1THz (the default) to 2GHz,
similar to the CPUs in the other regressions. This is useful as the
caches will adopt the same clock as the CPU. The bus clock rate is
scaled accordingly, and the L1-L2 bus is kept at the CPU clock while
the memory bus is at half that frequency.
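A hedged sketch of the resulting clocking (object names are indicative
of memtest.py rather than copied from it):

  system.cpu.clock = '2GHz'       # memtest clock, previously the 1THz default
  system.toL2Bus.clock = '2GHz'   # L1-L2 bus kept at the CPU clock
  system.membus.clock = '1GHz'    # memory bus at half the CPU frequency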
A separate patch updates the affected stats.
This patch changes the CoherentBus between the L1s and L2 to use the
CPU clock and also four times the width compared to the default
bus. The parameters are not intended to fit every single scenario,
but rather serve as a better starting point than what we previously
had.
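A hedged sketch of the bus parameters being set inside the CPU's
cache-hierarchy convenience method (the numeric width is illustrative,
standing in for four times the default):

  # inside the CPU's hierarchy helper, so self.clock is the CPU clock
  self.toL2Bus = CoherentBus(clock = self.clock,
                             width = 32)   # four times the default width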
Note that the scripts that do not use addTwoLevelCacheHierarchy are
not affected by this change.
A separate patch will update the stats.
This patch unifies the full-system regression config scripts and uses
the BaseCPU convenience method addTwoLevelCacheHierarchy to connect up
the L1s and L2, and create the bus in between.
The patch is a step on the way to use the clock period to express the
cache latencies, as the CPU is now the parent of the L1, L2 and L1-L2
bus, and these modules thus use the CPU clock.
The patch does not change the value of any stats, but plenty of names,
and a follow-up patch contains the update to the stats, changing
system.l2c to system.cpu.l2cache.
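A hedged sketch of how the convenience method is used in the unified
scripts (cache class names and sizes are illustrative):

  for cpu in system.cpu:
      cpu.addTwoLevelCacheHierarchy(L1Cache(size = '32kB'),
                                    L1Cache(size = '32kB'),
                                    L2Cache(size = '2MB'))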
This patch changes the default 1 Tick clock period to a proxy that
resolves the parent's clock. As a result of this, the caches and
L1-to-L2 bus, for example, will automatically use the clock period of
the CPU unless explicitly overridden.
To ensure backwards compatibility, the System class overrides the
proxy and specifies a 1 Tick clock. We could change this to something
more reasonable in a follow-on patch, perhaps 1 GHz or something
similar.
With this patch applied, all clocked objects should have a reasonable
clock period set, and could start specifying delays in Cycles instead
of absolute time.
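A hedged sketch of the two declarations (class names are stand-ins,
and the parameter layout is simplified):

  from m5.params import Param
  from m5.proxy import Parent

  class MyClockedObject(SimObject):
      # the default is now a proxy resolving to the parent's clock
      clock = Param.Clock(Parent.clock, "Clock speed")

  class MySystem(MyClockedObject):
      # the System overrides the proxy to preserve the old 1 Tick default
      clock = Param.Clock('1t', "Clock speed")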
This patch modifies how proxies are traversed and unproxied to allow
chained proxies. The issue that is solved manifested itself when a
proxy during its evaluation ended up hitting another proxy, and
the second one got evaluated using the object that was originally used
for the first proxy.
For a more tangible example, see the following patch, which makes the
default clock inherited from the parent. In this patch, the CPU
clock is a proxy Parent.clock, which is overridden in the system to be
an actual value. This all works fine, but the AlphaLinuxSystem has a
boot_cpu_frequency parameter that is Self.cpu[0].clock.frequency. When
the latter is evaluated, it all happens relative to the current object
of the proxy, i.e. the system. Thus the cpu.clock is evaluated as
Parent.clock, but using the system rather than the cpu as the object
to enquire.
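A hedged sketch of the configuration that triggers the problem
(following the description above rather than the actual source):

  system.clock = '1GHz'                # concrete value at the system level
  system.cpu[0].clock = Parent.clock   # proxy resolving to the system clock
  # boot_cpu_frequency is Self.cpu[0].clock.frequency; evaluating it hits
  # the cpu clock proxy, which must then be resolved relative to the cpu,
  # not relative to the system that the first proxy was evaluated against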
This patch transitions the bus to use the AddrRange operations instead
of directly accessing the start and end. The change facilitates the
move to a more elaborate AddrRange class that also supports address
striping in the bus by specifying interleaving bits in the ranges.
Two new functions are added to the AddrRange to determine if two
ranges intersect, and if one is a subset of another. The bus
propagation of address ranges is also tweaked such that an update is
only propagated if the bus received information from all the
downstream slave modules. This avoids the iteration and need for the
cycle-breaking scheme that was previously used.
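A sketch of the two new predicates, shown here in Python for brevity
(the actual implementation lives in the C++ AddrRange class; ranges
are taken as inclusive [start, end] pairs):

  def intersects(a, b):
      # two ranges overlap unless one ends before the other begins
      return not (a.end < b.start or b.end < a.start)

  def is_subset(a, b):
      # a is a subset of b if it lies entirely within b
      return a.start >= b.start and a.end <= b.end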
This patch moves the block size computation from findBlockSize to
initialisation time, once all the neighbouring ports are connected.
There is no need to dynamically update the block size, and the caching
of the value effectively avoided that anyhow. This is very similar to
what was already in place, just with a slightly leaner implementation.
This patch bumps the Doxyfile to match more recent versions of
Doxygen. The sections that are deprecated have been removed, and the
new ones added. The project name has also been updated.
This patch removes the parts of slicc that were required for multi-chip
protocols. Going ahead, it seems multi-chip protocols would be implemented
by playing with the network itself.
This patch moves the code for functional accesses to the ruby system. This is
because the subsequent patches add support for making functional accesses
to the messages in the interconnect. Making those accesses from the ruby port
would be cumbersome.
The memtest.py script used to connect the system port directly to the
SimpleMemory, but the latter is now single ported. Since the system
port is not used for anything in this particular example, a quick fix
is to attach it to the functional bus instead.
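A hedged sketch of the quick fix (port and object names are indicative
of memtest.py rather than quoted from it):

  # before: the system port went straight to the now single-ported memory
  # system.system_port = system.funcmem.port
  # after: park the otherwise-unused system port on the functional bus
  system.system_port = system.funcbus.slave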
In the current caches the hit latency is paid twice on a miss. This patch lets
a configurable response latency be set on the cache for the backward path.
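A hedged sketch, assuming the hit_latency/response_latency parameter
pair of the classic cache:

  system.cpu.dcache.hit_latency = 2       # forward path, paid on the lookup
  system.cpu.dcache.response_latency = 2  # backward path, no longer re-using
                                          # the hit latency on a miss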
This patch adds a function, periodicStatDump(long long period), which will dump
and reset the statistics every period. This function is designed to be called
from the python configuration scripts. This allows the periodic stats dumping to
be configured more easily at run time.
The period is currently specified as a long long as there are issues passing
Tick into the C++ from the python as they have conflicting definitions. If the
period is less than curTick, the first occurrence happens at curTick. If the
period is set to 0, then the event is descheduled and the stats are not
periodically dumped.
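A hedged usage sketch (the exact Python module exposing the call may
differ):

  import m5.stats
  import m5.ticks
  # dump and reset the statistics every simulated millisecond
  m5.stats.periodicStatDump(m5.ticks.fromSeconds(0.001))
  # a period of 0 deschedules the event again
  m5.stats.periodicStatDump(0)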
Due to issues when resuming from a checkpoint, the StatDump event must be moved
forward such that it occurs AFTER the current tick. As the function is called
from the python, the event is scheduled before the system resumes from the
checkpoint. Therefore, the event is moved using the updateEvents() function.
This is called from simulate.py once the system has resumed from the checkpoint.
NOTE: this is a fairly temporary patch which re-adds the capability to
extract temporal information from the communication monitors. It should
not be used at the same time as anything that relies on dumping the
statistics based on in-simulation events, e.g. a context switch.
Newer Linux kernels require DTBs (device tree blobs) to specify platform
configurations. The input DTB filename can be specified through gem5 parameters
in LinuxArmSystem.
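A hedged config excerpt (assuming the dtb_filename parameter name; the
path is purely illustrative):

  system = LinuxArmSystem()
  system.dtb_filename = 'binaries/vexpress.dtb'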
This script (util/diff_config.pl) takes two config.ini files and compares them.
It highlights value changes, as well as displaying which parts are unique to
a specific config.ini file. This is useful when trying to replicate an earlier
experiment and when trying to make small changes to an existing configuration.
Instead of statically defining miscRegName to contain NUM_MISCREGS
elements, let the compiler determine the length of the array. This
allows us to use a static_assert to test that all registers are listed
in the name vector.
C++11 has support for static_asserts to provide compile-time assertion
checking. This is very useful when testing, for example, structure
sizes to make sure that the compiler got the right alignment or vector
sizes.
Remove SimObject::setMemoryMode from the main SimObject class since it
is only valid for the System class. In addition to removing the method
from the C++ sources, this patch also removes getMemoryMode and
changeTiming from SimObject.py and updates the simulation code to call
the (get|set)MemoryMode method on the System object instead.
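A hedged sketch of the resulting call sites in the simulation code
(argument form illustrative):

  # before: any SimObject exposed setMemoryMode; after: only the System does
  if system.getMemoryMode() != mode:   # 'mode' is the requested memory mode
      system.setMemoryMode(mode)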
This patch adds an explicit dependency between param_%s.i and the
Python source file defining the object. Previously, the build system
didn't rebuild SWIG interfaces correctly when an object's Python
sources were updated.
This patch merely adds a clock other than the default 1 Tick for the
CPUs of both the test system and drive system for the twosys-tsunami
regression.
The CPU frequency of the drive system is chosen to be twice that of
the test system to ensure it is not the bottleneck (although in this
case it mostly serves as a demonstration of a two-system setup).
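A hedged sketch (object names indicative of the twosys-tsunami script,
values illustrative):

  test_sys.cpu.clock = '2GHz'
  drive_sys.cpu.clock = '4GHz'   # twice the test system's frequency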
Fix the drain functionality of the RubyPort to only call drain on child ports
during a system-wide drain process, instead of calling each time that a
ruby_hit_callback is executed.
This fixes the issue of the RubyPort ports being reawakened while the
simulation is draining, possibly with work they did not previously have to complete. If
they have new work, they may call process on the drain event that they had
not registered work for, causing an assertion failure when completing the
drain event.
Also, in RubyPort, set the drainEvent to NULL when there are no events
to be drained. If not set to NULL, the drain loop can end up using
stale drainEvents.
This patch introduces a high-level model of a DRAM controller, with a
basic read/write buffer structure, a selectable and customisable
arbiter, a few address mapping options, and the basic DRAM timing
constraints. The parameters make it possible to turn this model into
any desired DDRx/LPDDRx/WideIOx memory controller.
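A hedged configuration sketch of the kind of knobs described above
(class and parameter names are illustrative, not quoted from the
model):

  system.physmem = SimpleDRAM(ranks_per_channel = 2,
                              banks_per_rank = 8,
                              tRCD = '13.75ns',
                              tCL = '13.75ns',
                              tRP = '13.75ns')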
The intention is not to be cycle accurate or capture every aspect of a
DDR DRAM interface, but rather to enable exploration of the high-level
knobs with a good simulation speed. Thus, in contrast to e.g. DRAMSim,
this module emphasizes simulation speed with a good-enough accuracy.
This module is merely a starting point, and there are plenty of additions
and improvements to come. A notable addition is the support for
address-striping in the bus to enable a multi-channel DRAM
controller. Also note that there are still a few "todo's" in the code
base that will be addressed as we go along.
A follow-up patch will add basic performance regressions that use the
traffic generator to exercise a few well-defined corner cases.
This patch adds a basic regression for the traffic generator. The
regression also serves as an example of the file formats used. More
complex regressions that make use of a DRAM controller model will
follow shortly.
This patch adds a traffic generator to the code base. The generator is
intended to be used as a black-box model to create appropriate use-cases
and benchmarks for the memory system, and in particular the
interconnect and the memory controller.
The traffic generator is a master module, where the actual behaviour
is captured in a state-transition graph where each state generates
some sort of traffic. By constructing a graph it is possible to create
very elaborate scenarios from basic generators. Currently the set of
generators include idling, linear address sweeps, random address
sequences and playback of traces (recording will be done by the
Communication Monitor in a follow-up patch). At the moment the graph
and the states are described in an ad-hoc line-based format, and in
the future this should be aligned with our use of e.g. the Google
protobufs. Similarly for the traces, the format is currently a
simplistic ad-hoc line-based format that merely serves as a starting
point.
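A hedged sketch of how the generator might be instantiated from a
config script (class and parameter names indicative, the trace/config
path purely illustrative):

  system.tgen = TrafficGen(config_file = 'tests/tgen/tgen-simple.cfg')
  system.tgen.port = system.membus.slave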
In addition to being used as a black-box model for system components,
the traffic generator is also useful for creating test cases and
regressions for the interconnect and memory system. In future patches
we will use the traffic generator to create DRAM test cases for the
controller model.
The patch following this one adds a basic regression which also
contains an example configuration script and trace file for playback.