Previously, .sm files were allowed to use the same name for a type and a
variable. This is unnecessarily confusing and has some bad side effects, like
not being able to declare further variables of that type later in the same scope.
This causes the compiler to complain and die on things like Address Address.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Change all occurrences of Address as a variable name to instead use Addr.
Address is an allowed name in slicc even when Address is also being used as a
type, leading to declarations of "Address Address". While this works, it
prevents adding another field of type Address because the compiler then thinks
Address is a variable name, not a type.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
The current implementation of the x87 never updates the x87 tag
word. This is currently not a big issue since the simulated x87 never
checks for stack overflows, however this becomes an issue when
switching between a virtualized CPU and a simulated CPU. This
changeset adds support, enabled by default, for updating the tag
register in every floating point microop that updates the stack top
using the spm mechanism.
The new tag word is generated by the helper function
X86ISA::genX87Tags(). This function is currently limited to flagging a
stack position as valid or invalid and does not try to distinguish
between the valid, zero, and special states.
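As a rough illustration (the helper name and interface below are made
up for this sketch, not gem5's actual genX87Tags() signature), such a
tag word can be rebuilt from a per-slot validity mask:

    // Hedged sketch, not the actual X86ISA::genX87Tags() implementation:
    // build a 16-bit x87 tag word from an 8-bit bitmap of valid stack
    // slots. Each physical register gets a 2-bit tag: 0b00 = valid,
    // 0b11 = empty. The valid/zero/special distinction is deliberately
    // not made, matching the limitation described above.
    #include <cstdint>

    uint16_t buildX87TagWord(uint8_t validMask)
    {
        uint16_t tags = 0;
        for (int reg = 0; reg < 8; ++reg) {
            const uint16_t tag = (validMask & (1 << reg)) ? 0x0 : 0x3;
            tags |= tag << (2 * reg);
        }
        return tags;
    }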
This changeset actually fixes two issues:
* The lfpimm instruction didn't work correctly when applied to a
floating point constant (it did work for integers containing the
bit string representation of a constant) since it used
reinterpret_cast to convert a double to a uint64_t. This caused a
compilation error, at least in gcc 4.6.3.
* The instructions loading floating point constants in the x87
processor didn't work correctly since they just stored a truncated
integer instead of a double in the floating point register. This
changeset fixes the old microcode by using the lfpimm instruction
instead of the limm instructions.
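For reference, a minimal standalone sketch of the conversion in
question (illustrative only, not the code used in the fix): a direct
reinterpret_cast from a double value to uint64_t is ill-formed, while
copying the bytes yields the bit-string representation:

    #include <cstdint>
    #include <cstring>

    // uint64_t bits = reinterpret_cast<uint64_t>(value);  // ill-formed cast
    uint64_t doubleBits(double value)
    {
        uint64_t bits;
        static_assert(sizeof(bits) == sizeof(value), "size mismatch");
        std::memcpy(&bits, &value, sizeof(bits));  // well-defined byte copy
        return bits;
    }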
The current implementation of fprem simply does an fmod and doesn't
simulate any of the iterative behavior of a real fprem. This isn't
normally a problem; however, it can lead to problems when switching
between CPU models. If switching from a real CPU in the middle of an
fprem loop to a simulated CPU, the output of the fprem loop becomes
corrupted. This changeset changes the fprem implementation to work
like the one on real hardware.
Reuse the address finalization code in the TLB instead of replicating
it when handling MMIO. This patch also adds support for injecting
memory mapped IPR requests into the memory system.
The rflags register is spread across several different registers. Most
of the flags are stored in MISCREG_RFLAGS, but some are stored in
microcode registers. When accessing RFLAGS, we need to reconstruct it
from these registers. This changeset adds two functions,
X86ISA::getRFlags() and X86ISA::setRFlags(), that take care of this
magic.
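The idea can be sketched as follows; the names and the bit layout are
illustrative only and do not reflect gem5's actual register
definitions:

    #include <cstdint>

    struct FlagState {
        uint64_t rflagsBase = 0;  // stands in for MISCREG_RFLAGS
        uint64_t ccBits = 0;      // stands in for the microcode flag registers
    };

    // Illustrative mask of the condition-code bits kept outside
    // MISCREG_RFLAGS (CF, PF, AF, ZF, SF, OF).
    constexpr uint64_t ccMask = 0x8d5;

    uint64_t getRFlagsSketch(const FlagState &s)
    {
        // Reconstruct the architectural value from both sources.
        return (s.rflagsBase & ~ccMask) | (s.ccBits & ccMask);
    }

    void setRFlagsSketch(FlagState &s, uint64_t value)
    {
        // Scatter the value back to the registers it is spread across.
        s.ccBits = value & ccMask;
        s.rflagsBase = value & ~ccMask;
    }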
This changeset fixes two problems in the FABS and FCHS
implementation. First, the ISA parser expects the assignment in
flag_code to be a pure assignment and not an and-assignment, which
leads to the isa_parser omitting the misc reg update. Second, the FCHS
and FABS macro-ops don't set the SetStatus flag, which means that the
default micro-op version, which doesn't update FSW, is executed.
This patch moves the instantiation of system.membus in se.py to the area of
code where the classic memory system has been dealt with. Ruby does not require
this bus and hence it should not be instantiated.
This changeset adds the following stats to KVM:
* numVMHalfEntries: Number of entries into KVM to finalize pending
IO operations without executing guest instructions. These typically
happen as a result of a drain where the guest must finalize some
operations before the guest state is consistent.
* numExitSignal: Number of VM exits that have been triggered by a
signal. These usually happen as a result of the timer that limits
the time spent in KVM.
We used to use the KVM CPU's clock to specify the host frequency. This
was not ideal for several reasons, one of them being that the clock
parameter of a CPU determines the frequency of some of the components
connected to the CPU. This changeset adds a separate hostFreq
parameter that should be used to specify the host frequency until we
add code to autodetect it. The hostFactor should still be used to
specify the conversion factor between the host performance and that of
the simulated system.
We currently execute instructions in the guest and then handle any IO
request right after we break out of the virtualized environment. This
has the effect of executing IO requests in the exact same tick as the
first instruction in the sequence that was just run. There seem to be
cases where this simplification upsets some timing-sensitive devices.
This changeset splits execute and IO (and other services) across
multiple ticks. This is implemented by adding a separate
RunningService state to the CPU state machine. When a VM requires
service, it enters into this state and pending IO is then serviced in
the future instead of immediately. The delay between getting the
request and servicing it depends on the number of cycles executed in
the guest, which allows other components to catch up with the CPU.
Update the system's totalNumInst counter when exiting from KVM and
maintain an internal absolute instruction count instead of relying on
the one from perf.
The TSC value stored in MISCREG_TSC is actually just an offset from
the current CPU cycle to the actual TSC value. Writes with
side-effects to the TSC subtract the current cycle count before
storing the new value, while reads add the current cycle count. When
switching CPUs, the current value is copied without side-effects. This
works as long as the source and the destination CPUs have the same
clock frequencies. The TSC will jump, sometimes backwards, if they
have different clock frequencies. Most OSes assume the TSC to be
monotonic and break when this happens.
This changeset makes sure that the TSC is copied with side-effects to
ensure that the offset is updated to match the new CPU.
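A minimal sketch of the offset scheme (illustrative, not the gem5
implementation):

    #include <cstdint>

    struct TscState {
        uint64_t offset = 0;  // what MISCREG_TSC effectively holds
    };

    // Read with side-effects: visible TSC = stored offset + current cycle.
    uint64_t readTsc(const TscState &s, uint64_t curCycle)
    {
        return s.offset + curCycle;
    }

    // Write with side-effects: store the value as an offset from the
    // current cycle count.
    void writeTsc(TscState &s, uint64_t value, uint64_t curCycle)
    {
        s.offset = value - curCycle;
    }

Copying with side-effects amounts to writeTsc(dst, readTsc(src,
srcCycle), dstCycle), which keeps the visible TSC continuous even when
the two CPUs have counted a different number of cycles; copying the
raw offset does not.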
HG changeset 34e3295b0e39 introduced a check in the main simulation
loop that discards exit events that happen at the same tick as another
exit event. This was supposed to fix a problem where a simulation
script got confused by multiple exit events. This obviously breaks the
simulator since it can hide important simulation events, such as a
simulation failure, that happen at the same time as a non-fatal
simulation event.
Currently, the only way to get a CPU to stop after a fixed number of
instructions/loads is to set a property on the CPU that causes a
SimLoopExitEvent to be scheduled when the CPU is constructed. This is
clearly not ideal in cases where the simulation script wants the CPU
to stop at multiple instruction counts (e.g., SimPoint generation).
This changeset adds the methods scheduleInstStop() and
scheduleLoadStop() to the BaseCPU. These methods are exported to
Python and are designed to be used from the simulation script. By
using these methods instead of the old properties, a simulation script
can schedule a stop at any point during simulation or schedule
multiple stops. The number of instructions specified when scheduling a
stop is relative to the current point of execution.
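The relative semantics can be sketched as follows (the names below are
illustrative, not the BaseCPU interface):

    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <vector>

    struct StopScheduler {
        uint64_t committedInsts = 0;
        // Pending stop points, earliest first.
        std::priority_queue<uint64_t, std::vector<uint64_t>,
                            std::greater<uint64_t>> stops;

        // A stop for "insts" instructions is relative to the current
        // point of execution, so multiple stops can be armed at any time.
        void scheduleInstStopSketch(uint64_t insts)
        {
            stops.push(committedInsts + insts);
        }

        // Checked as instructions commit; returns true when a stop is
        // due (in gem5 this would trigger an exit event).
        bool stopDue()
        {
            if (!stops.empty() && committedInsts >= stops.top()) {
                stops.pop();
                return true;
            }
            return false;
        }
    };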
Ruby's controller statistics have been mostly moved to stats.txt now.
The stats.txt files for solaris/t1000-simple-atomic and arm/20.parser
are also being updated.
This patch removes the per-processor cycle count, the histogram for filter
stats, the histogram for multicasts, the histogram for prefetch wait, and
some function prototypes that do not have definitions.
The Profiler class does not need an event for dumping statistics
periodically. This is because there is a method for dumping statistics
for all the sim objects periodically. Since Ruby is a sim object, its
statistics are also included.
This moves event and transition count statistics for cache controllers to
gem5's statistics. It does the same for the statistics associated with the
memory controller in ruby.
All the cache/directory/dma controllers individually collect the event and
transition counts. A callback function, collateStats(), has been added that
is invoked on controller version 0 of each controller class. This
function adds all the individual controller statistics to vector
variables. All the code for registering the statistical variables and
collating them is generated by SLICC. The patch removes the files
*_Profiler.{cc,hh} and *_ProfileDumper.{cc,hh} which were earlier used for
collecting and dumping statistics respectively.
This patch adds a new flag to specify if the data values for a given vector
should be printed on one line in the stats.txt file. The default behavior
will be to print the data on multiple lines. The patch changes the print
functions to enforce this behavior.
There is a problem with the way cpu options are listed right now.
Since Ruby uses the options variable, this variable has to be created in the
config file that runs the fs test for the x86 and mesi cmp directory
combination. While creating the variable, an error occurs due to the way the
list of cpu types is now created. Hence, we need to compile all the cpu models.
Some architectures (currently only x86) require some fixing-up of
physical addresses after a normal address translation. This is usually
to remap devices such as the APIC, but could be used for other memory
mapped devices as well. When running the CPU using hardware
virtualization, we still need to do these address fix-ups before
inserting the request into the memory system. This patch moves that
code into the TLB, which allows it to be used by such CPUs without
doing full address translations.
The custom Python loader didn't comply with PEP302 for two reasons:
* Previously, we would overwrite old modules on name
conflicts. PEP302 explicitly states that: "If there is an existing
module object named 'fullname' in sys.modules, the loader must use
that existing module".
* The "__package__" attribute wasn't set. PEP302: "The __package__
attribute must be set."
This changeset addresses both of these issues.
The --restore-with-cpu option didn't use CpuConfig.cpu_names() to
determine which CPU names are valid; instead, it used a static list of
known CPU names. This changeset makes the option parsing code use the
CPU list from the CpuConfig module instead.
Some architectures have special registers in the guest that can be
used to do cycle accounting. This is generally preferable since it
prevents the guest from seeing a non-monotonic clock. This changeset
adds a virtual method, getHostCycles(), that the architecture-specific
code can override to implement this functionality. The default
implementation uses the hwCycles counter.
timer_create can apparently return -1 and set errno to EAGAIN if the
kernel suffered a temporary failure when allocating a timer. This
happens from time to time, so we need to handle it.
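A hedged sketch of such handling (not the exact gem5 code):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    // Retry timer_create(2) a few times if the kernel reports a
    // temporary allocation failure (EAGAIN); give up on any other error.
    timer_t createTimerWithRetry(clockid_t clock, struct sigevent *sev)
    {
        timer_t timer;
        for (int attempt = 0; attempt < 5; ++attempt) {
            if (timer_create(clock, sev, &timer) == 0)
                return timer;
            if (errno != EAGAIN) {
                perror("timer_create");
                abort();
            }
            // Temporary failure: try again.
        }
        perror("timer_create (EAGAIN persisted)");
        abort();
    }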
It is now required to initialize the thread context by calling
startup() on it. Failing to do so currently causes the decoder in
x86-based CPUs to get very confused when restoring from checkpoints.
Some Linux versions disable updates (regB.set = 1) to prevent the chip
from updating its internal state while the OS is updating it. Support
for this was already there; this patch merely disables the check in
writeReg that prevented it from being enabled. The patch also includes
support for disabling the divider, which is used to control when clock
updates should start after setting the internal RTC state.
These changes are required to boot most vanilla Linux distributions
that update the RTC settings at boot.
Rewrite reg A & B handling to use the bitunion stuff instead of bit
masking. Add better error messages when the kernel tries to enable
unsupported stuff.
This patch updates the stats to reflect the addition of the bus stats,
and changes to the bus layers. In addition it updates the stats to
match the addition of the static pipeline latency of the memory
controller and the addition of a stat tracking the bytes per
activate.
This patch changes the class names of the various DRAM configurations
to better reflect what memory they are based on. The speed and
interface width are now part of the name, and also of the alias that is
used to select them on the command line.
Some minor changes are done to the actual parameters, to better
reflect the named configurations. As a result of these changes the
regressions change slightly and the stats will be bumped in a separate
patch.
This patch adds a histogram to track how many bytes are accessed in an
open row before it is closed. This metric is useful in characterising
a workload and the efficiency of the DRAM scheduler. For example, a
DDR3-1600 device requires 44 cycles (tRC) before it can activate
another row in the same bank. For an x32 interface (8 bytes per cycle)
that means 8 x 44 = 352 bytes must be transferred to hide the
preparation time.
This patch adds a frontend and backend static latency to the DRAM
controller by delaying the responses. Two parameters expressing the
frontend and backend contributions in absolute time are added to the
controller, and the appropriate latency is added to the responses when
adding them to the (infinite) queued port for sending.
For writes and reads that hit in the write buffer, only the frontend
latency is added. For reads that are serviced by the DRAM, the static
latency is the sum of the pipeline latencies of the entire frontend,
backend and PHY. The default values are chosen based on having roughly
10 pipeline stages in total at 500 MHz.
In the future, it would be sensible to make the controller use its
clock and convert these latencies (and a few of the DRAM timings) to
cycles.
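A sketch of the resulting rule for when a response is handed to the
queued port (names illustrative, not the controller's actual code):

    #include <cstdint>

    using Tick = uint64_t;

    // Writes and reads that hit in the write buffer pay only the
    // frontend latency; reads serviced by the DRAM pay frontend +
    // backend (the PHY contribution is assumed to be folded into the
    // backend figure in this sketch).
    Tick responseTime(Tick now, bool servicedByDram,
                      Tick frontendLatency, Tick backendLatency)
    {
        Tick staticLatency = frontendLatency;
        if (servicedByDram)
            staticLatency += backendLatency;
        return now + staticLatency;
    }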
This patch does some minor tidying up of the MSHR and MSHRQueue. The
clean up started as part of some ad-hoc tracing and debugging, but
seems worthwhile enough to go in as a separate patch.
The highlights of the changes are reduced scoping (private members
where possible), avoiding redundant new/delete, and constructor
initialisation to please static code analyzers.
This patch prunes the TraceCPU as the code is stale and the
functionality that it provided can now be achieved with the TrafficGen
using its trace playback mode.
The TraceCPU was able to play back pre-recorded memory traces in a few
different formats. To achieve the same level of flexibility with the
TrafficGen, use util/encode_packet_trace (with suitable
modifications) to create a protobuf trace off-line.
Add a check which ensures that the minimum period for the LINEAR and
RANDOM traffic generator states is less than or equal to the maximum
period. If the minimum period is greater than the maximum period, a
fatal is triggered.
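The check amounts to something like the following sketch (illustrative;
the real code would use gem5's fatal() macro rather than exiting):

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    void checkPeriods(uint64_t minPeriod, uint64_t maxPeriod)
    {
        if (minPeriod > maxPeriod) {
            // fatal() in gem5; plain exit in this standalone sketch.
            std::fprintf(stderr, "Minimum period %llu is larger than "
                         "maximum period %llu\n",
                         (unsigned long long)minPeriod,
                         (unsigned long long)maxPeriod);
            std::exit(EXIT_FAILURE);
        }
    }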
This patch fixes a bug with the traffic generator which occurred when
reading in the state transitions from the configuration
file. Previously, the size of the vector which stored the transitions
was used to get the size of the transitions matrix, rather than using
the number of states. Therefore, if there were more transitions than
states, i.e. some transitions had a probability of less than 1, then
the traffic generator would fatal when trying to check the
transitions.
This issue has been addressed by using the number of input states,
rather than the number of transitions.
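A sketch of the corrected sizing (illustrative names):

    #include <cstddef>
    #include <vector>

    struct Transition { std::size_t from, to; double prob; };

    // The matrix must be numStates x numStates; sizing it from
    // transitions.size() breaks as soon as there are more transitions
    // than states.
    std::vector<std::vector<double>>
    buildTransitionMatrix(std::size_t numStates,
                          const std::vector<Transition> &transitions)
    {
        std::vector<std::vector<double>> matrix(
            numStates, std::vector<double>(numStates, 0.0));
        for (const auto &t : transitions)
            matrix[t.from][t.to] = t.prob;
        return matrix;
    }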
This patch adds an optional request elasticity to the traffic
generator, effectively compensating for it in the case of the linear
and random generators, and adding it in the case of the trace
generator. The accounting is left with the top-level traffic
generator, and the individual generators do the necessary math as part
of determining the next packet tick.
Note that in the linear and random generators we have to compensate
for the blocked time in order not to be elastic; without this patch the
aforementioned generators would slow down in the case of back-pressure.
This patch replaces the queued port with a conventional master port
stalls the traffic generator when requests are not immediately
accepted. This is a first step to allowing elasticity in the injection
of requests.
The patch also adds stats for the sent packets and retries, and
slightly changes how the nextPacketTick and getNextPacket
interact. The advancing of the trace is now moved to getNextPacket and
nextPacketTick is only responsible for answering the question when the
next packet should be sent.
This patch moves the responsibility for sending packets out of the
generator states and leaves it with the top-level traffic
generator. The main aim of this patch is to enable a transition to
non-queued ports, i.e. with send/retry flow control, and to do so it
is much more convenient to not wrap the port interactions and instead
leave it all local to the traffic generator.
The generator states now only govern when they are ready to send
something new, and the generation of the packets to send. They thus
have no knowledge of the port that is used.