Moving the definition of NoFault into fault.hh doesn't bring any new
dependencies with it, and allows some files to include just fault.hh which has
less baggage. NoFault will still be available to everything that includes
faults.hh because it includes fault.hh.
M5 skips over any simulated time where it doesn't have any work to do. When
the simulation is active, the time skipped is short and the work done at any
point in time is relatively substantial. If the time between events is long
and/or the work to do at each event is small, it's possible for simulated time
to pass faster than real time. When running a benchmark that can be good
because it means the simulation will finish sooner in real time. When
interacting with the real world through, for instance, a serial terminal or
bridge to a real network, this can be a problem. Human or network response time
could be greatly exaggerated from the perspective of the simulation and make
simulated events happen "too soon" from an external perspective.
This change adds the capability to force the simulation to run no faster than
real time. It does so by scheduling a periodic event that checks to see if
its simulated period is shorter than its real period. If it is, it stalls the
simulation until they're equal. This is called time syncing.
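A minimal self-contained sketch of the idea (illustrative only; not the actual
M5 event code):

#include <chrono>
#include <thread>

// Periodic time sync check: if simulated time has gotten ahead of real
// (wall-clock) time, stall until the two are equal again.
void timeSyncCheck(std::chrono::nanoseconds simElapsed,
                   std::chrono::steady_clock::time_point realStart)
{
    auto realElapsed = std::chrono::steady_clock::now() - realStart;
    if (simElapsed > realElapsed)
        std::this_thread::sleep_for(simElapsed - realElapsed);
}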
A future change could add pseudo instructions which turn time syncing on and
off from within the simulation. That would allow time syncing to be used for
the interactive parts of a session but then turned off when running a
benchmark using the m5 utility program inside a script. Time syncing would
probably not happen anyway while running a benchmark because there would be
plenty of work for M5 to do, but the event overhead could be avoided.
There's no reason for it to derive from SimLoopExitEvent.
This whole drain thing needs to be redone eventually,
but this is a stopgap to make later changes to
SimLoopExitEvent feasible.
Avoid direct references to mainEventQueue in pseudo-insts
by indirecting through associated CPU object.
Made exitSimLoop() more flexible to enable some of these.
Ran all the source files through 'perl -pi' with this script:
s|\s*(};?\s*)?/\*\s*(end\s*)?namespace\s*(\S+)\s*\*/(\s*})?|} // namespace $3|;
s|\s*};?\s*//\s*(end\s*)?namespace\s*(\S+)\s*|} // namespace $2\n|;
s|\s*};?\s*//\s*(\S+)\s*namespace\s*|} // namespace $1\n|;
Also did a little manual editing on some of the arch/*/isa_traits.hh files
and src/SConscript.
This change is a low level and pervasive reorganization of how PCs are managed
in M5. Back when Alpha was the only ISA, there were only 2 PCs to worry about,
the PC and the NPC, and the lsb of the PC signaled whether or not you were in
PAL mode. As other ISAs were added, we had to add an NNPC, a micro PC, and a
next micro PC; x86 and ARM introduced variable length instruction sets; and ARM
started to keep track of mode bits in the PC. Each CPU model handled PCs in
its own custom way that needed to be updated individually to handle the new
dimensions of variability, or, in the case of ARM's mode-bit-in-the-PC hack,
the complexity could be hidden in the ISA at the ISA implementation's expense.
Areas like the branch predictor hadn't been updated to handle branch delay
slots or micropcs, and it turns out that had introduced a significant (10s of
percent) performance bug in SPARC and to a lesser extent MIPS. Rather than
perpetuate the problem by reworking O3 again to handle the PC features needed
by x86, this change was introduced to rework PC handling in a more modular,
transparent, and hopefully efficient way.
PC type:
Rather than having the superset of all possible elements of PC state declared
in each of the CPU models, each ISA defines its own PCState type which has
exactly the elements it needs. A cross product of canned PCState classes is
defined in the new "generic" ISA directory for ISAs with/without delay slots
and microcode. These are either typedef-ed or subclassed by each ISA. To read
or write this structure through a *Context, you use the new pcState() accessor
which reads or writes depending on whether it has an argument. If you just
want the address of the current or next instruction or the current micro PC,
you can get those through read-only accessors on either the PCState type or
the *Contexts. These are instAddr(), nextInstAddr(), and microPC(). Note the
move away from readPC. That name is ambiguous since it's not clear whether or
not it should be the actual address to fetch from, or if it should have extra
bits in it like the PAL mode bit. Each class is free to define its own
functions to get at whatever values it needs, however it needs to, for use in
ISA-specific code. Eventually Alpha's PAL mode bit could be moved out of the
PC and into a separate field like ARM.
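For example, reading and writing through a ThreadContext looks roughly like
this (the redirect step and method name are illustrative):

// Read the whole PC state, adjust it, and write it back.
TheISA::PCState pc = tc->pcState();   // read: no argument
Addr cur  = tc->instAddr();           // address of the current instruction
Addr next = tc->nextInstAddr();       // address of the next instruction
MicroPC upc = tc->microPC();          // current micro PC
pc.set(newTarget);                    // redirect (method name assumed)
tc->pcState(pc);                      // write: with an argument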
These types can be reset to a particular pc (where npc = pc +
sizeof(MachInst), nnpc = npc + sizeof(MachInst), upc = 0, nupc = 1 as
appropriate), printed, serialized, and compared. There is a branching()
function which encapsulates code in the CPU models that checked if an
instruction branched or not. Exactly what that means in the context of branch
delay slots which can skip an instruction when not taken is ambiguous, and
ideally this function and its uses can be eliminated. PCStates also generally
know how to advance themselves in various ways depending on if they point at
an instruction, a microop, or the last microop of a macroop. More on that
later.
Ideally, accessing all the PCs at once when setting them will improve
performance of M5 even though more data needs to be moved around. This is
because often all the PCs need to be manipulated together, and by getting them
all at once you avoid multiple function calls. Also, the PCs of a particular
thread will have spatial locality in the cache. Previously they were grouped
by element in arrays which spread out accesses.
Advancing the PC:
The PCs were previously managed entirely by the CPU which had to know about PC
semantics, try to figure out which dimension to increment the PC in, what to
set NPC/NNPC, etc. These decisions are best left to the ISA in conjunction
with the PC type itself. Because most of the information about how to
increment the PC (mainly what type of instruction it refers to) is contained
in the instruction object, a new advancePC virtual function was added to the
StaticInst class. Subclasses provide an implementation that moves around the
right element of the PC with a minimal amount of decision making. In ISAs like
Alpha, the instructions always simply assign NPC to PC without having to worry
about micropcs, nnpcs, etc. The added cost of a virtual function call should
be outweighed by not having to figure out as much about what to do with the
PCs and mucking around with the extra elements.
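As a self-contained illustration of that division of labor (simplified
stand-ins, not the real M5 classes):

#include <cstdint>

struct PCState { uint64_t pc = 0, npc = 4; };  // simplified stand-in

struct StaticInst
{
    virtual ~StaticInst() {}
    // Each instruction knows how to move the PC past itself.
    virtual void advancePC(PCState &pcState) const = 0;
};

// An ordinary instruction in a delay-slot-free ISA: PC <- NPC, NPC += 4.
struct SimpleInst : StaticInst
{
    void advancePC(PCState &pcState) const
    {
        pcState.pc = pcState.npc;
        pcState.npc += 4;
    }
};

// A taken direct branch redirects both elements to the target.
struct TakenBranchInst : StaticInst
{
    uint64_t target;
    explicit TakenBranchInst(uint64_t t) : target(t) {}
    void advancePC(PCState &pcState) const
    {
        pcState.pc = target;
        pcState.npc = target + 4;
    }
};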
One drawback of making the StaticInsts advance the PC is that you have to
actually have one to advance the PC. This would, superficially, seem to
require decoding an instruction before fetch could advance. This is, as far as
I can tell, realistic. Fetch would advance through memory addresses, not PCs,
perhaps predicting new memory addresses using existing ones. More
sophisticated decisions about control flow would be made later on, after the
instruction was decoded, and handed back to fetch. If branching needs to
happen, some amount of decoding needs to happen to see that it's a branch,
what the target is, etc. This could get a little more complicated if that gets
done by the predecoder, but I'm choosing to ignore that for now.
Variable length instructions:
To handle variable length instructions in x86 and ARM, the predecoder's
getExtMachInst function now takes the current PC by reference. It can
modify the PC however it needs to (by setting NPC to be the PC + instruction
length, for instance). This could be improved since the CPU doesn't know if
the PC was modified and always has to write it back.
ISA parser:
To support the new API, all PC related operand types were removed from the
parser and replaced with a PCState type. There are two warts on this
implementation. First, as with all the other operand types, the PCState still
has to have a valid operand type even though it doesn't use it. Second, using
syntax like PCS.npc(target) doesn't work for two reasons: it looks like the
syntax for operand type overriding, and the parser can't figure out whether
you're reading or writing. Instructions that use the PCS operand (as I've
consistently called it) need to first read it into a local variable,
manipulate it, and then write it back out.
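In practice that read/modify/write pattern looks something like this inside an
instruction definition (a sketch of the convention, not exact generated code):

// PCS.npc(target);          // not allowed: read vs. write is ambiguous
TheISA::PCState pc = PCS;    // read the PCS operand into a local
pc.npc(target);              // manipulate the local copy
PCS = pc;                    // write the whole PC state back out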
Return address stack:
The return address stack needed a little extra help because, in the presence
of branch delay slots, it has to merge together elements of the return PC and
the call PC. To handle that, a buildRetPC utility function was added. There
are basically only two versions in all the ISAs, but it didn't seem short
enough to put into the generic ISA directory. Also, the branch predictor code
in O3 and InOrder was adjusted so that it always stores the PC of the actual
call instruction in the RAS, not the next PC. If the call instruction is a
microop, the next PC refers to the next microop in the same macroop which is
probably not desirable. The buildRetPC function advances the PC intelligently
to the next macroop (in an ISA specific way) so that that case works.
Change in stats:
There was no change in stats except in MIPS and SPARC in the O3 model. MIPS
runs in about 9% fewer ticks. SPARC runs with 30%-50% fewer ticks, which could
likely be improved further by setting call/return instruction flags and taking
advantage of the RAS.
TODO:
Add != operators to the PCState classes, defined trivially to be !(a==b).
Smooth out places where PCs are split apart, passed around, and put back
together later. I think this might happen in SPARC's fault code. Add ISA
specific constructors that allow setting PC elements without calling a bunch
of accessors. Try to eliminate the need for the branching() function. Factor
out Alpha's PAL mode pc bit into a separate flag field, and eliminate places
where it's blindly masked out or tested in the PC.
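The first TODO item is trivial; something along these lines would do:

// Inequality defined in terms of the existing equality operator.
bool operator!=(const PCState &a, const PCState &b)
{
    return !(a == b);
}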
In the process, add skipFunction() to handle ISA-specific function skipping
instead of ifdefs and other ugliness. For almost all ABIs, 64 bit arguments can
only start in even registers. Size is now passed to getArgument() so that 32
bit systems can make decisions about register selection for 64 bit arguments.
The number argument is now passed by reference because getArgument() will need
to change it based on the size of the argument and the current argument number.
For ARM, if the argument number is odd and a 64-bit argument is requested,
the number must first be incremented because all 64 bit arguments are passed
in an even argument register. Then the number will be incremented again to
access both halves of the argument.
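A self-contained sketch of that even-register rule (stand-in types and
signature, not the real getArgument()):

#include <cstdint>

// regs is a stand-in for a bank of 32-bit argument registers.
uint64_t getArgument(const uint32_t regs[], int &number, unsigned size)
{
    if (size == 8 && (number % 2) != 0)
        number++;                       // 64-bit args start in an even register

    if (size == 8) {
        uint64_t lo = regs[number];
        uint64_t hi = regs[number + 1];
        number += 2;                    // consume both halves
        return lo | (hi << 32);
    }

    return regs[number++];              // 32-bit argument, single register
}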
This reduces the scope of those includes and makes it less likely for there to
be a dependency loop. This also moves the hashing functions associated with
ExtMachInst objects to be with the ExtMachInst definitions and out of
utility.hh.
Also move the "Fault" reference counted pointer type into a separate file,
sim/fault.hh. It would be better to name this less similarly to sim/faults.hh
to reduce confusion, but fault.hh matches the name of the type. We could change
Fault to FaultPtr to match other pointer types, and then changing the name of
the file would make more sense.
Instead of putting all object files into m5/object/__init__.py, interrogate
the importer to find out what should be imported.
Instead of creating a single file that lists all of the embedded python
modules, use static object construction to put those objects onto a list.
Do something similar for embedded swig (C++) code.
This allows two different OS requirements for the same ISA to be handled.
Some OSes are compiled for a virtual address and need to be loaded into physical
memory that starts at address 0, while other bare metal tools generate
images that start at address 0.
Replace direct call to unserialize() on each SimObject with a pair of
calls for better control over initialization in both ckpt and non-ckpt
cases.
If restoring from a checkpoint, loadState(ckpt) is called on each
SimObject. The default implementation simply calls unserialize() if
there is a corresponding checkpoint section, so we get backward
compatibility for existing objects. However, objects can override
loadState() to get other behaviors, e.g., doing other programmed
initializations after unserialize(), or complaining if no checkpoint
section is found. (Note that the default warning for a missing
checkpoint section is now gone.)
If not restoring from a checkpoint, we call the new initState() method
on each SimObject instead. This provides a hook for state
initializations that are only required when *not* restoring from a
checkpoint.
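A hedged sketch of how an object might use the two hooks (the override bodies
and helper names are hypothetical):

class MyDevice : public SimObject
{
  public:
    // Restoring from a checkpoint: the default just calls unserialize()
    // if a section exists; override to do extra work or to insist on one.
    virtual void loadState(Checkpoint *cp)
    {
        SimObject::loadState(cp);
        rebuildDerivedTables();        // hypothetical post-unserialize fixup
    }

    // Not restoring from a checkpoint: one-time cold-start initialization.
    virtual void initState()
    {
        writeBootImageToMemory();      // hypothetical cold-start setup
    }
};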
Given this new framework, do some cleanup of LiveProcess subclasses
and X86System, which were (in some cases) emulating initState()
behavior in startup via a local flag or (in other cases) erroneously
doing initializations in startup() that clobbered state loaded earlier
by unserialize().
Enforce that the Python Root SimObject is instantiated only
once. The C++ Root object already panics if more than one is
created. This change avoids the need to track what the root
object is, since it's available from Root.getInstance() (if it
exists). It's now redundant to have the user pass the root
object to functions like instantiate(), checkpoint(), and
restoreCheckpoint(), so that arg is gone. Users who use
configs/common/Simulate.py should not notice.
If the user sets the environment variable M5_OVERRIDE_PY_SOURCE to
True, then imports that would normally find python code compiled into
the executable will instead first check in the absolute location where
the code was found during the build of the executable. This only
works for files in the src (or extras) directories, not automatically
generated files.
This is a developer feature!
Expand the help text on the --remote-gdb-port option so
people know you can use it to disable remote gdb without
reading the source code, and thus don't waste any time
trying to add a separate option to do that.
Clean up some gdb-related cruft I found while looking
for where one would add a gdb disable option, before
I found the comment that told me that I didn't need
to do that.
Time from base/time.hh has a name clash with Time from Ruby's
TypeDefines.hh. Eventually Ruby's Time should go away, so instead of
fixing this properly just try to avoid the clash.
- Make the initialized flag always available, not just in debug mode.
- Make the Initialized flag actually use several bits so it is very
unlikely that something that's uninitialized accidentally looks
initialized.
- Add an initialized() function that tells you if the current event is
indeed initialized.
- Clear the flags on delete so it can't be accidentally thought of as
initialized.
- Fix getFlags assert statement. "How did this ever work?"
Symbolic names should still be used, but this makes it easier to do
things like:
Event::Priority MyObject_Pri = Event::Default_Pri + 1
Remember that higher numbers are lower priority (should we fix this?)
1) Move alpha-specific code out of page_table.cc:serialize().
2) Begin serializing M5_pid and unserializing it, adding a function to do an optional paramIn so that old checkpoints don't need to be fixed up.
3) Fix up alpha startup code so that the unserialized M5_pid value is properly written to DTB_IPR_ASN.
4) Fix the memory unserialize that I forgot somehow in the last changeset.
5) Add in an agg_se.py to handle aggregated checkpoints. --bench foo-bar plus positional arguments foo bar are the only changes in usage from se.py.
Note this aggregation stuff has only been tested for Alpha and nothing else, though it should take a very minimal amount of work to get it to work with another ISA.
When accessing arguments for a syscall, the position of an argument depends on
the policies of the ISA, how much space preceding arguments took up, and the
"alignment" of the index for this particular argument into the number of
possible storage locations. This change adjusts getSyscallArg to take its
index parameter by reference instead of value and to adjust it to point to the
possible location of the next argument on the stack, basically just after the
current one. This way, the rules for the new argument can be applied locally
without knowing about other arguments since those have already been taken into
account implicitly.
All system calls have also been changed to reflect the new interface. In a
number of cases this made the implementation clearer since it encourages
arguments to be collected in one place in order and then used as necessary
later, as opposed to scattering them throughout the function or using them in
place in long expressions. It also discourages using getSyscallArg over and
over to retrieve the same value when a temporary would do the job.
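The resulting pattern in a syscall implementation looks roughly like this
(gem5-style names assumed; the helper at the end is hypothetical):

SyscallReturn
writeFunc(SyscallDesc *desc, int callnum, LiveProcess *process,
          ThreadContext *tc)
{
    int index = 0;
    // Collect the arguments up front, in order; each call advances
    // 'index' past however much space that argument consumed.
    int fd      = process->getSyscallArg(tc, index);
    Addr bufPtr = process->getSyscallArg(tc, index);
    int nbytes  = process->getSyscallArg(tc, index);

    return doWrite(tc, fd, bufPtr, nbytes);   // hypothetical helper
}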
This adds support for the 32-bit, big endian Power ISA. This supports both
integer and floating point instructions based on the Power ISA Book I v2.06.
Glibc often assumes that memory it receives from the kernel after a brk
system call will contain only zeros. This is important during a calloc,
because it won't clear the new memory itself. With this patch, if the new
page already exists in the simulator it will be cleared, mimicking the
kernel's functionality.
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility type stuff from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
Using a lookup table changed the run time of the SPARC_FS solaris boot
regression from:
real 14m45.951s
user 13m57.528s
sys 0m3.452s
to:
real 12m19.777s
user 12m2.685s
sys 0m2.420s
Start by turning all of the *Source functions into classes
so we can do more calculations and more easily collect the data we need.
Add parameters to the new classes for indicating what sorts of flags the
objects should be compiled with, so we can allow certain files to be compiled
without Werror, for example.
This patch adds limited multithreading support in syscall-emulation
mode, by using the clone system call. The clone system call works
for Alpha, SPARC and x86, and multithreaded applications run
correctly in Alpha and SPARC.
Basically merge it in with Halted.
Also had to get rid of a few other functions that
called ThreadContext::deallocate(), including:
- InOrderCPU's setThreadRescheduleCondition.
- ThreadContext::exit(). This function was there to avoid terminating
simulation when one thread out of a multi-thread workload exits, but we
need to find a better (non-cpu-centric) way.
This is mainly to allow the unit test to run without requiring the standard
M5 stats (e.g. sim_seconds, sim_ticks, host_seconds) to be initialized.
Bogus calls to ChunkGenerator with negative size were triggering
a new assertion that was added there.
Also did a little renaming and cleanup in the process.
We need to add a reference when an object is put on the C++ queue, and remove
a reference when the object is removed from the queue. This was not happening
before and caused a memory problem.
The primary identifier for a hardware context should be contextId(). The
concept of threads within a CPU remains, in the form of threadId() because
sometimes you need to know which context within a cpu to manipulate.
SE. Process still keeps track of the tc's it owns, but registration occurs
with the System; this eases the way for system-wide context IDs based on
registration.
across the subclasses. Generally, make it so that member data is _cpuId and
accessor functions are cpuId(). The ID value comes from Python (default -1 if
none provided), and if it is -1, the index into cpuList will be used. This has
passed util/regress quick and se.py -n4 and fs.py -n4 as well as standard
switch.
Since I never implemented a proper solution, put it back to something that
at least works for now. Once I add more event queues, I'll have to really
fix this, though.
The major thrust of this change is to limit the amount of code
duplication surrounding the code for these functions. This code also
adds two new message types called info and hack. Info is meant to be
less harsh than warn so people don't get confused and start thinking
that the simulator is broken. Hack is a way for people to add runtime
messages indicating that the simulator just executed a code "hack"
that should probably be fixed. The benefit of knowing about these
code hacks is that it will let people know what sorts of inaccuracies
or potential bugs might be entering their experiments. Finally, I've
added some flags to turn on and off these message types so command
line options can change them.
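Usage would look something like this (the exact macro names, e.g. inform()
vs. info(), are an assumption here):

inform("kernel located at: %s", kernel_path);           // informational note
hack("ignoring unimplemented control register write");  // flags a known code hack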
Make them easier to express by only having the cxx_type parameter which
has the full namespace name, and drop the cxx_namespace thing.
Add support for multiple levels of namespace.
Since the early days of M5, an event needed to know which event queue
it was on, and that data was required at the time of construction of
the event object. In the future parallelized M5, this sort of
requirement does not work well since the proper event queue will not
always be known at the time of construction of an event. Now, events
are created, and the EventQueue itself has the schedule function,
e.g. eventq->schedule(event, when). To simplify the syntax, I created
a class called EventManager which holds a pointer to an EventQueue and
provides the schedule interface that is a proxy for the EventQueue.
The intent is that objects that frequently schedule events can be
derived from EventManager and then they have the schedule interface.
SimObject and Port are examples of objects that will become
EventManagers. The end result is that any SimObject can just call
schedule(event, when) and it will just call that SimObject's
eventq->schedule function. Of course, some objects may have more than
one EventQueue, so this interface might not be perfect for those, but
they should be relatively few.
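A minimal sketch of the proxy arrangement (simplified, not the real class
definitions):

class EventManager
{
  protected:
    EventQueue *eventq;    // the queue this object schedules onto

  public:
    EventManager(EventQueue *eq) : eventq(eq) {}

    // Proxy the queue's interface so subclasses (e.g. SimObject, Port)
    // can simply call schedule(event, when).
    void schedule(Event *event, Tick when) { eventq->schedule(event, when); }
    void deschedule(Event *event) { eventq->deschedule(event); }
    void reschedule(Event *event, Tick when) { eventq->reschedule(event, when); }
};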
Targets look like libm5_debug.so. This target can be dynamically
linked into another C++ program and provide just about all of the M5
features. Additionally, this library is a standalone module that can
be imported into Python with an "import libm5_debug" style command.