In the process, add skipFunction() to handle ISA-specific function skipping
instead of ifdefs and other ugliness. For almost all ABIs, 64-bit arguments can
only start in even registers. Size is now passed to getArgument() so that 32
only start in even registers. Size is now passed to getArgument() so that 32
bit systems can make decisions about register selection for 64 bit arguments.
The number argument is now passed by reference because getArgument() will need
to change it based on the size of the argument and the current argument number.
For ARM, if the argument number is odd and a 64-bit argument is requested, the
number must first be incremented because all 64-bit arguments are passed
starting in an even argument register. Then the number is incremented again so
that both halves of the argument are accessed.
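A minimal, self-contained sketch of the even-register rule described above
(the function shape and the regs array are illustrative, not the real
getArgument() interface):

    #include <cstdint>

    // Hypothetical sketch: "regs" stands in for the integer argument
    // registers, and "number" is advanced past everything the current
    // argument consumed.
    uint64_t getArgument(const uint32_t regs[], int &number, unsigned size)
    {
        if (size == 8) {
            if (number % 2 != 0)
                number++;                  // 64-bit args start in an even reg
            uint64_t lo = regs[number];
            uint64_t hi = regs[number + 1];
            number += 2;                   // consume both halves
            return (hi << 32) | lo;
        }
        return regs[number++];             // 32-bit argument
    }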
This reduces the scope of those includes and makes it less likely for there to
be a dependency loop. This also moves the hashing functions associated with
ExtMachInst objects to be with the ExtMachInst definitions and out of
utility.hh.
Also move the "Fault" reference counted pointer type into a separate file,
sim/fault.hh. It would be better to name this less similarly to sim/faults.hh
to reduce confusion, but fault.hh matches the name of the type. We could change
Fault to FaultPtr to match other pointer types, and then changing the name of
the file would make more sense.
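For reference, a sketch of roughly what sim/fault.hh would contain, assuming
the existing RefCountingPtr and FaultBase types (the exact contents here are a
guess):

    // Hedged sketch of sim/fault.hh: just the reference counted pointer
    // typedef, kept apart from the fault class definitions in sim/faults.hh.
    #ifndef __SIM_FAULT_HH__
    #define __SIM_FAULT_HH__

    #include "base/refcnt.hh"

    class FaultBase;
    typedef RefCountingPtr<FaultBase> Fault;

    #endif // __SIM_FAULT_HH__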
When decoding an srs instruction, an invalid mode encoding returns an invalid
instruction. This can happen when garbage instructions are fetched from a
mispredicted path.
This is to help tidy up arch/x86. These files should not be used outside of
the ISA.
--HG--
rename : src/arch/x86/apicregs.hh => src/arch/x86/regs/apic.hh
rename : src/arch/x86/floatregs.hh => src/arch/x86/regs/float.hh
rename : src/arch/x86/intregs.hh => src/arch/x86/regs/int.hh
rename : src/arch/x86/miscregs.hh => src/arch/x86/regs/misc.hh
rename : src/arch/x86/segmentregs.hh => src/arch/x86/regs/segment.hh
This single parameter replaces the collection of bools that set up various
flavors of microops. A flag parameter also allows other flags, like the
serialize before/after flags, to be set without having to change the
constructor.
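A hedged sketch of the shape of such a constructor (class and flag names are
illustrative, not the actual microop classes):

    #include <cstdint>

    // Hypothetical flag bits; adding a new one no longer changes the
    // constructor signature.
    enum MicroopFlag : uint64_t {
        IsMicroop         = 1ULL << 0,
        IsFirstMicroop    = 1ULL << 1,
        IsLastMicroop     = 1ULL << 2,
        IsSerializeBefore = 1ULL << 3,
        IsSerializeAfter  = 1ULL << 4,
    };

    class MicroopBase {
      public:
        // Old style (sketch): MicroopBase(bool micro, bool first, bool last);
        // New style: one flags parameter replaces the collection of bools.
        explicit MicroopBase(uint64_t setFlags) : flags(setFlags) {}
      private:
        uint64_t flags;
    };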
This allows two different OS requirements for the same ISA to be handled.
Some OSes are compiled for a virtual address and need to be loaded into physical
memory that starts at address 0, while other bare metal tools generate
images that start at address 0.
Replace direct call to unserialize() on each SimObject with a pair of
calls for better control over initialization in both ckpt and non-ckpt
cases.
If restoring from a checkpoint, loadState(ckpt) is called on each
SimObject. The default implementation simply calls unserialize() if
there is a corresponding checkpoint section, so we get backward
compatibility for existing objects. However, objects can override
loadState() to get other behaviors, e.g., doing other programmed
initializations after unserialize(), or complaining if no checkpoint
section is found. (Note that the default warning for a missing
checkpoint section is now gone.)
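A hedged sketch of that default, close to (but not necessarily identical to)
the actual SimObject::loadState():

    // Default loadState(): forward to unserialize() when a checkpoint
    // section for this object exists; objects can override this for other
    // behavior.  Note there is no longer a warning for a missing section.
    void
    SimObject::loadState(Checkpoint *cp)
    {
        if (cp->sectionExists(name()))
            unserialize(cp, name());
    }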
If not restoring from a checkpoint, we call the new initState() method
on each SimObject instead. This provides a hook for state
initializations that are only required when *not* restoring from a
checkpoint.
Given this new framework, do some cleanup of LiveProcess subclasses
and X86System, which were (in some cases) emulating initState()
behavior in startup() via a local flag or (in other cases) erroneously
doing initializations in startup() that clobbered state loaded earlier
by unserialize().
When doing an unsigned 64 bit division with a divisor that has its most
significant bit set, the division code would spill a bit off of the end of a
uint64_t trying to shift the dividend into position. This change adds code
that handles that case specially by purposefully letting it spill and then
going ahead assuming there was a 65th one bit.
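A hypothetical, self-contained sketch of the idea (not the actual x86
microcode): shift-and-subtract division where a bit shifted off the top of the
64-bit remainder acts as an implicit 65th bit that forces the quotient bit to
one.

    #include <cstdint>

    // Restoring division: when the remainder's top bit spills out, the true
    // 65-bit remainder is guaranteed to be >= the divisor, so the subtraction
    // is done unconditionally (the wrap-around gives the right 64-bit value).
    void udiv64(uint64_t dividend, uint64_t divisor,
                uint64_t &quotient, uint64_t &remainder)
    {
        quotient = 0;
        remainder = 0;
        for (int i = 63; i >= 0; --i) {
            bool spilled = remainder >> 63;            // the "65th" bit
            remainder = (remainder << 1) | ((dividend >> i) & 1);
            if (spilled || remainder >= divisor) {
                remainder -= divisor;
                quotient |= uint64_t(1) << i;
            }
        }
    }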
When each load or store is sent to the LSQ, we check whether it will cross a
cache line boundary and, if so, split it in two. This creates two TLB
translations and two memory requests. Care has to be taken if the first
packet of a split load is sent but the second is blocked by the cache.
Similarly, for a store, if the first packet cannot be sent, we must store the
second one somewhere to retry it later.
This modifies the LSQSenderState class to record both packets in a split
load or store.
Finally, a new const variable, HasUnalignedMemAcc, is added to each ISA
to indicate whether unaligned memory accesses are allowed. This is used
throughout the changed code so that the compiler can optimise away code dealing
with split requests for ISAs that don't need them.
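A sketch of how such a per-ISA constant is typically declared and used (the
file name, value, and surrounding LSQ code are illustrative):

    // In the ISA's traits header (e.g. arch/<isa>/isa_traits.hh):
    const bool HasUnalignedMemAcc = true;

    // In the LSQ: because the guard is a compile-time constant, the
    // split-access path is dead code for ISAs that never issue unaligned
    // accesses and the compiler can remove it.
    if (TheISA::HasUnalignedMemAcc && requestCrossesCacheLine) {
        // ... create and track the two split packets ...
    }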
Some of the micro-ops weren't casting 1 to ULL before shifting,
which means the shift happens at plain int width and can produce
sign-extended or undefined results. On the perl makerand input this
caused some values to be negative that shouldn't have been.
The casts are done as ULL(1) instead of 1ULL to match others
in the m5 code base.
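A small sketch of the failure mode and the fix (the shift amount is just an
example; m5 defines the ULL() macro roughly as shown):

    #include <cstdint>

    #define ULL(N)  ((uint64_t)N##ULL)     // 64-bit literal helper, as in m5

    uint64_t example(int bit)              // e.g. bit = 40
    {
        // uint64_t bad = 1 << bit;        // int shift: wrong for bit >= 31
        uint64_t good = ULL(1) << bit;     // 64-bit shift, as the fixed microops do
        return good;
    }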
Unfortunately my implementation of the movd instruction had two bugs.
In one case, when moving a 32-bit value into an xmm register, the
lower half of the xmm register was not zero extended.
The other case is that xmm was used instead of xmmlm as the source
for a register move. My test case didn't catch this at first
because it moved xmm0 to eax, and both happen to have the same register
number.
This double cast led to rounding errors that caused
some benchmarks to get the wrong values, most notably lucas,
which failed spectacularly due to CVTTSD2SI returning an
off-by-one value. equake was also broken.
This problem is like the one fixed with movhpd a few weeks ago.
A +8 displacement is used to access memory when there should
be none.
This fix is needed for the perlbmk spec2k benchmark to run.
64-bit vsyscall is different from 32-bit.
There are only two syscalls, time and gettimeofday.
On a real system, complicated code implements these without entering the
kernel; reproducing that in m5 would be difficult.
Instead we just place code that calls the regular syscalls (this is how
tools such as valgrind handle this case).
This is needed for the perlbmk spec2k benchmark.
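A hypothetical sketch of the approach: each vsyscall slot is filled with a
tiny stub that just issues the normal syscall (the exact bytes in the real
change may differ; 201 is x86-64's time syscall number, and the gettimeofday
slot would get an analogous stub using number 96).

    #include <cstdint>

    // "mov $201, %rax; syscall; ret" -- falls back to the regular syscall path.
    static const uint8_t vtimeBlob[] = {
        0x48, 0xc7, 0xc0, 0xc9, 0x00, 0x00, 0x00,   // mov $201, %rax
        0x0f, 0x05,                                 // syscall
        0xc3                                        // ret
    };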
These are complicated instructions and the micro-code might be suboptimal.
This has been tested with some small sample programs (attached).
The psrldq instruction is needed by various spec2k programs.
This patch implements the movd_Vo_Edp series of instructions.
It addresses various concerns by Gabe Black about which file the
instruction belonged in, as well as supporting REX prefixed
instructions properly.
This instruction is needed for some of the spec2k benchmarks, most
notably bzip2.
This patch implements the haddpd instruction.
It fixes the problem in the previous version (pointed out by Gabe Black)
where an incorrect result would be produced if the instruction is issued
with the same register for both operands, e.g. "haddpd %xmm0,%xmm0".
This instruction is used by many spec2k benchmarks.
This patch hooks up the truncate, ftruncate, truncate64 and ftruncate64
system calls on 32-bit and 64-bit X86.
These have been tested on both architectures.
ftruncate/ftruncate64 is needed for the f90 spec2k benchmarks.
When accessing arguments for a syscall, the position of an argument depends on
the policies of the ISA, how much space preceding arguments took up, and the
"alignment" of the index for this particular argument into the number of
possible storage locations. This change adjusts getSyscallArg to take its
index parameter by reference instead of value and to adjust it to point to the
possible location of the next argument on the stack, basically just after the
current one. This way, the rules for the new argument can be applied locally
without knowing about other arguments since those have already been taken into
account implicitly.
All system calls have also been changed to reflect the new interface. In a
number of cases this made the implementation clearer since it encourages
arguments to be collected in one place in order and then used as necessary
later, as opposed to scattering them throughout the function or using them in
place in long expressions. It also discourages using getSyscallArg over and
over to retrieve the same value when a temporary would do the job.
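A hedged sketch of caller code under the new interface (the syscall and
argument names are just an example; getSyscallArg itself advances the index):

    // Hypothetical pwrite64-style handler: collect the arguments once, in
    // order.  Each call moves "index" past whatever storage the argument
    // consumed, so a 64-bit argument on a 32-bit target takes two slots
    // without the caller doing any bookkeeping.
    int index = 0;
    int      fd     = process->getSyscallArg(tc, index);
    Addr     bufPtr = process->getSyscallArg(tc, index);
    int      nbytes = process->getSyscallArg(tc, index);
    uint64_t offset = process->getSyscallArg(tc, index);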
The movdqa instruction should enforce 16-byte alignment.
This implementation does not do that.
These instructions are needed for most of x86_64 spec2k to run.
The st_size entry was in the wrong place
(see linux-2.6.29/arch/x86/include/asm/stat.h).
Also, the packed attribute is needed when compiling on a
64-bit machine; otherwise gcc adds extra padding that
breaks the layout of the structure.
I've tested these on x86 and they work as expected.
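A hypothetical sketch of the padding problem (field names and layout
abbreviated; only the packing matters here):

    #include <cstdint>

    // Guest-visible stat-style layout.  Without the packed attribute a
    // 64-bit host compiler pads st_size up to an 8-byte boundary (offset 24
    // instead of 20 here), shifting it and every later field away from the
    // offsets the 32-bit guest expects.
    struct target_stat64 {
        uint64_t st_dev;      // offset 0
        uint32_t st_ino;      // offset 8
        uint32_t st_mode;     // offset 12
        uint32_t st_nlink;    // offset 16
        uint64_t st_size;     // offset 20 when packed
        // ... remaining fields ...
    } __attribute__((packed));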
In theory for 32-bit x86 we should have some sort of special
handling for the legacy 16-bit uid/gid syscalls, but in practice
modern toolchains don't use the 16-bit versions, and m5 sets the uid
and gid values to fit in 16 bits anyway.
This fix is needed for the perl spec2k benchmarks to run.
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility type stuff from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
The manuals from both AMD and Intel say that when writing to a 32-bit
destination in 64-bit mode, the upper 32 bits of the register are filled with
zeros. They also both say that the CMOV instructions leave their destination
alone when their condition fails. Unfortunately, on both platforms it seems
that CMOV will zero extend its destination register whether or not it was
actually supposed to do a move. This seems to be the only case where this
happens, but it would be hard to say for sure.
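A small sketch of the resulting semantics (plain C++, not the actual
microcode): even when the condition is false, the 32-bit destination write
still clears the upper half.

    #include <cstdint>

    // Observed CMOV behavior with a 32-bit destination in 64-bit mode: the
    // destination is always written, with either the source or its old
    // value, and the 32-bit write zero-extends in both cases.
    uint64_t cmov32(uint64_t oldDest, uint64_t src, bool condition)
    {
        uint32_t result = condition ? uint32_t(src) : uint32_t(oldDest);
        return uint64_t(result);   // upper 32 bits are zero either way
    }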
This is my best guess as to what these should do. Other existing microops
use implicit registers, mul1s and mul1u for instance, so this should be ok.
The microop that loads the implicit DoubleBits register would fall into one
of the microop slots for moving to/from special registers.
Register values will be "picked", which ensures they don't have junk beyond
the part we're using. Immediate values don't go through a similar process, so
we should truncate them explicitly.
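A self-contained sketch of the explicit truncation (the helper name and
interface are hypothetical; in the microop code this would be driven by the
operand's data size):

    #include <cstdint>

    // Mask an immediate down to the operand size in bytes, mirroring what
    // the register "pick" step does for register sources.
    static inline uint64_t truncateImm(uint64_t imm, int dataSize)
    {
        int nbits = dataSize * 8;
        uint64_t mask = (nbits >= 64) ? ~0ULL : ((1ULL << nbits) - 1);
        return imm & mask;
    }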