We are removing doGmReturn from the GM pipe and adding completeAcc()
implementations for the HSAIL mem ops. The behavior in doGmReturn is
specific to HSAIL and the HSAIL mem ops; the completion phase of memory
ops in a machine ISA, however, can be very different, even among
individual machine ISA mem ops. So we remove this functionality from the
pipeline and allow it to be implemented by the individual instructions.
This patch removes the GPUStaticInst enums that were defined in GPU.py.
Instead, a simple set of attribute flags that can be set in the base
instruction class is used. This will help unify the attributes of HSAIL
and machine ISA instructions within the model itself.
Because the static instruction now carries the attributes, a GPUDynInst
must carry a pointer to a valid GPUStaticInst, so a new static kernel
launch instruction is added, which carries the attributes needed to
perform the kernel launch.
Modify the opClass assigned to AArch64 FP instructions from SimdFloat* to
Float*. Also create the FloatMemRead and FloatMemWrite opClasses, which
distinguish accesses to the FP register bank from those to the INT bank.
Change the latency of (Simd)FloatMultAcc to 5, based on the Cortex-A72,
where the "latency" of FMADD is 3 cycles if the next instruction is an
FMADD whose only dependency is augend-to-destination, and 7 cycles
otherwise.
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Normal MMAPPED_IPR requests are allowed to execute speculatively under the
assumption that they have no side effects. The special case of m5ops that are
treated like MMAPPED_IPR should not be allowed to execute speculatively, since
they can have side-effects. Adding the STRICT_ORDER flag to these requests
blocks execution until the associated instruction hits the ROB head.
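A minimal sketch of the flag being applied, assuming a request object
already built for the m5op access (the surrounding construction code is
not from this patch):

    // gem5 fragment: mark the m5op access as strictly ordered so the CPU
    // cannot execute it speculatively; the access is held back until the
    // instruction reaches the head of the ROB.
    req->setFlags(Request::STRICT_ORDER);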
This patch fixes issues with changeset 11593.
Use the host's pwrite() syscall for pwrite64Func(),
as opposed to pwrite64(), because pwrite64() does
not work well on all distros.
Undo the enabling of fstatfs, as we will add this
in a separate patch.
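For reference, a standalone illustration of the host pwrite() call that
pwrite64Func() falls back on; this is plain POSIX usage, not code taken
from the patch:

    #include <fcntl.h>
    #include <unistd.h>

    int
    main()
    {
        int fd = open("out.bin", O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return 1;
        const char buf[] = "gem5";
        // pwrite() writes at an explicit offset without moving the file
        // position, which is what the emulated pwrite64 needs; the host
        // pwrite64() wrapper is not reliably usable on all distros.
        ssize_t ret = pwrite(fd, buf, 4, 128);
        close(fd);
        return ret == 4 ? 0 : 1;
    }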
Introduce and use a lookup table.
Using fetchDescriptor() rather than DMA cleanly handles nested paging.
Change-Id: I69ec762f176bd752ba1040890e731826b58d15a6
During host bootup, KVM reads/writes to CNTHCTL_EL2. Because this
miscreg has not been implemented, the simulation would end there. This
patch causes the simulation to warn about the read/write instead of fail.
Change-Id: If034bfd0818a9a5e50c5fe86609e945258c96fa3
This fixes a bug where stage 2 lookups used the AArch32
permissions rules even if we were executing in AArch64 mode.
Change-Id: Ia40758f0599667ca7ca15268bd3bf051342c24c1
This patch restricts trapping to the hypervisor to the case where we are
in the correct exception level for the trap to happen.
Change-Id: I0a382b6a572ef835ea36d2702b8a81b633bd3df0
Faults that could potentially be routed to the hypervisor checked
whether they were in a secure state without checking whether security
was enabled. This caused faults not to be routed correctly. This patch
makes the secure-state check first ask whether security is enabled.
Change-Id: I179e9b181b27f552734c9bab2b18d05ac579a119
We recompute whether we are doing a stage 2 walk inside the table walker,
but we have already figured it out in the TLB. Pass the information to
the walk instead of recomputing it.
Change-Id: I39637ce99309b2ddbc30344d45ac9ebf6a203401
The functional case is already handled within the fetchDescriptor()
function. We can thus use that function for both atomic and functional
mode when we start the table walk.
Change-Id: Iacaed28cd9024d259fd37a58150efd00ff94d86e
This patch adds the option for faults to be routed to the hypervisor
using the pre-existing routeToHyp() functions that are present in each
fault type.
Change-Id: I9735512c094457636b9870456a5be5432288e004
During address translation instructions (such as AT S1E1R_Xt), the
exception level can be different from the current exception level. This
patch fixes how the TLB determines which EL to use during these
instructions.
Change-Id: Ia9ce229404de9e284bc1f7479fd2c580efd55f8f
This patch adds the AArch64 hvc instruction, which raises an exception
from EL1 into EL2. The host OS uses this instruction to world-switch
into the guest.
Change-Id: I930ee43f4f0abd4b35a68eb2a72e44e3ea6570be
Make it so that getInterrupt *always* returns an interrupt if
checkInterrupts() returns true. This fixes/simplifies handling
of interrupts on the SMT FS CPUs (currently the Minor CPU).
Don't consult the TLB test interface for PAs returned by functional
translations by the AT instruction. We implement this by changing the
ISA code to synthesize 0-length functional reads for the TLB lookup.
The TLB then bypasses the final PA check in the tester if the size is
zero.
Change-Id: I2487b7f829cea88c37e229e9fc7a4543aced961b
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Previously, when we initialized the TLB, we would allocate a number of
TLB entries that were marked as valid. As a result, the TLB contained
an entry that would be considered a valid entry for page 0.
Change-Id: I23ace86426a171a4f6200ebeb29ad57c21647036
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Add helper functions to dump the guest kernel's dmesg buffer to a text
file in m5out. This functionality is split into two parts. First, a
dmesg dump function that can be used in other places:
void Linux::dumpDmesg(ThreadContext *, std::ostream &)
This function is used to implement two PCEvents: DmesgDumpEvent and
KernelPanic event. The only difference between the two is that the
latter produces a gem5 panic instead of a warning in addition to
dumping the kernel log.
Change-Id: I6d2af1d666ace57124089648ea906f6c787ac63c
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Gabor Dozsa <gabor.dozsa@arm.com>
There is a mismatch between DataType and SrcDataType in constructing the
Atomic ST instruction. The mismatch causes the atomic_store and
atomic_store_explicit functions to store an incorrect value in memory.
Eliminate the VSZ constant that defined the Wavefront size (in number of
work items); replace it with a parameter in the GPU.py configuration
script. Change all data structures dependent on the Wavefront size to be
dynamically sized. Legal values of the Wavefront size are 16, 32, and 64
for now and are checked at initialization time.
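A minimal sketch of the initialization-time check described above; the
variable name and the use of gem5's fatal() are assumptions, not code
lifted from the patch:

    // Reject anything other than the supported wavefront sizes up front.
    if (wfSize != 16 && wfSize != 32 && wfSize != 64)
        fatal("Unsupported wavefront size %d; legal values are 16, 32, 64\n",
              wfSize);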
Fix an issue with regStats not calling the parent class method
for most SimObjects in gem5. This causes issues if one adds new
stats in the base class (since they are never initialized properly!).
Change-Id: Iebc5aa66f58816ef4295dc8e48a357558d76a77c
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
We want to extend the stats of objects hierarchically, and thus it is
necessary to register the statistics of the base class(es) as well. For
now, these are empty, but generic stats will be added there.
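A minimal sketch of the pattern these two changes enforce; the class and
stat names are hypothetical, and only the call into the base class's
regStats() is the point:

    void
    DerivedObject::regStats()
    {
        // Register the base class's statistics first so that stats added
        // there are initialized properly.
        BaseObject::regStats();

        exampleStat
            .name(name() + ".exampleStat")
            .desc("hypothetical statistic owned by DerivedObject");
    }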
Patch originally provided by Akash Bagdia at ARM Ltd.
In particular, when EL0 is in AArch32 but EL1 is AArch64, AArch64
memory translation must be used. This is essential for typical
AArch64/32 interworking use cases.
The ERET instruction doesn't set PSTATE correctly in some cases
(particularly when returning to aarch32 code). Among other things,
this breaks EL0 thumb code when using a 64-bit kernel. This changeset
updates the ERET implementation to match the ARM ARM.
Change-Id: I408e7c69a23cce437859313dfe84e68744b07c98
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nathanael Premillieu <nathanael.premillieu@arm.com>
The current implementation of aarch32 FP/SIMD in gem5 assumes that EL1
and higher are all 32-bit. This breaks interprocessing since an
aarch64 EL1 uses different enable/disable bits. This change updates
the permission checks according to what is prescribed by the ARM
ARM.
Change-Id: Icdcef31b00644cfeebec00216b3993aa1de12b88
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Mitch Hayenga <mitch.hayenga@arm.com>
Reviewed-by: Nathanael Premillieu <nathanael.premillieu@arm.com>
LPAE has been tested with Linux 4.4 and seems to work just fine. Let's
enable it by default.
Change-Id: Id88c6e3c91ae9c353279d42f2aa1f8a78485bd32
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Gabor Dozsa <gabor.dozsa@arm.com>
According to the ARM ARM (see AArch32.TranslateAddress in the
pseudocode library), the TLB should be operating in aarch64 mode if
EL0 is aarch32 and EL1 is aarch64. This is currently not the case
in gem5, which breaks 64/32 interprocessing. Update the check to match
the reference manual.
Change-Id: I6f1444d57c0e2eb5f8880f513f33a9197b7cb2ce
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Mitch Hayenga <mitch.hayenga@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
We currently check the current state instead of the state of the
target EL when determining how we report a fault. This breaks
interprocessing since EL0 in aarch32 would report its fault status
using the aarch32 registers even if EL1 is in aarch64. Fix this to
report the fault using the format of the target EL.
Change-Id: Ic080267ac210783d1e01c722a4ddaa687dce280e
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Mitch Hayenga <mitch.hayenga@arm.com>
The TLB currently assumes that the pxn bit in an LPAE page descriptor
disables execution from unprivileged mode. However, according to the
architecture manual, this bit should disable execution from privileged
modes. Update the TLB implementation to reflect this behavior.
Change-Id: I7f1bb232d7a94a93fd601a9230223195ac952947
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
A lot of code assumes that it is possible to test what the highest EL
is and if it is 64 bit. These calls currently don't work in SE mode
since they rely on an instance of an ArmSystem.
Change-Id: I0d1f261926a66ce3dc4fa116845ffb2a081446f2
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Reviewed-by: Nathanael Premillieu <nathanael.premillieu@arm.com>
This patch fixes an issue identified by ASAN where the Neon64Load
operation assumes the packet always contains 16 bytes.
Change-Id: If24a7e461d60cb80970dfbe61d923d7d56926698
Reviewed-by: Giacomo Gabrielli <giacomo.gabrielli@arm.com>
Reviewed-by: Curtis Dunham <curtis.dunham@arm.com>
According to the Intel Multi Processor Specification rev 1.4 (-006) (*),
section 4.3.2 Bus Entries, Bus type strings are >>6-character ASCII
(blank-filled) strings<<.
This patch properly pads the entries with the missing spaces at the end.
(*) http://www.intel.com/design/pentium/datashts/24201606.pdf
Committed by Jason Lowe-Power <power.jg@gmail.com>
Registers are 0x10 and not 0x8 apart. The latter leads to invalid
calculations of the index into the array, which in turn means that we
will not find the interrupt we were looking for (had been notified of)
in the OS.
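A small standalone illustration of why the spacing matters for the index
calculation (base address and register offsets are made up):

    #include <cassert>
    #include <cstdint>

    int
    main()
    {
        const uint64_t base = 0x1000;    // hypothetical register bank base
        const uint64_t regAddr = 0x1020; // third register in the bank
        // Registers are 0x10 apart, so the array index uses a 0x10 stride.
        int index = (regAddr - base) / 0x10;
        assert(index == 2);              // a 0x8 stride would give 4: wrong slot
        return 0;
    }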
Committed by Jason Lowe-Power <power.jg@gmail.com>
The LinuxArmSystem class normally provides support for panicking gem5
if the simulated kernel panics. When this is turned off (default),
gem5 uses a BreakPCEvent to provide a debugger hook into the simulator
when the kernel crashes. This hook unconditionally kills gem5 with a
SIGTRAP unless gem5 is compiled in fast mode. This is undesirable
since the panic_on_panic param already provides similar functionality.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Add support for overriding the number of interrupt lines in the ARM
KvmGic.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Factor out the kernel device wrapper from the KvmGIC and put it in a
separate class. This will simplify a future kernel/gem5 hybrid GIC.
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
Reviewed-by: Radhika Jagtap <radhika.jagtap@arm.com>
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com>
Remove the assertion that we always need to add a delta larger than
zero as that does not seem to be true when we hit it in the
'PMU reset cycle counter to zero' case.
Committed by Jason Lowe-Power <power.jg@gmail.com>
A few warnings (and thus errors) pop up after being added to -Wall:
1. -Wmisleading-indentation
In the auto-generated code there were instances of if/else blocks that
were not indented to gcc's liking. This is addressed by adding braces.
2. -Wshift-negative-value
gcc is clever enough to consider ~0 a negative constant, and
rightfully complains. This is addressed by using mask(), which
explicitly casts to unsigned before shifting (see the sketch below).
That is all. Porting done.
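A standalone illustration of the -Wshift-negative-value case; mask() here
is a local stand-in for gem5's helper of the same name:

    #include <cassert>
    #include <cstdint>

    // Stand-in for gem5's mask(): the low nbits set, built from an unsigned
    // constant so no negative value is ever shifted.
    static inline uint64_t mask(int nbits) { return (1ULL << nbits) - 1; }

    int
    main()
    {
        uint64_t value = 0x12345;
        // Old pattern, which warns because ~0 is a negative int:
        //     value & ~(~0 << 12)
        uint64_t low12 = value & mask(12);
        assert(low12 == 0x345);
        return 0;
    }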
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
The following patches had unexpected interactions with the current
upstream code and have been reverted for now:
e07fd01651f3: power: Add support for power models
831c7f2f9e39: power: Low-power idle power state for idle CPUs
4f749e00b667: power: Add power states to ClockedObject
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.
Add 4 power states to ClockedObject and provide the necessary access
functions to check and update the power state. The default power state is
UNDEFINED; it is the responsibility of the respective simulation model to
provide the startup state and any other logic for state changes.
Add a number-of-transitions stat.
Add a distribution of time spent in the clock-gated state.
Add a power-state residency stat.
Add a dump callback function so the distribution and residency stats can
be updated at dump time.
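A minimal sketch of the state set being added; apart from UNDEFINED,
which the description names explicitly, the enumerator names below are
assumptions:

    // Coarse per-ClockedObject power states; the owning simulation model
    // is responsible for moving the object out of UNDEFINED at startup.
    enum class PwrState {
        UNDEFINED,       // default until the model sets a real state
        ON,              // names below are hypothetical
        CLK_GATED,
        SRAM_RETENTION,
        OFF
    };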
The openFlagTable and mmapFlagTables for emulated Linux
platforms are basically identical, but are specified
repetitively for every platform. Use a common file
that gets included for each platform so that we only
have one copy, making them more consistent and simplifying
changes (like adding #ifdefs).
In the process, made some minor fixes that slipped through
due to previous inconsistencies, and added more #ifdefs
to try to fix building on alternative hosts.
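A sketch of the kind of shared, #ifdef-guarded table entry this enables;
the macro and field layout are illustrative, not necessarily what the
patch uses:

    // One shared entry per flag, guarded so the table still builds on
    // hosts whose libc does not define the flag.
    #ifdef O_DIRECT
        { TGT_O_DIRECT, O_DIRECT },
    #endif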
Refactor the TLB and page table walker test interface to use a dynamic
registration mechanism. Instead of patching a couple of empty methods
to wire up a TLB tester, this change allows such testers to register
themselves using the setTestInterface() method.
Libraries are loaded into the process address space using the
mmap system call. Conveniently, this happens to be a good
time to update the process symbol table with the library's
incoming symbols so we handle the table update from within the
system call.
This works just like an application's normal symbols. The only
difference between a dynamic library and a main executable is
when the symbol table update occurs. The symbol table update for
an executable happens at program load time and is finished before
the process ever begins executing. Since dynamic linking happens
at runtime, the symbol loading happens after the library is
first loaded into the process address space. The library binary
is examined at this time for a symbol section and that section
is parsed for symbol types with specific bindings (global,
local, weak). Subsequently, these symbols are added to the table
and are available for use by gem5 for things like trace
generation.
Checkpointing should work just as it did previously. The address
space (and therefore the library) will be recorded and the symbol
table will be entirely recorded. (It's not possible to do anything
clever like checkpoint a program and then load the program back
with different libraries with LD_LIBRARY_PATH, because the
library becomes part of the address space after being loaded.)
The mmapGrowsDown() method was a static method on the OperatingSystem
class (and derived classes), which worked OK for the templated syscall
emulation methods, but made it hard to access elsewhere. This patch
moves the method to be a virtual function on the LiveProcess class,
where it can be overridden for specific platforms (for now, Alpha).
This patch also changes the value of mmapGrowsDown() from being false
by default and true only on X86Linux32 to being true by default and
false only on Alpha, which seems closer to reality (though in reality
most people use ASLR and this doesn't really matter anymore).
In the process, also got rid of the unused mmap_start field on
LiveProcess and the unused OperatingSystem mmapGrowsUp variable.
For O3, which has a stat that counts reg reads, there is an additional
reg read per mmap() call since there's an arg we no longer ignore.
Otherwise, stats should not be affected.
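A minimal sketch of the hook as a virtual method; the declarations are
assumed for illustration, not copied from the patch:

    class LiveProcess
    {
      public:
        // Default: the mmap region grows downward.
        virtual bool mmapGrowsDown() const { return true; }
    };

    class AlphaLiveProcess : public LiveProcess
    {
      public:
        // Alpha is the one platform whose mmap region grows upward.
        bool mmapGrowsDown() const override { return false; }
    };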
The structure definition was named with only the open system call's flags
in mind, so we rename it here with the intention of using it to define
additional tables to translate flags for other system calls in the future.
This changeset adds support for changing the simulator output
directory. This can be useful when the simulation goes through several
stages (e.g., a warming phase, a simulation phase, and a verification
phase) since it allows the output from each stage to be located in a
different directory. Relocation is done by calling core.setOutputDir()
from Python or simout.setOutputDirectory() from C++.
This change affects several parts of the design of the gem5's output
subsystem. First, files returned by an OutputDirectory instance (e.g.,
simout) are of the type OutputStream instead of a std::ostream. This
allows us to do some more bookkeeping and control re-opening of files
when the output directory is changed. Second, new subdirectories are
OutputDirectory instances, which should be used to create files in
that sub-directory.
Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
[sascha.bischoff@arm.com: Rebased patches onto a newer gem5 version]
Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
Signed-off-by: Andreas Sandberg <andreas.sandberg@arm.com>
This patch adds assertions that enforce that only invalidating snoops
will ever reach into the logic that tracks in-order load completion and
also invalidation of LL/SC (and MONITOR / MWAIT) monitors. Also adds
some comments to MSHR::replaceUpgrades().
Properly done for the ERET instruction in v8, but not for v7.
Many control register changes are only visible after explicit
instruction synchronization barriers or exception entry/exit.
This means mode-changing instructions should squash any
younger in-flight speculative instructions.
The previous implementation did a pair of nested RMW operations,
which isn't compatible with the way that locked RMW operations are
implemented in the cache models. It was convenient though in that
it didn't require any new micro-ops, and supported cmpxchg16b using
64-bit memory ops. It also worked in AtomicSimpleCPU where
atomicity was guaranteed by the core and not by the memory system.
It did not work with timing CPU models though.
This new implementation defines new 'split' load and store micro-ops
which allow a single memory operation to use a pair of registers as
the source or destination, then uses a single ldsplit/stsplit RMW
pair to implement cmpxchg. This patch requires support for 128-bit
memory accesses in the ISA (added via a separate patch) to support
cmpxchg16b.
Although the cache models support wider accesses, the ISA descriptions
assume that (for the most part) memory operands are integer types,
which makes it difficult to define instructions that do memory accesses
larger than 64 bits.
This patch adds some generic support for memory operands that are arrays
of uint64_t, and specifically a 'u2qw' operand type for x86 that is an
array of 2 uint64_ts (128 bits). This support is unused at this point,
but will be needed shortly for cmpxchg16b. Ideally the 128-bit SSE
memory accesses will also be rewritten to use this support.
Support for 128-bit accesses could also have been added using the gcc
__int128_t extension, which would have been less disruptive. However,
although clang also supports __int128_t, it's still non-standard.
Also, more importantly, this approach creates a path to defining
256- and 512-bit operands as well, which will be useful for eventual
AVX support.
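A standalone sketch of what an array-of-uint64_t memory operand amounts
to at the data level; the u2qw name comes from the description above,
while the low/high ordering is an assumption:

    #include <cstdint>

    // A 128-bit memory operand represented as two 64-bit halves, as the
    // x86 'u2qw' operand type does; element 0 is assumed to be the low
    // quadword.
    struct U2qw { uint64_t qw[2]; };

    int
    main()
    {
        U2qw v = {{0x1122334455667788ULL, 0x99aabbccddeeff00ULL}};
        return v.qw[0] == 0x1122334455667788ULL ? 0 : 1;
    }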
MemOperand variables were being initialized to 0
"to avoid 'uninitialized variable' errors" but these
no longer seem to be a problem (with the exception of
one use case in POWER that is arguably broken and
easily fixed here).
Getting rid of the initialization is necessary to
set up a subsequent patch which extends memory
operands to possibly not be scalars, making the
'= 0' initialization no longer feasible.
Writing 16 bytes from an 8-byte source value is a bad idea.
This doesn't appear to have broken anything, but showed up
as spurious differences when tracediffing runs.
Result of running 'hg m5style --skip-all --fix-control -a' to get
rid of '== true' comparisons, plus trivial manual edits to get
rid of '== false'/'== False' comparisons.
Left a couple of explicit comparisons in where they didn't seem
unreasonable:
invalid boolean comparison in src/arch/mips/interrupts.cc:155
>> DPRINTF(Interrupt, "Interrupts OnCpuTimerINterrupt(tc) == true\n");<<
invalid boolean comparison in src/unittest/unittest.hh:110
>> "EXPECT_FALSE(" #expr ")", (expr) == false)<<
In the process of trying to get rid of an '== false' comparison,
it became apparent that a slightly more involved solution was
needed. Split this out into its own changeset since it's not
a totally trivial local change like the others.
For historical reasons, the ExecContext interface had a single
function, readMem(), that did two different things depending on
whether the ExecContext supported atomic memory mode (i.e.,
AtomicSimpleCPU) or timing memory mode (all the other models).
In the former case, it actually performed a memory read; in the
latter case, it merely initiated a read access, and the read
completion did not happen until later when a response packet
arrived from the memory system.
This led to some confusing things, including timing accesses
being required to provide a pointer for the return data even
though that pointer was only used in atomic mode.
This patch splits this interface, adding a new initiateMemRead()
function to the ExecContext interface to replace the timing-mode
use of readMem().
For consistency and clarity, the readMemTiming() helper function
in the ISA definitions is renamed to initiateMemRead() as well.
For x86, where the access size is passed in explicitly, we can
also get rid of the data parameter at this level. For other ISAs,
where the access size is determined from the type of the data
parameter, we have to keep the parameter for that purpose.
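A sketch of the resulting split, with simplified signatures; these are
illustrative, not the exact ExecContext declarations:

    // Atomic mode: perform the whole read and return the data in place.
    Fault readMem(Addr addr, uint8_t *data, unsigned size,
                  Request::Flags flags);

    // Timing mode: only initiate the access; the data arrives later in a
    // response packet, so no data pointer is needed here.
    Fault initiateMemRead(Addr addr, unsigned size, Request::Flags flags);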
The readMemAtomic/writeMemAtomic helper functions were calling
readMemTiming/writeMemTiming respectively. This is functionally
correct, since the *Timing functions are doing the same access
initiation operation as the *Atomic functions (just that the
*Atomic versions also complete the access in line). It also
provides for some (very minimal) code reuse. Unfortunately,
it's potentially pretty confusing, since it makes it look like
the atomic accesses are somehow being converted to timing
accesses. It also gets in the way of specializing the timing
interface (as will be done in a future patch).
By ignoring SIGTRAP, using --debug-break <N> when not connected to
a debugger becomes a no-op. Apparently this was intended to be a
feature, though the rationale is not clear.
If we don't ignore SIGTRAP, then using --debug-break <N> when not
connected to a debugger causes the simulation process to terminate
at tick N. This is occasionally useful, e.g., if you just want to
collect a trace for a specific window of execution then you can combine
this with --debug-start to do exactly that.
In addition to not ignoring the signal, this patch also updates
the --debug-break help message and deletes a handful of unprotected
calls to Debug::breakpoint() that relied on the prior behavior.
Currently, the wire format of register values in g- and G-packets is
modelled using a union of uint8/16/32/64 arrays. The offset positions
of each register are expressed as a "register count" scaled according
to the width of the register in question. This results in counter-
intuitive and error-prone "register count arithmetic", and some
formats would even be altogether unrepresentable in such model, e.g.
a 64-bit register following a 32-bit one would have a fractional index
in the regs64 array.
Another difficulty is that the array is allocated before the actual
architecture of the workload is known (and therefore before the correct
size for the array can be calculated).
With this patch I propose a simpler mechanism for expressing the
register set structure. In the new code, GdbRegCache is an abstract
class; its subclasses contain straightforward structs reflecting the
register representation. The determination whether to use e.g. the
AArch32 vs. AArch64 register set (or SPARCv8 vs SPARCv9, etc.) is made
by polymorphically dispatching getregs() to the concrete subclass.
The subclass is not instantiated until it is needed for actual
g-/G-packet processing, when the mode is already known.
This patch is not meant to be merged in on its own, because it changes
the contract between src/base/remote_gdb.* and src/arch/*/remote_gdb.*,
so as it stands right now, it would break the other architectures.
In this patch only the base and the ARM code are provided for review;
once we agree on the structure, I will provide src/arch/*/remote_gdb.*
for the other architectures; those patches could then be merged in
together.
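A sketch of the proposed structure; GdbRegCache and getregs() come from
the description above, while the concrete subclass, its members, and the
exact signatures are made up for illustration:

    // Abstract cache: each subclass defines the architecture-specific
    // wire format of the g-/G-packet as a plain struct.
    struct GdbRegCache
    {
        virtual void getregs(ThreadContext *tc) = 0;  // fill r from tc
        virtual size_t size() const = 0;              // wire size in bytes
        virtual ~GdbRegCache() {}
    };

    struct AArch64GdbRegCache : GdbRegCache
    {
        struct {
            uint64_t x[31];    // X0-X30
            uint64_t sp, pc;   // no fractional indices: the struct mirrors
            uint32_t cpsr;     // the packet layout directly
        } r;
        void getregs(ThreadContext *tc) override;
        size_t size() const override { return sizeof(r); }
    };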
Review Request: http://reviews.gem5.org/r/3207/
Pushed by Joel Hestness <jthestness@gmail.com>
Add support for automatically discovering available platforms. The
Python-side uses functionality similar to what we use when
auto-detecting available CPU models. The machine IDs have been updated
to match the platform configurations. If there isn't a matching
machine ID, the configuration scripts default to -1 which Linux uses
for device tree only platforms.
Add support for automatically selecting a boot loader that matches the
guest system's kernel. Instead of accepting a single boot loader, the
ArmSystem class now accepts a vector of boot loaders. When
initializing a system, we now look for the first boot loader with
an architecture that matches the kernel.
This changeset makes it possible to use the same system for both
64-bit and 32-bit kernels.
As per the x86 architecture specification, matching TLB entries need to be
invalidated on a page fault. For instance, after a page fault due to inadequate
protection bits on a TLB hit, the TLB entry needs to be invalidated. This
behavior is clearly specified in the x86 architecture manuals from both AMD and
Intel. This invalidation is missing currently in gem5, due to which linux
kernel versions 3.8 and up cannot be simulated efficiently. This is exposed by
a linux optimisation in commit e4a1cc56e4d728eb87072c71c07581524e5160b1, which
removes a tlb flush on updating page table entries in x86.
Testing: Linux kernel versions 3.8 onwards were booting very slowly in FS
mode, due to repeated page faults (~300000 before the first print
statement in a bash file). Ensured that the page fault rate drops
drastically and observed a reduction in boot time from the order of hours
to minutes for Linux kernels v3.8 and v3.11.
doCpuid() has two identical warn messages about unimplemented functions.
Add the family to the log message to make them distinguishable.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>