The ISAR registers describe which features the processor supports.
Transcribe the values listed in section B5.2.5 of the ARM ARM
into the registers as read-only values.
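For illustration only (IsarBank and its member names are made up, and the
actual values are the ones transcribed from the ARM ARM), a read-only ID
register bank can simply return fixed values and drop writes:

    #include <cstdint>

    // Sketch with assumed names: the ID_ISAR contents are architectural
    // constants, so reads return the transcribed value and writes are
    // silently ignored.
    enum IsarReg { ID_ISAR0, ID_ISAR1, ID_ISAR2, ID_ISAR3, ID_ISAR4, ID_ISAR5 };

    struct IsarBank {
        // The real numbers are those listed in ARM ARM section B5.2.5.
        uint32_t value[6] = {};

        uint32_t read(IsarReg r) const { return value[r]; }
        void write(IsarReg, uint32_t) { /* read-only: ignore the write */ }
    };
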
This change speeds up booting, especially in MP cases, by not executing
udelay() on the core and instead skipping ahead by the amount of time being
delayed.
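The shape of the optimization, sketched with hypothetical names (SimCore and
its members are illustrative, not the actual hooks):

    #include <cstdint>

    // Illustrative sketch only: instead of simulating udelay()'s busy-wait
    // loop, put the core to sleep and wake it once the requested amount of
    // simulated time has passed.
    struct SimCore {
        uint64_t curTick = 0;        // current simulated tick
        uint64_t ticksPerUs = 1000;  // ticks per microsecond of simulated time
        uint64_t wakeTick = 0;       // when the core should resume
        bool asleep = false;

        // Called when the guest enters udelay(usecs).
        void skipUdelay(uint64_t usecs) {
            wakeTick = curTick + usecs * ticksPerUs;  // schedule the wakeup
            asleep = true;                            // no instructions execute
        }
    };
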
This patch prevents conditional instructions marked as IsQuiesce from
stalling the pipeline indefinitely when their condition fails. If the
instruction is not executed, the quiesceSkip pseudoinst is called instead,
which schedules a wakeup call to the fetch stage.
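Roughly, the idea looks like the following sketch (Inst, FetchStage, and the
field names are assumptions made for illustration):

    // Sketch with assumed names: a conditional quiesce whose predicate fails
    // must still schedule a wakeup for the fetch stage instead of leaving it
    // asleep forever.
    struct FetchStage {
        bool asleep = false;
        void scheduleWakeup() { asleep = false; }  // stand-in for the event
    };

    struct Inst {
        bool isQuiesce;
        bool condPassed;
    };

    void executeOrSkip(const Inst &inst, FetchStage &fetch)
    {
        if (inst.isQuiesce && !inst.condPassed) {
            // quiesceSkip path: don't sleep, just make sure fetch wakes up.
            fetch.scheduleWakeup();
            return;
        }
        if (inst.isQuiesce)
            fetch.asleep = true;   // normal quiesce: sleep until woken
    }
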
This changes the RFE macroop into 3 microops:
URa = [sp]; URb = [sp+4]; // load CPSR,PC values from stack
sp = sp + offset; // optionally auto-increment
PC = URa; CPSR = URb; // write to the PC and CPSR.
Importantly:
- writing to the PC is handled in the last microop.
- loading occurs prior to state changes.
This change fixes the problem for all the cases we actively use. If you want
to try more creative I/O device attachments (e.g. sharing an L2), this won't
work. You would need another level of caching between the I/O device and the
cache (which you actually need anyway with our current code to make sure
writes propagate). That extra level is required so the cache in between can be
marked as top level, ensuring it won't try to send ownership of a block to the
I/O device. Asserts have been added that should catch any issues.
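For illustration only (the names below are made up, not the real cache code),
the property a top-level cache provides is that it never responds with
ownership to whatever sits above it:

    // Sketch with hypothetical names: the cache directly below the I/O device
    // is marked top level, so the device only ever receives plain data, never
    // an owned copy of a block.
    struct CacheConfig {
        bool isTopLevel;   // true for the cache attached to the I/O device
    };

    enum class Response { Data, DataOwned };

    Response respondUpward(const CacheConfig &cfg, bool wantsOwnership)
    {
        if (cfg.isTopLevel)
            return Response::Data;    // never hand ownership to the I/O device
        return wantsOwnership ? Response::DataOwned : Response::Data;
    }
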
Without this change a store can be issued to the cache multiple times. If
this happens while the L1 cache is out of MSHRs (and thus blocked), the
processor never makes forward progress: each cycle it sends a single request
using the recently freed MSHR without ever completing the multipart store.
This continues forever.
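The fix amounts to remembering which parts of a split store have already been
accepted; a sketch under assumed names (trySend stands in for the cache port
and is not the real interface):

    #include <functional>

    // Illustrative sketch, not the actual LSQ code: once a half of a split
    // store has been accepted by the cache, it is never issued again, even if
    // the cache blocks before the other half goes out.
    struct SplitStoreState {
        bool firstSent = false;
        bool secondSent = false;
    };

    // trySend(half) returns false while the cache is blocked.
    bool issueSplitStore(SplitStoreState &st,
                         const std::function<bool(int)> &trySend)
    {
        if (!st.firstSent && trySend(0))
            st.firstSent = true;
        if (st.firstSent && !st.secondSent && trySend(1))
            st.secondSent = true;
        return st.firstSent && st.secondSent;   // done only when both accepted
    }
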
This causes a lot of rebuilds that could otherwise have been avoided and,
more annoyingly, a lot of unnecessary rerunning of the regressions. The
benefits of having the revision in the output haven't materialized, so this
change removes it.
None of the code in the ruby tester directory is compiled or referred to
outside of that directory. This change eliminates it. If it's needed in the
future, it can be revived from the history. In the meantime, this removes
clutter and the only use of the GEMS_ROOT scons variable.
MIPS_FS doesn't build and presumably doesn't work right now. Users might see
the MIPS_FS file in build_opts and expect it to work. To avoid confusion, this
change deletes that file.
The internet says this instruction was created by accident when an Intel CPU
failed to decode x87 instructions properly. It's been documented on a few rare
occasions and has generally worked to ensure backwards compatibility. One
source claims that the gcc toolchain is basically the only thing that emits
it, and that emulators/binary translators like qemu and bochs implement it.
We won't actually implement it here since we're hardly implementing any other
x87 instructions either. If we were to implement it, it would behave the same
as ffree but then also pop the register stack.
http://www.pagetable.com/?p=16
There may not be a formally correct spelling for the past tense of mmap, but
mmapped is the spelling Google doesn't try to autocorrect. This makes sense
because it mirrors the past tense of map->mapped and not the past tense of
cape->caped.
--HG--
rename : src/arch/alpha/mmaped_ipr.hh => src/arch/alpha/mmapped_ipr.hh
rename : src/arch/arm/mmaped_ipr.hh => src/arch/arm/mmapped_ipr.hh
rename : src/arch/mips/mmaped_ipr.hh => src/arch/mips/mmapped_ipr.hh
rename : src/arch/power/mmaped_ipr.hh => src/arch/power/mmapped_ipr.hh
rename : src/arch/sparc/mmaped_ipr.hh => src/arch/sparc/mmapped_ipr.hh
rename : src/arch/x86/mmaped_ipr.hh => src/arch/x86/mmapped_ipr.hh
At a couple of places in PerfectSwitch.cc and MessageBuffer.cc, DPRINTF()
was not given the correct number of arguments. This patch fixes those bugs.
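The class of bug, shown with printf rather than the actual DPRINTF call sites
(the identifiers below are made up for illustration):

    #include <cstdio>

    int main()
    {
        int switchId = 3, port = 1;

        // Broken: two %d specifiers but only one argument passed.
        // printf("switch %d routing message to port %d\n", switchId);

        // Fixed: every conversion specifier has a matching argument.
        printf("switch %d routing message to port %d\n", switchId, port);
        return 0;
    }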