This patch adds methods to the KvmCPU model to handle KVM exits caused by syscall
instructions and page faults. These types of exits will be encountered if
the KvmCPU is run in SE mode.
There was already a stub device at 0x80, the port traditionally used for an IO
delay. 0x80 is also the port used for POST codes sent by firmware, and that
may have prompted adding this port as a second option.
In fs.py, the io port controller was being attached to the iobus multiple
times; this should be done only once. In se.py, the option use_map, which
no longer exists, was being set.
The data size used when actually writing the base value for the segment was the
default size, but it should really set the entire value without any possible
truncation.
The far pointer should be shifted right to get the selector value, not left.
Also, when calculating the width of the offset, the wrong register was used in
one spot.
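A hedged sketch of the intended extraction (illustrative C++ only; the real code
is x86 microcode in gem5 and the names here are made up):

    #include <cstdint>

    // A far pointer packs a 16-bit segment selector above a 16- or 32-bit
    // offset, so the selector comes out with a right shift by the offset
    // width, and the offset is what remains in the low bits.
    static inline void
    splitFarPointer(uint64_t farPtr, unsigned offsetBytes,
                    uint16_t &selector, uint64_t &offset)
    {
        const unsigned offsetBits = offsetBytes * 8;        // 16 or 32
        selector = farPtr >> offsetBits;                    // shift right, not left
        offset = farPtr & ((1ULL << offsetBits) - 1);       // low bits
    }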
The getRegArrayBit function extracts a bit from a series of registers which
are treated as a single large bit array. A previous change had modified the
logic that figured out which bit to extract from ">> 5" to "% 5", which seems
wrong, especially when other, similar functions were changed to use "% 32".
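A minimal sketch of the indexing this implies (hypothetical names; the real
function walks gem5's register file): ">> 5" picks the 32-bit register holding
the bit and "% 32" picks the bit within that register.

    #include <cstdint>

    // Treat an array of 32-bit registers as one large bit array.
    static inline bool
    getBitFromRegArray(const uint32_t *regs, unsigned bit)
    {
        const unsigned regIndex = bit >> 5;   // same as bit / 32: which register
        const unsigned bitIndex = bit % 32;   // offset within that register
        return (regs[regIndex] >> bitIndex) & 1;
    }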
The value in EAX has an 8-bit field for the linear address size and one for
the physical address size when calling that function. A recent change
implemented it but returned 0xff for both of those fields. That implies that
linear and physical addresses are 255 bits wide, which is wrong. When using the
KVM CPU model this causes an error, presumably because some of those bits are
actually reserved, or because the CPU or kernel realizes 255 bits is a bad value.
This change makes both values 48.
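For reference, a hedged sketch of how the two 8-bit fields pack into EAX; the
layout follows the usual CPUID 0x80000008 convention (physical width in bits
7:0, linear width in bits 15:8), and the helper name is made up.

    #include <cstdint>

    // EAX[7:0]  = physical address width in bits
    // EAX[15:8] = linear (virtual) address width in bits
    static inline uint32_t
    packAddrSizes(uint8_t physBits, uint8_t linearBits)
    {
        return static_cast<uint32_t>(physBits) |
               (static_cast<uint32_t>(linearBits) << 8);
    }

    // Before: packAddrSizes(0xff, 0xff) claims 255-bit addresses.
    // After:  packAddrSizes(48, 48) reports 48-bit linear and physical sizes.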
This patch fixes the checkpoint restore option in the example of C++
configuration (util/cxx_config).
The fix introduces a call to config_manager->startup() (which calls startup
on all SimObjects managed by that manager) to replicate the loop of
SimObject::startup calls in src/python/m5/simulate.py::simulate guarded by
need_startup. As util/cxx_config/main.cc is a C++ analogue of
src/python/m5/simulate.py, it should make a similar set of calls.
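A rough sketch of the added call site (the surrounding calls and variable names
are paraphrased and may differ from the actual main.cc; only the
config_manager->startup() call is the point of the fix):

    // After instantiating the objects and, on a restore, loading the
    // checkpoint state, call startup() on every managed SimObject, just
    // as the need_startup loop in src/python/m5/simulate.py does.
    config_manager->instantiate();
    if (restoring_from_checkpoint)
        config_manager->loadState(checkpoint);   // assumed restore call
    else
        config_manager->initState();
    config_manager->startup();                   // the added call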
Another churn to clean up undefined behaviour, mostly ARM, but some
parts also touching the generic part of the code base.
Most of the fixes simply ensure proper initialisation. One
of the more subtle changes is the return type of the sign-extension,
which is changed to uint64_t. This is to avoid shifting negative
values (undefined behaviour) in the ISA code.
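A minimal sketch of the idea (the actual gem5 helper differs; this only
illustrates why an unsigned return type keeps the later shifts well defined):

    #include <cstdint>

    // Sign-extend the low 'width' bits of 'val'. Returning uint64_t means
    // callers that go on to shift the result shift an unsigned value;
    // left-shifting a negative signed value is undefined behaviour.
    static inline uint64_t
    signExtend(uint64_t val, unsigned width)
    {
        const uint64_t mask = (width < 64) ? ((1ULL << width) - 1) : ~0ULL;
        val &= mask;
        const uint64_t signBit = 1ULL << (width - 1);
        return (val ^ signBit) - signBit;   // wraps in unsigned arithmetic
    }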
This patch reverts changeset 9277177eccff which does not do what it
was intended to do. In essence, we go back to implementing mkutctime
much like the non-standard timegm extension.
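For context, a hedged sketch of the timegm-style approach being returned to
(temporarily pointing TZ at UTC around mktime); the real mkutctime may differ
in detail:

    #include <cstdlib>
    #include <ctime>
    #include <string>

    // Convert a broken-down UTC time to time_t the way the non-standard
    // timegm() extension does: force TZ to UTC, call mktime(), restore TZ.
    static time_t
    mkutctime(struct tm *time)
    {
        const char *tz = getenv("TZ");
        std::string old_tz = tz ? tz : "";
        bool had_tz = (tz != nullptr);

        setenv("TZ", "", 1);                  // empty TZ means UTC
        tzset();
        time_t ret = mktime(time);

        if (had_tz)
            setenv("TZ", old_tz.c_str(), 1);  // restore the old setting
        else
            unsetenv("TZ");
        tzset();
        return ret;
    }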
Mwait works as follows:
1. A cpu monitors an address of interest (monitor instruction)
2. A cpu calls mwait - this loads the cache line into that cpu's cache.
3. The cpu goes to sleep.
4. When another processor requests write permission for the line, it is
evicted from the sleeping cpu's cache. This eviction is forwarded to the
sleeping cpu, which then wakes up.
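A hedged user-level sketch of the pattern (using the _mm_monitor/_mm_mwait
intrinsics; the actual usage in gem5's test may use inline assembly, and on
real hardware these instructions can require privilege):

    #include <pmmintrin.h>   // _mm_monitor / _mm_mwait (SSE3)

    // One thread sleeps on 'flag' with monitor/mwait; another wakes it
    // simply by writing to the monitored cache line.
    void
    wait_for_flag(volatile int *flag)
    {
        while (*flag == 0) {
            _mm_monitor((const void *)flag, 0, 0); // 1. arm the monitor
            if (*flag == 0)                        // re-check to avoid a lost wakeup
                _mm_mwait(0, 0);                   // 2./3. load the line, sleep
            // 4. a write by another cpu evicts the line and wakes us up
        }
    }

    void
    set_flag(volatile int *flag)
    {
        *flag = 1;   // requesting write permission evicts the sleeper's line
    }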
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This is a simple test program for the new mwait implementation. It uses
m5threads to create two threads of execution in syscall emulation mode that
interact using the mwait instruction.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Fixes a bug where Minor drains in the midst of committing a
conditional store.
While committing a conditional store, lastCommitWasEndOfMacroop is true
(from the previous instruction) as we still haven't finished the conditional
store. If a drain occurs before the cache response, Minor would check just
lastCommitWasEndOfMacroop, which was true, and set drainState=DrainHaltFetch,
which increases the streamSeqNum. This caused the conditional store to be
squashed when the memory responded and it completed. However, to the memory
the store succeeded, while to the instruction sequence it never occurred.
In the case of an LLSC, the instruction sequence will replay the squashed
STREX, which will fail as the cache line is no longer in the LLSC (exclusive)
state. Then the instruction sequence will loop back to the LDREX, which
receives the updated (incorrect) value.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Ruby's functional accesses are not guaranteed to succeed as of now. While
this is not a problem for the protocols that are currently in the mainline
repo, it seems that coherence protocols for gpus rely on a backing store to
supply the correct data. The aim of this patch is to make this backing store
configurable, i.e. it comes into play only when a particular option,
--access-backing-store, is passed.
The backing store has been there since M5 and GEMS were integrated. The only
difference is that earlier the system used to maintain the backing store and
ruby's copy was write-only. Sometime last year, we moved to data being
supplied by ruby in SE mode simulations. And now we have patches on
the reviewboard, which remove ruby's copy of memory altogether and rely
completely on the system's memory to supply data. This patch adds back a
SimpleMemory member to RubySystem. This member is used only if the option
access-backing-store is set to true. By default, the memory is not
accessed.
This patch is the final in the series. The whole series and this patch in
particular were written with the aim of interfacing ruby's directory controller
with the memory controller in the classic memory system. This is being done
since ruby's memory controller has not been kept up to date with the changes
going on in DRAMs. Classic's memory controller is more up to date and
supports multiple different types of DRAM. This also brings classic and
ruby ever closer. The patch also changes ruby's memory controller to
expose the same interface.
This function was added when I had incorrectly arrived at the conclusion
that such a function could improve the chances of a functional read succeeding.
As was later realized, this is not possible in the current setup. While the
code using this function was dropped long ago, the function itself was not.
Hence this patch.
This patch removes the data block present in the directory entry structure
of each protocol in gem5's mainline. Firstly, this is required for moving
towards a common set of memory controllers for classic and ruby memory systems.
Secondly, the data block was being misused in several places. It was being
used to get free access to the physical memory instead of going through the
memory controller.
From now on, the directory controller will not have direct visibility into
the physical memory. The Memory Vector object now resides in the
Memory Controller class. This also means that some significant changes are
being made to the functional accesses in ruby.
In my opinion, it creates needless complications in the rest of the code.
Also, this structure hinders the move towards a common set of code for
physical memory controllers.
Both ruby and the system used to maintain memory copies. With the changes
carried out for programmed io accesses, only a single memory is required for
fs simulations. This patch sets the copy of memory that used to reside
with the system to null, so that no space is allocated, but address checks
can still be carried out. All the memory accesses now source and sink values
to the memory maintained by ruby.
As of now, the DMASequencer inherits from the RubyPort class. But the code in
the RubyPort class is heavily tailored for the CPU Sequencer. There are parts of
the code that are not required at all for the DMA sequencer. Moreover, the
next patch uses the dma sequencer for carrying out memory accesses for all the
io devices. Hence, it is better to have a leaner dma sequencer.
This changes the default ARM system to a Versatile Express-like system that supports
2GB of memory and PCI devices, and updates the default kernels/file-systems for
AArch64 ARM systems (64-bit) to support up to 32GB of memory and PCI devices. Some
platforms that are no longer supported have been pruned from the configuration files.
In addition, a set of 64-bit ARM regressions has been added to the regression system.
It is possible for the O3 CPU to consider itself drained and
later have a squashed instruction perform a writeback. This
patch re-adds tracking of in-flight instructions to prevent
falsely signaling a drained event.
IEW did not check the instQueue and memDepUnit to ensure
they were drained. This caused issues when drainSanityCheck()
did check those structures after asserting IEW was drained.
The bare-metal configuration option still configured memory with the old scheme
that no longer works. This change unifies the code so there aren't any differences.
The checker can't verify timer registers, so it should just grab the version
from the executing CPU; otherwise it could get a larger value and cause
execution to diverge.
This patch fixes a bug where a completing load or store which is also a
barrier can push a barrier into the store buffer without first checking
that there is a free slot.
The bug was not fatal but would print a warning that the store buffer
was full when inserting.