Commit graph

8596 commits

Author SHA1 Message Date
Korey Sewell
89335118a5 inorder: recognize isSerializeAfter flag
Keep track of when an instruction needs the execution behind it to be
serialized. Without this, in SE mode, instructions can execute behind a
system call exit().
2011-02-18 14:29:48 -05:00
Korey Sewell
bbffd9419d inorder: update default thread size (=1)
A lot of structures get allocated based on the MaxThreads parameter, so this is an
effort not to abuse it.
2011-02-18 14:29:44 -05:00
Korey Sewell
a278df0b95 inorder: don't overuse getLatency()
Resources don't need to call getLatency() because the latency is already a member
of the class. If there is some special case where different instructions
impose different latencies inside a resource, then we can revisit this and
add getLatency() back in.
2011-02-18 14:29:40 -05:00
Korey Sewell
37df925953 inorder: update max. resource bandwidths
Each resource has a certain number of requests it can take per cycle. Update the
numbers here to be more realistic based on the pipeline width and on whether the
resource needs to be accessed over multiple cycles.
2011-02-18 14:29:31 -05:00
Korey Sewell
91c48b1c3b inorder: cleanup in destructors
Clean up dangling pointers and other cruft in the destructors.
2011-02-18 14:29:26 -05:00
Korey Sewell
8b4b4a1ba5 inorder: fix cache/fetch unit memory leaks
---
The cache request's data needs to be deleted in clearRequest() now that requests
are being recycled.
---
The fetch unit needs to deallocate the fetch buffer blocks when they are replaced
or squashed.
2011-02-18 14:29:17 -05:00
Korey Sewell
72b5233112 inorder: remove events for zero-cycle resources
If a resource has a zero-cycle latency (e.g. a RegFile write), then don't allocate
an event for it to use.
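A minimal sketch of the idea, using illustrative stand-ins (Resource and
ResourceEvent here are not the actual InOrder classes): only resources with
a non-zero latency get an event object.

    // Sketch: zero-latency resources complete inline and never allocate an event.
    #include <iostream>
    #include <memory>

    struct ResourceEvent {                  // stand-in for the real event class
        void schedule(int delay) { std::cout << "event in " << delay << " cycles\n"; }
    };

    struct Resource {
        int latency;
        std::unique_ptr<ResourceEvent> event;   // only allocated if needed

        explicit Resource(int lat)
            : latency(lat),
              event(lat > 0 ? std::make_unique<ResourceEvent>() : nullptr) {}

        void execute() {
            if (latency == 0)
                std::cout << "zero-cycle resource: complete immediately\n";
            else
                event->schedule(latency);
        }
    };

    int main() {
        Resource regFileWrite(0);   // e.g. RegFile write: no event allocated
        Resource cacheAccess(2);
        regFileWrite.execute();
        cacheAccess.execute();
    }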
2011-02-18 14:29:02 -05:00
Korey Sewell
d5961b2b20 inorder: update pipeline interface for handling finished resource reqs
Formerly, to free up bandwidth in a resource, we could just change the pointer in that
resource, but the pipeline stages still had visibility into what happened to a resource request.
Now that we are recycling these requests (to avoid too much dynamic allocation), we can't throw
away the request too early or the pipeline stage gets bad information. Instead, mark when a request
is done with the resource altogether, and then let the pipeline stage call back to the resource
when it's time to free up the bandwidth for more instructions.
*** interface notes ***
- When an instruction completes and is done with a resource for that cycle, call done()
- When an instruction fails and is done with a resource for that cycle, call done(false)
- When an instruction completes but isn't finished with a resource, call completed()
- When an instruction fails but isn't finished with a resource, call completed(false)
* * *
inorder: TLB-miss wakeup bug fix
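A rough sketch of how a stage might drive this interface (ResourceRequest here
is a simplified stand-in, not the real InOrder class): the request is marked
finished separately from freeing its bandwidth slot, and the stage, not the
resource, triggers the release.

    #include <iostream>

    struct ResourceRequest {
        bool finished = false;   // done with the resource altogether?
        bool success  = false;

        void done(bool succeeded = true)      { finished = true;  success = succeeded; }
        void completed(bool succeeded = true) { finished = false; success = succeeded; }
        void freeSlot() { std::cout << "bandwidth slot released for reuse\n"; }
    };

    int main() {
        ResourceRequest req;
        req.completed();          // finished this cycle's work, still owns the resource
        req.done();               // now entirely done with the resource
        if (req.finished)
            req.freeSlot();       // the pipeline stage calls back to free the slot
    }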
2011-02-18 14:28:37 -05:00
Korey Sewell
d64226750e inorder: remove request map, use request vector
Take away all instances of reqMap in the code and make all references use the built-in
request vectors inside each resource. The request map was dynamically allocating
a request per instruction. The request vector just allocates N requests
during instantiation, and then the surrounding code is fixed up to reuse those N requests.
***
setRequest() and clearRequest() are the new accessors needed to define a new
request in a resource.
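A hedged sketch of the pre-allocated request vector (simplified types, not the
actual gem5 classes): N requests are built once and reused through
setRequest()/clearRequest() instead of being newed and deleted per instruction.

    #include <cassert>
    #include <vector>

    struct ResReq {
        bool valid = false;
        int  instSeqNum = -1;
    };

    class Resource {
        std::vector<ResReq> reqs;
      public:
        explicit Resource(int bandwidth) : reqs(bandwidth) {}   // allocate once

        ResReq *setRequest(int seqNum) {
            for (auto &r : reqs) {
                if (!r.valid) { r.valid = true; r.instSeqNum = seqNum; return &r; }
            }
            return nullptr;   // no bandwidth left this cycle
        }

        void clearRequest(ResReq *r) { r->valid = false; r->instSeqNum = -1; }
    };

    int main() {
        Resource execUnit(3);
        ResReq *r = execUnit.setRequest(/*seqNum=*/42);
        assert(r && r->valid);
        execUnit.clearRequest(r);     // request object is recycled, not freed
    }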
2011-02-18 14:28:30 -05:00
Korey Sewell
c883729025 inorder: add valid bit for resource requests
This will allow resource requests to be reused within a resource instead
of always being dynamically allocated.
2011-02-18 14:28:22 -05:00
Korey Sewell
ff48afcf4f inorder: remove reqRemoveList
We are moving away from creating new resource requests for every
instruction, so there is no more need to keep track of a reqRemoveList and clean it up
every tick.
2011-02-18 14:28:10 -05:00
Korey Sewell
991d0185c6 inorder: initialize res. req. vectors based on resource bandwidth
First change in an optimization that will stop InOrder from allocating new memory for every
instruction's request to a resource. This gets expensive since every instruction needs to access
~10 requests before graduation. Instead, the plan is to allocate just enough resource request
objects to satisfy each resource's bandwidth (e.g., the execution unit would need to allocate
3 resource request objects for a 1-issue pipeline, since on any given cycle it could have 2 read
requests and 1 write request) and then let the instructions contend for and reuse those allocated
requests. The end result is a smaller memory footprint for the InOrder model
and increased simulation performance.
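A small sketch of sizing such a pool from a resource's per-cycle bandwidth
(the helper name and formula are illustrative, not gem5's actual code):

    #include <iostream>
    #include <vector>

    struct ResReq { bool valid = false; };

    // e.g. a 1-issue execution unit: 2 reads + 1 write possible per cycle -> 3 requests.
    std::vector<ResReq> makeRequestPool(int issueWidth, int readsPerInst, int writesPerInst)
    {
        int bandwidth = issueWidth * (readsPerInst + writesPerInst);
        return std::vector<ResReq>(bandwidth);   // allocated once, reused forever
    }

    int main() {
        auto pool = makeRequestPool(1, 2, 1);
        std::cout << "execution unit pre-allocates " << pool.size() << " requests\n";
    }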
2011-02-18 14:27:52 -05:00
Nathan Binkert
e3d8d43b17 merge alpha system files into tree 2011-02-16 10:57:04 -05:00
Gabe Black
9836972a13 Util: Get rid of the make_release.py script.
Since we're not doing releases anymore, we don't really need this script. If
we need it in the future, we can resurrect it from the history.
2011-02-15 23:22:32 -08:00
Nathan Binkert
dfd4f6ad93 Cleanup system directory to fit into modern M5 tree 2011-02-16 00:34:02 -06:00
Nathan Binkert
99e7e5e7ef copyright: update copyright on alpha system files 2011-02-16 00:34:01 -06:00
Gabe Black
fde8b5c387 X86: Get rid of "inline" on the MicroPanic constructor in decoder.cc.
This was making certain versions of gcc omit the function from the object file,
which would break the build.
2011-02-15 15:58:16 -08:00
Gabe Black
989138970e Info: Clean up some info files.
Get rid of RELEASE_NOTES since we no longer do releases, update some of the
information in README, and update the date in LICENSE.
2011-02-14 21:36:37 -08:00
Nilay Vaish
343e94a257 Ruby: Improve PerfectSwitch's wakeup function
Currently the wakeup function for the PerfectSwitch contains three loops -

loop on number of virtual networks
  loop on number of incoming links
    loop until all messages for this (link, network) have been routed

With an 8-processor mesh network and the Hammer protocol, about 11-12% of the
total execution time was observed to have been spent in this function, which is the highest
amongst all the functions. It was found that the innermost loop is executed
about 45 times per invocation of the wakeup function, while each invocation
of the wakeup function processes just about one message.

The patch tries to do away with the redundant executions of the innermost
loop. Counters have been added for each virtual network that record the
number of messages that need to be routed for that virtual network. The
inner loops are only executed when the number of messages for that particular
virtual network > 0. This does away with almost 80% of the executions of the
innermost loop. The function now consumes about 5-6% of the total execution
time.
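A simplified sketch of the optimization (not Ruby's actual data structures):
a per-virtual-network pending-message counter lets wakeup() skip the inner
loops entirely when a virtual network has nothing to route.

    #include <iostream>
    #include <vector>

    struct PerfectSwitchSketch {
        int numVnets, numInLinks;
        std::vector<int> pendingMsgs;                 // one counter per vnet

        PerfectSwitchSketch(int vnets, int links)
            : numVnets(vnets), numInLinks(links), pendingMsgs(vnets, 0) {}

        void enqueue(int vnet) { ++pendingMsgs[vnet]; }

        void wakeup() {
            for (int vnet = 0; vnet < numVnets; ++vnet) {
                if (pendingMsgs[vnet] == 0)
                    continue;                          // skip the redundant inner loops
                for (int link = 0; link < numInLinks; ++link) {
                    // route every queued message on (link, vnet) ...
                }
                std::cout << "routed " << pendingMsgs[vnet]
                          << " msg(s) on vnet " << vnet << "\n";
                pendingMsgs[vnet] = 0;
            }
        }
    };

    int main() {
        PerfectSwitchSketch sw(3, 4);
        sw.enqueue(1);
        sw.wakeup();   // only vnet 1's links are scanned
    }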
2011-02-14 16:14:54 -06:00
Gabe Black
5ec5794456 X86: Update stats for the improved branch detection/prediction. 2011-02-13 17:46:04 -08:00
Gabe Black
77b4a37067 X86: Detect branches taking into account instruction size.
The size of the current instruction determines what the npc should be if
there's no branching.
2011-02-13 17:45:47 -08:00
Gabe Black
44306e8114 X86: Update stats now that the dest reg isn't read unnecessarily to set flags. 2011-02-13 17:45:30 -08:00
Gabe Black
bce2be525d X86: Put the result used for flags in an intermediate variable.
Using the destination register directly causes the ISA parser to treat it as a
source even if none of the original bits are used.
2011-02-13 17:45:12 -08:00
Gabe Black
b046f3feb6 X86: Update stats for the reduced register reads. 2011-02-13 17:44:32 -08:00
Gabe Black
4e1adf85f7 X86: Don't read in dest regs if all bits are replaced.
In x86, 32- and 64-bit writes to registers that appear to be 32 or 64 bits wide
overwrite all bits of the destination register. This change removes false
dependencies in these cases, where the previous value of a register doesn't need
to be read in order to write a new value. New versions of most microops are
created that have a "Big" suffix and simply overwrite their destination; the
right version to use is selected during microop allocation based on the selected
data size.

This does not change the performance of the O3 CPU model significantly, I
assume because there are other false dependencies from the condition code bits
in the flags register.
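A hedged sketch of the idea (the class names are invented, not the actual
ISA-parser output): the "Big" variant never reads the old destination, so no
false dependency is created for full-width writes, and the variant is picked
once at microop allocation time from the data size.

    #include <cstdint>
    #include <iostream>

    struct MicroOp {
        virtual uint64_t execute(uint64_t oldDest, uint64_t src) const = 0;
        virtual ~MicroOp() = default;
    };

    // Partial-width write: must read oldDest to preserve the untouched bits.
    struct MovSmall : MicroOp {
        uint64_t execute(uint64_t oldDest, uint64_t src) const override {
            return (oldDest & ~0xffffULL) | (src & 0xffffULL);
        }
    };

    // Full-width write ("Big" variant): oldDest is never read.
    struct MovBig : MicroOp {
        uint64_t execute(uint64_t, uint64_t src) const override { return src; }
    };

    // Selected during microop allocation based on the selected data size.
    const MicroOp *allocateMov(int dataSizeBytes) {
        static MovSmall small;
        static MovBig big;
        return dataSizeBytes >= 4 ? static_cast<const MicroOp *>(&big) : &small;
    }

    int main() {
        std::cout << std::hex
                  << allocateMov(2)->execute(0xffffffff, 0x1234) << "\n"   // merges
                  << allocateMov(8)->execute(0xffffffff, 0x1234) << "\n";  // overwrites
    }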
2011-02-13 17:44:24 -08:00
Gabe Black
399e095510 X86: On a bad micropc, return a microop that returns a fault that panics.
This way a bad micropc will have to get all the way to commit before killing
the simulation. This accounts for misspeculated branches.
2011-02-13 17:42:56 -08:00
Gabe Black
1aa9698fa0 X86: Define fault objects to carry debug messages.
These faults can panic/warn/warn_once, etc., instead of instructions doing
that themselves directly. That way, instructions can be speculatively
executed, and only if they're actually going to commit will their fault be
invoked and the panic, etc., happen.
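A minimal sketch of the shape of this, assuming a simplified Fault interface
(FaultSketch/WarnFault are illustrative, not gem5's actual FaultBase types):
the message rides along in the fault object and only fires when invoke() runs
at commit.

    #include <iostream>
    #include <memory>
    #include <string>

    struct FaultSketch {
        virtual void invoke() const = 0;
        virtual ~FaultSketch() = default;
    };

    struct WarnFault : FaultSketch {
        std::string msg;
        explicit WarnFault(std::string m) : msg(std::move(m)) {}
        void invoke() const override { std::cerr << "warn: " << msg << "\n"; }
    };

    using Fault = std::shared_ptr<FaultSketch>;

    // A speculative instruction just returns the fault instead of warning directly.
    Fault executeUnimplemented() {
        return std::make_shared<WarnFault>("unimplemented instruction encountered");
    }

    int main() {
        Fault f = executeUnimplemented();   // nothing printed while speculative
        bool willCommit = true;
        if (f && willCommit)
            f->invoke();                    // message fires only at commit
    }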
2011-02-13 17:42:05 -08:00
Gabe Black
5ee94f4a3d X86: Only reset npc to reflect instruction length once.
When redirecting fetch to handle branches, the npc of the current pc state
needs to be left alone. This change makes the pc state record whether or not
the npc already reflects a real value by making it keep track of the current
instruction size, or if no size has been set.
2011-02-13 17:41:10 -08:00
Gabe Black
f036fd9748 O3: Fetch from the microcode ROM when needed. 2011-02-13 17:40:07 -08:00
Ali Saidi
7c763b34c9 O3: Fix GCC 4.2.4 complaint 2011-02-13 16:51:15 -05:00
Nilay Vaish
0cede15d6c Ruby: Reorder Cache Lookup in Protocol Files
The patch changes the order in which the L1 dcache and icache are looked up when
a request comes in. Earlier, if a request came in for an instruction fetch, the
dcache was looked up before the icache, to correctly handle self-modifying
code. But in the common case, the dcache is going to report a miss and the
subsequent icache lookup is going to report a hit. Given the invariant that
caches under the same controller keep track of disjoint sets of cache blocks,
we can move the icache lookup before the dcache lookup. In case of a hit in
the icache, using our invariant, we know that the dcache would have reported
a miss. In case of a miss in the icache, we know that the icache would have
missed even if the dcache had been looked up first.
Effectively, we are doing the same thing as before, though in the common case
we expect a reduction in the number of lookups. This was empirically confirmed
for MOESI hammer: the ratio of lookups to access requests is now about 1.1 to 1.
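A sketch of the reordered lookup (hypothetical helper names; the real logic
lives in the generated SLICC protocol code). The invariant assumed is exactly
the one above: the icache and dcache under one controller hold disjoint blocks.

    #include <cstdint>
    #include <iostream>
    #include <set>

    struct L1Sketch {
        std::set<uint64_t> icache, dcache;   // disjoint sets of block addresses

        // For an instruction fetch: probe the icache first, since an icache
        // hit implies (by the invariant) the dcache would have missed anyway.
        bool fetchLookup(uint64_t addr, int &lookups) {
            lookups = 1;
            if (icache.count(addr))
                return true;                 // common case: one lookup instead of two
            ++lookups;
            return dcache.count(addr);       // self-modifying code still handled
        }
    };

    int main() {
        L1Sketch l1;
        l1.icache.insert(0x1000);
        int n = 0;
        bool hit = l1.fetchLookup(0x1000, n);
        std::cout << (hit ? "hit" : "miss") << " after " << n << " lookup(s)\n";
    }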
2011-02-12 11:41:20 -06:00
Korey Sewell
2971b8401a inorder:regress: host-inst-rate improved ~58%
There are still only a few inorder benchmarks, but for the lengthier ones (twolf and vortex)
the latest changes to instruction scheduling (how instructions figure out what they want to
do in each pipeline stage in the inorder model) were able to improve performance by a nice
amount. The latest results for the inorder model process about 100k insts/second.
(note: the 58% is over the last run on the 64-bit pool machines at UM)
2011-02-12 10:14:52 -05:00
Korey Sewell
470aa289da inorder: clean up the old way of inst. scheduling
Remove remnants of the old way of instruction scheduling, which dynamically allocated
a new resource schedule for every instruction.
2011-02-12 10:14:48 -05:00
Korey Sewell
e26aee514d inorder: utilize cached skeds in pipeline
allow the pipeline and resources to use the cached instruction schedule and resource
sked iterator
2011-02-12 10:14:45 -05:00
Korey Sewell
516b611462 inorder: define iterator for resource schedules
Resource skeds are divided into two parts: a front end (all insts) and a back end (inst-specific).
Each of those is implemented as a separate list, so this iterator wraps around
the traditional list iterator so that an instruction can walk its schedule but seamlessly
transfer from the front end to the back end when necessary.
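A rough sketch of such a two-list iterator (simplified element type and names,
not the InOrder ResourceSked itself): advancing past the front end rolls over
into the back end transparently.

    #include <iostream>
    #include <list>
    #include <string>

    struct SkedIterator {
        std::list<std::string> *frontEnd, *backEnd;
        std::list<std::string>::iterator it;
        bool inFront;

        SkedIterator(std::list<std::string> &fe, std::list<std::string> &be)
            : frontEnd(&fe), backEnd(&be), it(fe.begin()), inFront(true)
        {
            if (it == frontEnd->end()) {      // empty front end: start in the back end
                inFront = false;
                it = backEnd->begin();
            }
        }

        bool done() const { return !inFront && it == backEnd->end(); }

        const std::string &operator*() const { return *it; }

        SkedIterator &operator++() {
            ++it;
            if (inFront && it == frontEnd->end()) {   // seamless front->back transfer
                inFront = false;
                it = backEnd->begin();
            }
            return *this;
        }
    };

    int main() {
        std::list<std::string> front = {"fetch", "decode"};      // same for all insts
        std::list<std::string> back  = {"execute", "mem", "wb"};  // inst-specific
        for (SkedIterator i(front, back); !i.done(); ++i)
            std::cout << *i << "\n";
    }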
2011-02-12 10:14:43 -05:00
Korey Sewell
ec9b2ec251 inorder: stage scheduler for front/back end schedule creation
Add a stage scheduler class to replace InstStage in pipeline_traits.cc.
Use that class to define a default front-end resource schedule that all
instructions will follow. This will also replace the back-end schedule in
pipeline_traits.cc. The reason for adding this is so that we can cache
instruction schedules in the future instead of calling the same function
over and over again, as well as constantly dynamically allocating memory for
every instruction to try to figure out its schedule.
2011-02-12 10:14:40 -05:00
Korey Sewell
6713dbfe08 inorder: cache instruction schedules
First step in an optimization to not dynamically allocate an instruction schedule
for every instruction but rather use cached schedules.
2011-02-12 10:14:36 -05:00
Korey Sewell
af67631790 inorder: comments for resource sked class 2011-02-12 10:14:34 -05:00
Korey Sewell
800e93f358 inorder: remove unused file
The inst_buffer file isn't used, so remove it.
2011-02-12 10:14:32 -05:00
Korey Sewell
e65c15e931 inorder: remove unused isa ops
The pass/fail ops were used for testing but aren't part of the ISA.
2011-02-12 10:14:26 -05:00
Ali Saidi
2055df8322 Stats: Update the statistics for vnc patch. 2011-02-11 18:29:36 -06:00
Ali Saidi
d4df9e763c VNC/ARM: Use VNC server and add support to boot into X11 2011-02-11 18:29:36 -06:00
Ali Saidi
d33c1d9592 VNC: Add VNC server to M5 2011-02-11 18:29:35 -06:00
Ali Saidi
ded4d319f2 Serialization: Allow serialization of stl lists 2011-02-11 18:29:35 -06:00
Giacomo Gabrielli
a05032f4df O3: Fix pipeline restart when a table walk completes in the fetch stage.
When a table walk is initiated by the fetch stage, the CPU can
potentially move to the idle state and never wake up.

The fetch stage must call cpu->wakeCPU() when a translation completes
(in finishTranslation()).
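A minimal sketch of the fix's shape (CpuSketch/FetchSketch are illustrative
stand-ins, not the O3 classes; only the cpu->wakeCPU() call is taken from the
message above): fetch wakes the CPU when a delayed translation completes.

    #include <iostream>

    struct CpuSketch {
        bool idle = true;
        void wakeCPU() { idle = false; std::cout << "CPU woken up\n"; }
    };

    struct FetchSketch {
        CpuSketch *cpu;
        void finishTranslation(bool faulted) {
            cpu->wakeCPU();          // without this, an idle CPU never resumes
            if (!faulted)
                std::cout << "restart fetch at translated PC\n";
        }
    };

    int main() {
        CpuSketch cpu;
        FetchSketch fetch{&cpu};
        // ... table walk initiated by fetch; the CPU may have gone idle meanwhile ...
        fetch.finishTranslation(/*faulted=*/false);
    }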
2011-02-11 18:29:35 -06:00
Giacomo Gabrielli
74eff1b71b O3: Fix a few bugs in the TableWalker object.
Uncacheable requests were set as such only in atomic mode.
currState->delayed is checked in place of currState->timing for resetting
currState in atomic mode.
2011-02-11 18:29:35 -06:00
Ali Saidi
1411cb0b0f SimpleCPU: Fix a case where a DTLB fault redirects fetch and an I-side walk occurs.
This change fixes an issue where a DTLB fault occurs and redirects fetch to
handle the fault, and the ITLB requires a walk which delays translation. In this
case the status of the CPU isn't updated appropriately, and an additional
instruction fetch occurs. Eventually this hits an assert, as multiple instruction
fetches are occurring in the system, and when the second one returns the
processor is in the wrong state.

Some asserts below are removed: one was always true (a typo), and after
initiateAcc() the processor could be in any valid state when a d-side fault
occurs.
2011-02-11 18:29:35 -06:00
Giacomo Gabrielli
e2507407b1 O3: Enhance data address translation by supporting hardware page table walkers.
Some ISAs (like ARM) rely on hardware page table walkers. For those ISAs,
when a TLB miss occurs, initiateTranslation() can return with NoFault but with
the translation unfinished.

Instructions experiencing a delayed translation due to a hardware page table
walk are deferred until the translation completes and kept into the IQ.  In
order to keep track of them, the IQ has been augmented with a queue of the
outstanding delayed memory instructions.  When their translation completes,
instructions are re-executed (only their initiateAccess() was already
executed; their DTB translation is now skipped).  The IEW stage has been
modified to support such a 2-pass execution.
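A hedged sketch of the deferral flow (simplified types; the real IQ/IEW code is
far more involved, and MemInstSketch/IQSketch are invented names): instructions
whose translation is still being walked are parked in a queue inside the IQ and
re-executed when the walk completes.

    #include <deque>
    #include <iostream>
    #include <string>

    struct MemInstSketch {
        std::string name;
        bool translationDone = false;
    };

    struct IQSketch {
        std::deque<MemInstSketch *> deferredMemInsts;   // outstanding delayed accesses

        void deferMemInst(MemInstSketch *inst) { deferredMemInsts.push_back(inst); }

        // Called when a hardware page-table walk finishes for some instruction.
        MemInstSketch *getReadyDeferredInst() {
            for (auto it = deferredMemInsts.begin(); it != deferredMemInsts.end(); ++it) {
                if ((*it)->translationDone) {
                    MemInstSketch *ready = *it;
                    deferredMemInsts.erase(it);
                    return ready;            // re-executed, skipping the DTB lookup
                }
            }
            return nullptr;
        }
    };

    int main() {
        MemInstSketch load{"load r1, [r2]"};
        IQSketch iq;
        iq.deferMemInst(&load);              // initiateTranslation() returned NoFault,
                                             // but the walk is still in flight
        load.translationDone = true;         // walker signals completion
        if (auto *inst = iq.getReadyDeferredInst())
            std::cout << "re-execute: " << inst->name << "\n";
    }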
2011-02-11 18:29:35 -06:00
Ali Saidi
453dbc772d ARM: Fix timer calculations.
The timer calculations were a bit off, so time would run faster than
it otherwise should.
2011-02-11 18:29:35 -06:00
Ali Saidi
59bf0e7eb4 Timesync: Make sure the timesync event is set up after curTick is unserialized
Set up the initial timesync event in initState or loadState so that curTick has
been updated to the new value; otherwise the event is scheduled in the past.
2011-02-11 18:29:35 -06:00