6f4bd2c1da
This change is a low-level and pervasive reorganization of how PCs are managed in M5. Back when Alpha was the only ISA, there were only two PCs to worry about, the PC and the NPC, and the LSB of the PC signaled whether or not you were in PAL mode. As other ISAs were added, we had to add an NNPC, a micro PC and a next micro PC; x86 and ARM introduced variable length instruction sets; and ARM started to keep track of mode bits in the PC. Each CPU model handled PCs in its own custom way that needed to be updated individually to handle the new dimensions of variability, or, in the case of ARM's mode-bit-in-the-PC hack, the complexity could be hidden in the ISA at the ISA implementation's expense. Areas like the branch predictor hadn't been updated to handle branch delay slots or micro PCs, and it turns out that had introduced a significant (tens of percent) performance bug in SPARC and, to a lesser extent, MIPS. Rather than perpetuate the problem by reworking O3 yet again to handle the PC features needed by x86, this change reworks PC handling in a more modular, transparent, and hopefully efficient way.

PC type:
Rather than having the superset of all possible elements of PC state declared in each of the CPU models, each ISA defines its own PCState type which has exactly the elements it needs. A cross product of canned PCState classes is defined in the new "generic" ISA directory for ISAs with and without delay slots and microcode. These are either typedef-ed or subclassed by each ISA. To read or write this structure through a *Context, you use the new pcState() accessor, which reads or writes depending on whether it has an argument. If you just want the address of the current or next instruction or the current micro PC, you can get those through read-only accessors on either the PCState type or the *Contexts: instAddr(), nextInstAddr(), and microPC(). Note the move away from readPC. That name is ambiguous since it's not clear whether it should be the actual address to fetch from, or whether it should have extra bits in it like the PAL mode bit. Each class is free to define its own functions to get at whatever values it needs, however it needs to, for use in ISA-specific code. Eventually Alpha's PAL mode bit could be moved out of the PC and into a separate field like ARM's.

These types can be reset to a particular PC (where npc = pc + sizeof(MachInst), nnpc = npc + sizeof(MachInst), upc = 0, and nupc = 1, as appropriate), printed, serialized, and compared. There is a branching() function which encapsulates the code in the CPU models that checked whether an instruction branched or not. Exactly what that means in the context of branch delay slots, which can skip an instruction when not taken, is ambiguous, and ideally this function and its uses can be eliminated. PCStates also generally know how to advance themselves in various ways depending on whether they point at an instruction, a microop, or the last microop of a macroop. More on that later.

Ideally, accessing all the PCs at once when setting them will improve the performance of M5 even though more data needs to be moved around. This is because often all the PCs need to be manipulated together, and by getting them all at once you avoid multiple function calls. Also, the PCs of a particular thread will have spatial locality in the cache; previously they were grouped by element in arrays, which spread out accesses.
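To make the accessor pattern above concrete, here is a rough sketch of how CPU or ISA code is expected to use it. This is a sketch based on the description in this message, not code lifted from the patch: tc (a ThreadContext pointer) and new_target (an Addr) are stand-in names, and the exact fields a PCState carries depend on the ISA.

    // Read the whole PC state for a thread in one call.
    TheISA::PCState pc = tc->pcState();

    Addr cur_pc  = pc.instAddr();      // address of the current instruction
    Addr next_pc = pc.nextInstAddr();  // address of the next instruction
    MicroPC upc  = pc.microPC();       // current micro PC, if the ISA has one

    // Reset to a new target; npc, nnpc, upc, and nupc are filled in as
    // described above for the ISA in question (exact method name may vary
    // per PCState class).
    pc.set(new_target);

    // Write the whole PC state back through the same accessor.
    tc->pcState(pc);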
Advancing the PC:
The PCs were previously managed entirely by the CPU, which had to know about PC semantics, try to figure out which dimension to increment the PC in, what to set NPC/NNPC to, etc. These decisions are best left to the ISA in conjunction with the PC type itself. Because most of the information about how to increment the PC (mainly what type of instruction it refers to) is contained in the instruction object, a new advancePC virtual function was added to the StaticInst class. Subclasses provide an implementation that moves around the right element of the PC with a minimal amount of decision making. In ISAs like Alpha, the instructions always simply assign NPC to PC without having to worry about micro PCs, NNPCs, etc. The added cost of a virtual function call should be outweighed by not having to figure out as much about what to do with the PCs and not mucking around with the extra elements. (A sketch of this hook follows this section.)

One drawback of making the StaticInsts advance the PC is that you have to actually have one to advance the PC. This would, superficially, seem to require decoding an instruction before fetch could advance. This is, as far as I can tell, realistic. Fetch would advance through memory addresses, not PCs, perhaps predicting new memory addresses using existing ones. More sophisticated decisions about control flow would be made later on, after the instruction was decoded, and handed back to fetch. If branching needs to happen, some amount of decoding needs to happen to see that it's a branch, what the target is, etc. This could get a little more complicated if that gets done by the predecoder, but I'm choosing to ignore that for now.

Variable length instructions:
To handle variable length instructions in x86 and ARM, the predecoder now takes in the current PC by reference to the getExtMachInst function. It can modify the PC however it needs to (by setting NPC to PC + instruction length, for instance). This could be improved, since the CPU doesn't know whether the PC was modified and always has to write it back.

ISA parser:
To support the new API, all PC-related operand types were removed from the parser and replaced with a PCState type. There are two warts on this implementation. First, as with all the other operand types, the PCState still has to have a valid operand type even though it doesn't use it. Second, syntax like PCS.npc(target) doesn't work, for two reasons: it looks like the syntax for operand type overriding, and the parser can't figure out whether you're reading or writing. Instructions that use the PCS operand (which is what I've consistently called it) need to first read it into a local variable, manipulate it, and then write it back out.

Return address stack:
The return address stack needed a little extra help because, in the presence of branch delay slots, it has to merge together elements of the return PC and the call PC. To handle that, a buildRetPC utility function was added. There are basically only two versions across all the ISAs, but it didn't seem short enough to put into the generic ISA directory. Also, the branch predictor code in O3 and InOrder was adjusted so that it always stores the PC of the actual call instruction in the RAS, not the next PC. If the call instruction is a microop, the next PC refers to the next microop in the same macroop, which is probably not desirable. The buildRetPC function advances the PC intelligently to the next macroop (in an ISA-specific way) so that that case works.
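As a rough illustration of the advancePC hook described under "Advancing the PC" above (illustrative only; the real implementations are generated per ISA, and anything beyond the advancePC/pcState names follows this message's description rather than the patch itself):

    // In a StaticInst subclass for a simple, fixed-width, non-microcoded ISA,
    // advancing the PC is just sliding the PC/NPC window forward. Microcoded
    // or delay-slot ISAs override this to bump the micro PC or NNPC instead.
    void
    advancePC(TheISA::PCState &pcState) const
    {
        pcState.advance();  // e.g. pc = npc; npc += sizeof(MachInst)
    }

With this in place a CPU model no longer reasons about NPC/NNPC/micro PCs itself; after executing an instruction it can simply read the PCState, call advancePC() on it through the instruction, and write it back.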
Change in stats:
There was no change in stats except in MIPS and SPARC in the O3 model. MIPS runs in about 9% fewer ticks. SPARC runs with 30%-50% fewer ticks, which could likely be improved further by setting call/return instruction flags and taking advantage of the RAS.

TODO:
- Add != operators to the PCState classes, defined trivially to be !(a==b).
- Smooth out places where PCs are split apart, passed around, and put back together later. I think this might happen in SPARC's fault code.
- Add ISA-specific constructors that allow setting PC elements without calling a bunch of accessors.
- Try to eliminate the need for the branching() function.
- Factor out Alpha's PAL mode PC bit into a separate flag field, and eliminate places where it's blindly masked out or tested in the PC.
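The first TODO item is mechanical; a sketch of what it would look like, with "PCState" standing in for whichever concrete class gets the operator:

    bool
    operator!=(const PCState &a, const PCState &b)
    {
        return !(a == b);
    }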
749 lines, 21 KiB, C++
/*
 * Copyright (c) 2004-2006 The Regents of The University of Michigan
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met: redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer;
 * redistributions in binary form must reproduce the above copyright
 * notice, this list of conditions and the following disclaimer in the
 * documentation and/or other materials provided with the distribution;
 * neither the name of the copyright holders nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Authors: Kevin Lim
 */

#include "config/the_isa.hh"
#include "cpu/o3/decode.hh"
#include "params/DerivO3CPU.hh"

using namespace std;

template<class Impl>
DefaultDecode<Impl>::DefaultDecode(O3CPU *_cpu, DerivO3CPUParams *params)
    : cpu(_cpu),
      renameToDecodeDelay(params->renameToDecodeDelay),
      iewToDecodeDelay(params->iewToDecodeDelay),
      commitToDecodeDelay(params->commitToDecodeDelay),
      fetchToDecodeDelay(params->fetchToDecodeDelay),
      decodeWidth(params->decodeWidth),
      numThreads(params->numThreads)
{
    _status = Inactive;

    // Setup status, make sure stall signals are clear.
    for (ThreadID tid = 0; tid < numThreads; ++tid) {
        decodeStatus[tid] = Idle;

        stalls[tid].rename = false;
        stalls[tid].iew = false;
        stalls[tid].commit = false;
    }

    // @todo: Make into a parameter
    skidBufferMax = (fetchToDecodeDelay * params->fetchWidth) + decodeWidth;
}

template <class Impl>
std::string
DefaultDecode<Impl>::name() const
{
    return cpu->name() + ".decode";
}

template <class Impl>
void
DefaultDecode<Impl>::regStats()
{
    decodeIdleCycles
        .name(name() + ".DECODE:IdleCycles")
        .desc("Number of cycles decode is idle")
        .prereq(decodeIdleCycles);
    decodeBlockedCycles
        .name(name() + ".DECODE:BlockedCycles")
        .desc("Number of cycles decode is blocked")
        .prereq(decodeBlockedCycles);
    decodeRunCycles
        .name(name() + ".DECODE:RunCycles")
        .desc("Number of cycles decode is running")
        .prereq(decodeRunCycles);
    decodeUnblockCycles
        .name(name() + ".DECODE:UnblockCycles")
        .desc("Number of cycles decode is unblocking")
        .prereq(decodeUnblockCycles);
    decodeSquashCycles
        .name(name() + ".DECODE:SquashCycles")
        .desc("Number of cycles decode is squashing")
        .prereq(decodeSquashCycles);
    decodeBranchResolved
        .name(name() + ".DECODE:BranchResolved")
        .desc("Number of times decode resolved a branch")
        .prereq(decodeBranchResolved);
    decodeBranchMispred
        .name(name() + ".DECODE:BranchMispred")
        .desc("Number of times decode detected a branch misprediction")
        .prereq(decodeBranchMispred);
    decodeControlMispred
        .name(name() + ".DECODE:ControlMispred")
        .desc("Number of times decode detected an instruction incorrectly"
              " predicted as a control")
        .prereq(decodeControlMispred);
    decodeDecodedInsts
        .name(name() + ".DECODE:DecodedInsts")
        .desc("Number of instructions handled by decode")
        .prereq(decodeDecodedInsts);
    decodeSquashedInsts
        .name(name() + ".DECODE:SquashedInsts")
        .desc("Number of squashed instructions handled by decode")
        .prereq(decodeSquashedInsts);
}

template<class Impl>
void
DefaultDecode<Impl>::setTimeBuffer(TimeBuffer<TimeStruct> *tb_ptr)
{
    timeBuffer = tb_ptr;

    // Setup wire to write information back to fetch.
    toFetch = timeBuffer->getWire(0);

    // Create wires to get information from proper places in time buffer.
    fromRename = timeBuffer->getWire(-renameToDecodeDelay);
    fromIEW = timeBuffer->getWire(-iewToDecodeDelay);
    fromCommit = timeBuffer->getWire(-commitToDecodeDelay);
}

template<class Impl>
void
DefaultDecode<Impl>::setDecodeQueue(TimeBuffer<DecodeStruct> *dq_ptr)
{
    decodeQueue = dq_ptr;

    // Setup wire to write information to proper place in decode queue.
    toRename = decodeQueue->getWire(0);
}

template<class Impl>
void
DefaultDecode<Impl>::setFetchQueue(TimeBuffer<FetchStruct> *fq_ptr)
{
    fetchQueue = fq_ptr;

    // Setup wire to read information from fetch queue.
    fromFetch = fetchQueue->getWire(-fetchToDecodeDelay);
}

template<class Impl>
void
DefaultDecode<Impl>::setActiveThreads(std::list<ThreadID> *at_ptr)
{
    activeThreads = at_ptr;
}

template <class Impl>
bool
DefaultDecode<Impl>::drain()
{
    // Decode is done draining at any time.
    cpu->signalDrained();
    return true;
}

template <class Impl>
void
DefaultDecode<Impl>::takeOverFrom()
{
    _status = Inactive;

    // Be sure to reset state and clear out any old instructions.
    for (ThreadID tid = 0; tid < numThreads; ++tid) {
        decodeStatus[tid] = Idle;

        stalls[tid].rename = false;
        stalls[tid].iew = false;
        stalls[tid].commit = false;
        while (!insts[tid].empty())
            insts[tid].pop();
        while (!skidBuffer[tid].empty())
            skidBuffer[tid].pop();
        branchCount[tid] = 0;
    }
    wroteToTimeBuffer = false;
}

template<class Impl>
bool
DefaultDecode<Impl>::checkStall(ThreadID tid) const
{
    bool ret_val = false;

    if (stalls[tid].rename) {
        DPRINTF(Decode,"[tid:%i]: Stall from Rename stage detected.\n", tid);
        ret_val = true;
    } else if (stalls[tid].iew) {
        DPRINTF(Decode,"[tid:%i]: Stall from IEW stage detected.\n", tid);
        ret_val = true;
    } else if (stalls[tid].commit) {
        DPRINTF(Decode,"[tid:%i]: Stall from Commit stage detected.\n", tid);
        ret_val = true;
    }

    return ret_val;
}

template<class Impl>
inline bool
DefaultDecode<Impl>::fetchInstsValid()
{
    return fromFetch->size > 0;
}

template<class Impl>
bool
DefaultDecode<Impl>::block(ThreadID tid)
{
    DPRINTF(Decode, "[tid:%u]: Blocking.\n", tid);

    // Add the current inputs to the skid buffer so they can be
    // reprocessed when this stage unblocks.
    skidInsert(tid);

    // If the decode status is blocked or unblocking then decode has not yet
    // signalled fetch to unblock. In that case, there is no need to tell
    // fetch to block.
    if (decodeStatus[tid] != Blocked) {
        // Set the status to Blocked.
        decodeStatus[tid] = Blocked;

        if (decodeStatus[tid] != Unblocking) {
            toFetch->decodeBlock[tid] = true;
            wroteToTimeBuffer = true;
        }

        return true;
    }

    return false;
}

template<class Impl>
bool
DefaultDecode<Impl>::unblock(ThreadID tid)
{
    // Decode is done unblocking only if the skid buffer is empty.
    if (skidBuffer[tid].empty()) {
        DPRINTF(Decode, "[tid:%u]: Done unblocking.\n", tid);
        toFetch->decodeUnblock[tid] = true;
        wroteToTimeBuffer = true;

        decodeStatus[tid] = Running;
        return true;
    }

    DPRINTF(Decode, "[tid:%u]: Currently unblocking.\n", tid);

    return false;
}

template<class Impl>
void
DefaultDecode<Impl>::squash(DynInstPtr &inst, ThreadID tid)
{
    DPRINTF(Decode, "[tid:%i]: [sn:%i] Squashing due to incorrect branch "
            "prediction detected at decode.\n", tid, inst->seqNum);

    // Send back mispredict information.
    toFetch->decodeInfo[tid].branchMispredict = true;
    toFetch->decodeInfo[tid].predIncorrect = true;
    toFetch->decodeInfo[tid].squash = true;
    toFetch->decodeInfo[tid].doneSeqNum = inst->seqNum;
    toFetch->decodeInfo[tid].nextPC = inst->branchTarget();
    toFetch->decodeInfo[tid].branchTaken = inst->pcState().branching();

    InstSeqNum squash_seq_num = inst->seqNum;

    // Might have to tell fetch to unblock.
    if (decodeStatus[tid] == Blocked ||
        decodeStatus[tid] == Unblocking) {
        toFetch->decodeUnblock[tid] = 1;
    }

    // Set status to squashing.
    decodeStatus[tid] = Squashing;

    for (int i=0; i<fromFetch->size; i++) {
        if (fromFetch->insts[i]->threadNumber == tid &&
            fromFetch->insts[i]->seqNum > squash_seq_num) {
            fromFetch->insts[i]->setSquashed();
        }
    }

    // Clear the instruction list and skid buffer in case they have any
    // insts in them.
    while (!insts[tid].empty()) {
        insts[tid].pop();
    }

    while (!skidBuffer[tid].empty()) {
        skidBuffer[tid].pop();
    }

    // Squash instructions up until this one
    cpu->removeInstsUntil(squash_seq_num, tid);
}

template<class Impl>
unsigned
DefaultDecode<Impl>::squash(ThreadID tid)
{
    DPRINTF(Decode, "[tid:%i]: Squashing.\n",tid);

    if (decodeStatus[tid] == Blocked ||
        decodeStatus[tid] == Unblocking) {
#if !FULL_SYSTEM
        // In syscall emulation, we can have both a block and a squash due
        // to a syscall in the same cycle. This would cause both signals to
        // be high. This shouldn't happen in full system.
        // @todo: Determine if this still happens.
        if (toFetch->decodeBlock[tid]) {
            toFetch->decodeBlock[tid] = 0;
        } else {
            toFetch->decodeUnblock[tid] = 1;
        }
#else
        toFetch->decodeUnblock[tid] = 1;
#endif
    }

    // Set status to squashing.
    decodeStatus[tid] = Squashing;

    // Go through incoming instructions from fetch and squash them.
    unsigned squash_count = 0;

    for (int i=0; i<fromFetch->size; i++) {
        if (fromFetch->insts[i]->threadNumber == tid) {
            fromFetch->insts[i]->setSquashed();
            squash_count++;
        }
    }

    // Clear the instruction list and skid buffer in case they have any
    // insts in them.
    while (!insts[tid].empty()) {
        insts[tid].pop();
    }

    while (!skidBuffer[tid].empty()) {
        skidBuffer[tid].pop();
    }

    return squash_count;
}

template<class Impl>
void
DefaultDecode<Impl>::skidInsert(ThreadID tid)
{
    DynInstPtr inst = NULL;

    while (!insts[tid].empty()) {
        inst = insts[tid].front();

        insts[tid].pop();

        assert(tid == inst->threadNumber);

        DPRINTF(Decode,"Inserting [sn:%lli] PC: %s into decode skidBuffer %i\n",
                inst->seqNum, inst->pcState(), inst->threadNumber);

        skidBuffer[tid].push(inst);
    }

    // @todo: Eventually need to enforce this by not letting a thread
    // fetch past its skidbuffer
    assert(skidBuffer[tid].size() <= skidBufferMax);
}

template<class Impl>
bool
DefaultDecode<Impl>::skidsEmpty()
{
    list<ThreadID>::iterator threads = activeThreads->begin();
    list<ThreadID>::iterator end = activeThreads->end();

    while (threads != end) {
        ThreadID tid = *threads++;
        if (!skidBuffer[tid].empty())
            return false;
    }

    return true;
}

template<class Impl>
void
DefaultDecode<Impl>::updateStatus()
{
    bool any_unblocking = false;

    list<ThreadID>::iterator threads = activeThreads->begin();
    list<ThreadID>::iterator end = activeThreads->end();

    while (threads != end) {
        ThreadID tid = *threads++;

        if (decodeStatus[tid] == Unblocking) {
            any_unblocking = true;
            break;
        }
    }

    // Decode will have activity if it's unblocking.
    if (any_unblocking) {
        if (_status == Inactive) {
            _status = Active;

            DPRINTF(Activity, "Activating stage.\n");

            cpu->activateStage(O3CPU::DecodeIdx);
        }
    } else {
        // If it's not unblocking, then decode will not have any internal
        // activity. Switch it to inactive.
        if (_status == Active) {
            _status = Inactive;
            DPRINTF(Activity, "Deactivating stage.\n");

            cpu->deactivateStage(O3CPU::DecodeIdx);
        }
    }
}

template <class Impl>
void
DefaultDecode<Impl>::sortInsts()
{
    int insts_from_fetch = fromFetch->size;
#ifdef DEBUG
    for (ThreadID tid = 0; tid < numThreads; tid++)
        assert(insts[tid].empty());
#endif
    for (int i = 0; i < insts_from_fetch; ++i) {
        insts[fromFetch->insts[i]->threadNumber].push(fromFetch->insts[i]);
    }
}

template<class Impl>
void
DefaultDecode<Impl>::readStallSignals(ThreadID tid)
{
    if (fromRename->renameBlock[tid]) {
        stalls[tid].rename = true;
    }

    if (fromRename->renameUnblock[tid]) {
        assert(stalls[tid].rename);
        stalls[tid].rename = false;
    }

    if (fromIEW->iewBlock[tid]) {
        stalls[tid].iew = true;
    }

    if (fromIEW->iewUnblock[tid]) {
        assert(stalls[tid].iew);
        stalls[tid].iew = false;
    }

    if (fromCommit->commitBlock[tid]) {
        stalls[tid].commit = true;
    }

    if (fromCommit->commitUnblock[tid]) {
        assert(stalls[tid].commit);
        stalls[tid].commit = false;
    }
}

template <class Impl>
bool
DefaultDecode<Impl>::checkSignalsAndUpdate(ThreadID tid)
{
    // Check if there's a squash signal, squash if there is.
    // Check stall signals, block if necessary.
    // If status was blocked
    //     Check if stall conditions have passed
    //         if so then go to unblocking
    // If status was Squashing
    //     check if squashing is not high. Switch to running this cycle.

    // Update the per thread stall statuses.
    readStallSignals(tid);

    // Check squash signals from commit.
    if (fromCommit->commitInfo[tid].squash) {

        DPRINTF(Decode, "[tid:%u]: Squashing instructions due to squash "
                "from commit.\n", tid);

        squash(tid);

        return true;
    }

    // Check ROB squash signals from commit.
    if (fromCommit->commitInfo[tid].robSquashing) {
        DPRINTF(Decode, "[tid:%u]: ROB is still squashing.\n", tid);

        // Continue to squash.
        decodeStatus[tid] = Squashing;

        return true;
    }

    if (checkStall(tid)) {
        return block(tid);
    }

    if (decodeStatus[tid] == Blocked) {
        DPRINTF(Decode, "[tid:%u]: Done blocking, switching to unblocking.\n",
                tid);

        decodeStatus[tid] = Unblocking;

        unblock(tid);

        return true;
    }

    if (decodeStatus[tid] == Squashing) {
        // Switch status to running if decode isn't being told to block or
        // squash this cycle.
        DPRINTF(Decode, "[tid:%u]: Done squashing, switching to running.\n",
                tid);

        decodeStatus[tid] = Running;

        return false;
    }

    // If we've reached this point, we have not gotten any signals that
    // cause decode to change its status. Decode remains the same as before.
    return false;
}

template<class Impl>
void
DefaultDecode<Impl>::tick()
{
    wroteToTimeBuffer = false;

    bool status_change = false;

    toRenameIndex = 0;

    list<ThreadID>::iterator threads = activeThreads->begin();
    list<ThreadID>::iterator end = activeThreads->end();

    sortInsts();

    //Check stall and squash signals.
    while (threads != end) {
        ThreadID tid = *threads++;

        DPRINTF(Decode,"Processing [tid:%i]\n",tid);
        status_change = checkSignalsAndUpdate(tid) || status_change;

        decode(status_change, tid);
    }

    if (status_change) {
        updateStatus();
    }

    if (wroteToTimeBuffer) {
        DPRINTF(Activity, "Activity this cycle.\n");

        cpu->activityThisCycle();
    }
}

template<class Impl>
void
DefaultDecode<Impl>::decode(bool &status_change, ThreadID tid)
{
    // If status is Running or idle,
    //     call decodeInsts()
    // If status is Unblocking,
    //     buffer any instructions coming from fetch
    //     continue trying to empty skid buffer
    //     check if stall conditions have passed

    if (decodeStatus[tid] == Blocked) {
        ++decodeBlockedCycles;
    } else if (decodeStatus[tid] == Squashing) {
        ++decodeSquashCycles;
    }

    // Decode should try to decode as many instructions as its bandwidth
    // will allow, as long as it is not currently blocked.
    if (decodeStatus[tid] == Running ||
        decodeStatus[tid] == Idle) {
        DPRINTF(Decode, "[tid:%u]: Not blocked, so attempting to run "
                "stage.\n",tid);

        decodeInsts(tid);
    } else if (decodeStatus[tid] == Unblocking) {
        // Make sure that the skid buffer has something in it if the
        // status is unblocking.
        assert(!skidsEmpty());

        // If the status was unblocking, then instructions from the skid
        // buffer were used. Remove those instructions and handle
        // the rest of unblocking.
        decodeInsts(tid);

        if (fetchInstsValid()) {
            // Add the current inputs to the skid buffer so they can be
            // reprocessed when this stage unblocks.
            skidInsert(tid);
        }

        status_change = unblock(tid) || status_change;
    }
}

template <class Impl>
void
DefaultDecode<Impl>::decodeInsts(ThreadID tid)
{
    // Instructions can come either from the skid buffer or the list of
    // instructions coming from fetch, depending on decode's status.
    int insts_available = decodeStatus[tid] == Unblocking ?
        skidBuffer[tid].size() : insts[tid].size();

    if (insts_available == 0) {
        DPRINTF(Decode, "[tid:%u] Nothing to do, breaking out"
                " early.\n",tid);
        // Should I change the status to idle?
        ++decodeIdleCycles;
        return;
    } else if (decodeStatus[tid] == Unblocking) {
        DPRINTF(Decode, "[tid:%u] Unblocking, removing insts from skid "
                "buffer.\n",tid);
        ++decodeUnblockCycles;
    } else if (decodeStatus[tid] == Running) {
        ++decodeRunCycles;
    }

    DynInstPtr inst;

    std::queue<DynInstPtr>
        &insts_to_decode = decodeStatus[tid] == Unblocking ?
        skidBuffer[tid] : insts[tid];

    DPRINTF(Decode, "[tid:%u]: Sending instruction to rename.\n",tid);

    while (insts_available > 0 && toRenameIndex < decodeWidth) {
        assert(!insts_to_decode.empty());

        inst = insts_to_decode.front();

        insts_to_decode.pop();

        DPRINTF(Decode, "[tid:%u]: Processing instruction [sn:%lli] with "
                "PC %s\n", tid, inst->seqNum, inst->pcState());

        if (inst->isSquashed()) {
            DPRINTF(Decode, "[tid:%u]: Instruction %i with PC %s is "
                    "squashed, skipping.\n",
                    tid, inst->seqNum, inst->pcState());

            ++decodeSquashedInsts;

            --insts_available;

            continue;
        }

        // Also check if instructions have no source registers. Mark
        // them as ready to issue at any time. Not sure if this check
        // should exist here or at a later stage; however it doesn't matter
        // too much for function correctness.
        if (inst->numSrcRegs() == 0) {
            inst->setCanIssue();
        }

        // This current instruction is valid, so add it into the decode
        // queue. The next instruction may not be valid, so check to
        // see if branches were predicted correctly.
        toRename->insts[toRenameIndex] = inst;

        ++(toRename->size);
        ++toRenameIndex;
        ++decodeDecodedInsts;
        --insts_available;

        // Ensure that if it was predicted as a branch, it really is a
        // branch.
        if (inst->readPredTaken() && !inst->isControl()) {
            panic("Instruction predicted as a branch!");

            ++decodeControlMispred;

            // Might want to set some sort of boolean and just do
            // a check at the end
            squash(inst, inst->threadNumber);

            break;
        }

        // Go ahead and compute any PC-relative branches.
        if (inst->isDirectCtrl() && inst->isUncondCtrl()) {
            ++decodeBranchResolved;

            if (!(inst->branchTarget() == inst->readPredTarg())) {
                ++decodeBranchMispred;

                // Might want to set some sort of boolean and just do
                // a check at the end
                squash(inst, inst->threadNumber);
                TheISA::PCState target = inst->branchTarget();

                DPRINTF(Decode, "[sn:%i]: Updating predictions: PredPC: %s\n",
                        inst->seqNum, target);
                //The micro pc after an instruction level branch should be 0
                inst->setPredTarg(target);
                break;
            }
        }
    }

    // If we didn't process all instructions, then we will need to block
    // and put all those instructions into the skid buffer.
    if (!insts_to_decode.empty()) {
        block(tid);
    }

    // Record that decode has written to the time buffer for activity
    // tracking.
    if (toRenameIndex) {
        wroteToTimeBuffer = true;
    }
}