Commit graph

38 commits

Author SHA1 Message Date
Brandon Potter
35ba103009 hsail: remove the panic guarding function directives
HSA function calls are still not supported properly with HSAIL, but
the recent AMP runtime modifications rely on being able to parse the
BRIG/HSAIL files that are extracted from the application binaries.
We need to parse the function call HSAIL definitions, but we do not
actually need to make the function calls.

The reason that this happens is that HCC appends a set of routines
to every HSAIL binary that it creates. These extra, unnecessary
routines exist in the HCC source as a file; this file is cat'd onto
everything that the compiler outputs before being assembled into the
application's binary. HCC does this because it might call these helper
functions. However, it doesn't actually appear to do so in the AMP
codes, so we just parse these functions with the HSAIL parser and
then ignore them.
2016-12-02 18:01:42 -05:00
Tony Gutierrez
14deacf86e gpu-compute: fix segfault when constructing GPUExecContext
the GPUExecContext currently stores a reference to its parent WF's
GPUISA object; however, there are some special instructions that do not
have an associated WF. when these objects are constructed they set their
WF pointer to null, which causes the GPUExecContext to segfault when
trying to dereference the WF pointer to get at the WF's GPUISA object.
here we change the GPUISA reference in the GPUExecContext class to a
pointer so that it may be set to null.
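
A minimal sketch of the shape of this fix, using hypothetical names and
none of the real classes' state:

    // a reference member can never be null, so a context constructed
    // without a parent wavefront has nothing to bind it to; a pointer
    // member may simply be left as nullptr (illustrative classes only)
    class GPUISA { };

    class Wavefront { public: GPUISA isa; };

    class GPUExecContext
    {
      public:
        // before: GPUISA &gpuISA;  -- required a valid Wavefront
        GPUISA *gpuISA;            // after: may be nullptr for special insts

        GPUExecContext(Wavefront *wf)
            : gpuISA(wf ? &wf->isa : nullptr)
        { }
    };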
2016-11-21 15:40:03 -05:00
Tony Gutierrez
a0d4019abd gpu-compute: init valid field of GpuTlbEntry in default ctor
the valid field of GpuTlbEntry is not set in the default ctor, which can
lead to strange behavior, and is also flagged by UBSAN.
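
A sketch of the kind of fix involved (field and struct contents assumed,
not the model's actual definition):

    struct GpuTlbEntry
    {
        // ... translation fields (virtual/physical page numbers, etc.) ...
        bool valid;

        // default ctor now leaves the entry explicitly invalid
        GpuTlbEntry() : valid(false) { }
    };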
2016-11-21 15:38:30 -05:00
Tony Gutierrez
74249f80df hsail,gpu-compute: fixes to appease clang++
fixes to appease clang++. tested on:

Ubuntu clang version 3.5.0-4ubuntu2~trusty2
(tags/RELEASE_350/final) (based on LLVM 3.5.0)

Ubuntu clang version 3.6.0-2ubuntu1~trusty1
(tags/RELEASE_360/final) (based on LLVM 3.6.0)

the fixes address the following five issues:

1) the exec continuations in gpu_static_inst.hh were marked
   as protected when they should be public. here we mark
   them as public.

2) the Abs instruction uses std::abs() in its execute method.
   because Abs is templated, it can also operate on U32 and U64
   types, which causes Abs::execute() to pass uint32_t and uint64_t
   values to std::abs(), respectively. this triggers a warning
   because std::abs() has no effect in this case. to remedy this
   we add a template specialization for the execute() method of Abs
   when its template parameter is U32 or U64 (see the sketch after
   this list).

3) Some protocols that utilize the code in cprintf.hh were missing
   includes of BoolVec.hh, which defines operator<< for the BoolVec
   type. This would cause issues when the generated code tried
   to pass a BoolVec type to a method in cprintf.hh that used
   operator<< on an instance of a BoolVec.

4) Surprise, clang doesn't like it when you clobber all the bits
   in a newly allocated object. I.e., this code:

   tlb = new GpuTlbEntry[size];
   std::memset(tlb, 0, sizeof(GpuTlbEntry) * size);

   Let's use std::vector to track the TLB entries in the GpuTlb now...

5) There were a few variables used only in DPRINTFs, so we mark them
   with M5_VAR_USED.
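
For issue 2, the shape of the specialization looks roughly like the
following (a standalone sketch with a simplified function in place of the
model's templated execute() method):

    #include <cmath>
    #include <cstdint>
    #include <cstdlib>

    // generic case: signed and floating-point types go through std::abs()
    template<typename T>
    T absValue(T operand) { return std::abs(operand); }

    // specializations for the unsigned types: std::abs() would be a no-op
    // here (and a clang warning), so just return the operand untouched
    template<>
    uint32_t absValue(uint32_t operand) { return operand; }

    template<>
    uint64_t absValue(uint64_t operand) { return operand; }

For issue 4, the raw array plus memset shown above can become a
std::vector<GpuTlbEntry> of the same size, which value-initializes its
entries.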
2016-10-26 22:48:45 -04:00
Tony Gutierrez
de72e36619 gpu-compute: support in-order data delivery in GM pipe
this patch adds an ordered response buffer to the GM pipeline
to ensure in-order data delivery. the buffer is implemented as
an STL ordered map, which sorts the requests in program order by
their sequence ID. when requests return to the GM pipeline
they are marked as done. only the oldest request may be serviced
from the ordered buffer, and only if it is marked as done.

the FIFO response buffers are kept and used in OoO delivery mode.
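
A rough sketch of the ordered-buffer idea (names and types are
illustrative, not the model's actual interface):

    #include <cstdint>
    #include <map>
    #include <utility>

    struct GPUDynInstStub { /* payload of a returning memory request */ };

    class OrderedResponseBuffer
    {
        // keyed by sequence ID, so iteration order matches program order;
        // the bool records whether the request has returned (is "done")
        std::map<uint64_t, std::pair<GPUDynInstStub, bool>> buffer;

      public:
        void insert(uint64_t seqId, const GPUDynInstStub &inst)
        { buffer.emplace(seqId, std::make_pair(inst, false)); }

        void markDone(uint64_t seqId) { buffer.at(seqId).second = true; }

        // only the oldest entry may be serviced, and only once it is done
        bool serviceOldest(GPUDynInstStub &out)
        {
            if (buffer.empty() || !buffer.begin()->second.second)
                return false;
            out = buffer.begin()->second.first;
            buffer.erase(buffer.begin());
            return true;
        }
    };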
2016-10-26 22:48:28 -04:00
Tony Gutierrez
b63eb1302b gpu-compute, hsail: pass GPUDynInstPtr to getRegisterIndex()
for HSAIL an operand's indices into the register files may be calculated
trivially, because an operand is always either read from a register file
or an immediate.

for machine ISA, however, an op selector may specify special registers, or
may specify special SGPRs with an alias op selector value. the locations of
some of the special register values depend on the size of the RF
in some cases. here we add a way for the underlying getRegisterIndex()
method to know about the size of the RFs, so that it may find the relative
positions of the special register values.
2016-10-26 22:47:49 -04:00
Tony Gutierrez
aa7364276f gpu-compute: use System cache line size in the GPU 2016-10-26 22:47:47 -04:00
Tony Gutierrez
844fb845a5 gpu-compute, hsail: make the PC a byte address, not an instruction index
currently the PC is incremented at an instruction granularity, not as an
instruction byte address. machine ISA instructions assume the PC is a byte
address and increment it accordingly. here we make the GPU model and the
HSAIL instructions treat the PC as a byte address as well.
2016-10-26 22:47:43 -04:00
Tony Gutierrez
d327cdba07 gpu-compute: add gpu_isa.hh to switch hdrs, add GPUISA to WF
the GPUISA class is meant to encapsulate any ISA-specific behavior - special
register accesses, ISA-specific WF/kernel state, etc. - in a generic enough
way so that it may be used in ISA-agnostic code.

gpu-compute: use the GPUISA object to advance the PC

the GPU model treats the PC as a pointer to individual instruction objects -
which are stored in a contiguous array - and not a byte address to be fetched
from the real memory system. this is ok for HSAIL because all instructions
are considered by the model to be the same size.

in machine ISA, however, instructions may be 32b or 64b, and branches are
calculated by advancing the PC by the number of words (4 byte chunks) it
needs to advance in the real instruction stream. because of this there is
a mismatch between the PC we use to index into the instruction array and
the actual byte-address PC the ISA expects. here we move the PC advance
calculation to the ISA so that differences in the instruction sizes may be
accounted for in a generic way.
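
A sketch of the kind of interface this enables (hypothetical names; the
real GPUISA carries considerably more state):

    #include <cstdint>

    struct MachInstStub { uint32_t byteSize; };  // e.g. 4 or 8 bytes

    class GPUISAStub
    {
        uint64_t pc = 0;   // byte-address PC owned by the ISA object

      public:
        uint64_t readPC() const { return pc; }

        // ISA-specific advance: a fixed-size ISA can always add the same
        // amount, while a machine ISA advances by each instruction's
        // actual encoding size
        void advancePC(const MachInstStub &inst) { pc += inst.byteSize; }
    };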
2016-10-26 22:47:38 -04:00
Tony Gutierrez
98d8a7051d gpu-compute: add instruction mix stats for the gpu 2016-10-26 22:47:30 -04:00
Tony Gutierrez
c7a79c9a42 gpu-compute, hsail: call discardFetch() from the WF
because every taken branch causes fetch to be discarded, we move the call
to the WF to avoid having to call it from each and every branch instruction
type.
2016-10-26 22:47:27 -04:00
Tony Gutierrez
00a6346c91 hsail, gpu-compute: remove doGm/SmReturn add completeAcc
we are removing doGmReturn from the GM pipe and adding completeAcc()
implementations for the HSAIL mem ops. the behavior in doGmReturn is
dependent on HSAIL and HSAIL mem ops; however, the completion phase
of memory ops in machine ISA can be very different, even amongst individual
machine ISA mem ops. so we remove this functionality from the pipeline and
allow it to be implemented by the individual instructions.
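
A sketch of the direction this takes (simplified; the actual gem5 classes
and the completeAcc() signature differ):

    // instead of a pipeline-level doGmReturn() that hard-codes HSAIL
    // behavior, each memory instruction completes its own access
    class GPUStaticInstStub
    {
      public:
        virtual ~GPUStaticInstStub() { }

        // called by the pipeline when the memory response returns; each
        // instruction supplies its own completion behavior
        virtual void completeAcc() { }
    };

    class HsailLoadStub : public GPUStaticInstStub
    {
      public:
        void completeAcc() override
        {
            // e.g. write the returned data back to destination registers
        }
    };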
2016-10-26 22:47:19 -04:00
Tony Gutierrez
7ac38849ab gpu-compute: remove inst enums and use bit flag for attributes
this patch removes the GPUStaticInst enums that were defined in GPU.py.
instead, a simple set of attribute flags that can be set in the base
instruction class is used. this will help unify the attributes of HSAIL
and machine ISA instructions within the model itself.

because the static instruction now carries the attributes, a GPUDynInst
must carry a pointer to a valid GPUStaticInst, so a new static kernel launch
instruction is added, which carries the attributes needed to perform
the kernel launch.
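
A sketch of the flag-based approach (flag names are illustrative only):

    #include <bitset>

    enum GPUInstAttr { IsBranch, IsLoad, IsStore, IsKernelLaunch, NumAttrs };

    class GPUStaticInstFlags
    {
        // one bit per attribute, replacing the per-ISA enums from GPU.py
        std::bitset<NumAttrs> attrs;

      public:
        void setFlag(GPUInstAttr a)       { attrs.set(a); }
        bool hasFlag(GPUInstAttr a) const { return attrs.test(a); }
    };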
2016-10-26 22:47:11 -04:00
Tony Gutierrez
e1ad8035a3 gpu-compute: move disassemble() implementation to GPUStaticInst 2016-10-26 22:47:05 -04:00
Tony Gutierrez
0a6cdff176 gpu-compute, arch: add some methods to the base inst classes for ISA support 2016-10-26 22:47:01 -04:00
Alexandru Dutu
c8cf71f1a0 gpu-compute: Added method to compute the actual workgroup size
This patch adds a method to the Wavefront class to compute the actual workgroup
size. This can be different from the maximum workgroup size specified when
launching the kernel through the NDRange object. The current solution is still
not optimal, as we compute this for each wavefront, and the dispatcher also
needs this information but can't actually call
Wavefront::computeActuallWgSz before the wavefronts are created. A long-term
solution would be to have a Workgroup class that deals with all these
details.
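
A sketch of the computation being described, assuming the usual
grid/workgroup relationship (the real method works per dimension on the
NDRange object):

    #include <algorithm>
    #include <cstdint>

    // the last workgroup along a dimension may be partial: its actual
    // size is whatever remains of the grid, capped at the maximum
    // workgroup size requested at kernel launch
    uint32_t
    actualWgSize(uint32_t gridSize, uint32_t maxWgSize, uint32_t wgIndex)
    {
        uint32_t remaining = gridSize - wgIndex * maxWgSize;
        return std::min(maxWgSize, remaining);
    }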
2016-10-04 13:03:52 -04:00
Tony Gutierrez
84f9747688 gpu-compute: fix typo in GPUDispatcher 2016-09-16 14:47:19 -04:00
Alexandru Dutu
bd65ec0744 gpu-compute: Adding context serialization methods to Wavefront
This patch adds methods to serialize the context of a particular wavefront
to the simulated system memory. Context serialization is used when a wavefront
is preempted (i.e., context switched).
2016-09-16 12:32:36 -04:00
Alexandru Dutu
e9b14d5111 gpu-compute: Refactoring Wavefront::dynWaveId 2016-09-16 12:31:46 -04:00
Alexandru Dutu
498d0e63e5 gpu-compute: Adding vector register file debug messages
This patch introduces DPRINTFs for reads from and writes to the vector
register file.
2016-09-16 12:30:05 -04:00
Alexandru Dutu
7918376450 gpu-compute: Changing reconvergenceStack type
std::stack has no iterators, so the reconvergence stack can't be
iterated without popping elements off. We will be using std::list instead to
be able to iterate over it for saving and restoring purposes.
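
A sketch of why the container changed (element type is a placeholder):

    #include <list>

    struct ReconvergenceEntry { /* PC, reconvergence PC, exec mask, ... */ };

    // std::stack exposes only push/pop/top, so entries could not be walked
    // for checkpointing; std::list still supports push_back()/pop_back()
    // as the stack operations, but can also be iterated in place
    std::list<ReconvergenceEntry> reconvergenceStack;

    void saveStack()
    {
        for (const auto &entry : reconvergenceStack) {
            // serialize entry without disturbing the stack
            (void)entry;
        }
    }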
2016-09-16 12:29:01 -04:00
Alexandru Dutu
d5c8c5d3db gpu-compute: Adding ioctl for HW context size
Adding runtime support for determining the memory required by a SIMD engine
when executing a particular wavefront.
2016-09-16 12:27:56 -04:00
Alexandru Dutu
589e13a23b gpu-compute: Wavefront refactoring
Renaming members of the Wavefront class in accordance with the style guide.
2016-09-16 12:26:52 -04:00
Alexandru Dutu
e9fe1b838b gpu-compute: Remove WFContext
The WFContext struct is currently unused, and it is no longer useful for
saving and restoring the context of a Wavefront. The Wavefront class should be
sufficient for that purpose, and the runtime can figure out the memory size
it will need to allocate for a Wavefront through an IOCTL.
2016-09-16 12:26:03 -04:00
Michael LeBeane
6a668d0c0c gpu-compute: Fix bug with return in cfg
Connecting basic blocks would stop too early in kernels where ret was not the
last instruction.  This patch allows basic blocks after the ret instruction
to be properly connected.
2016-09-13 23:11:20 -04:00
jkalamat
3724fb15fa gpu-compute: parametrize Wavefront size
Eliminated the VSZ constant that defined the Wavefront size (in number of work
items) and replaced it with a parameter in the GPU.py configuration script.
Changed all data structures dependent on the Wavefront size to be dynamically
sized. Legal values of the Wavefront size are 16, 32, and 64 for now, and are
checked at initialization time.
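
A sketch of the initialization-time check (parameter name assumed):

    #include <cassert>

    void
    checkWavefrontSize(int wfSize)
    {
        // only these sizes are supported; the data structures are sized
        // dynamically from this parameter instead of a VSZ constant
        assert(wfSize == 16 || wfSize == 32 || wfSize == 64);
    }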
2016-06-09 11:24:55 -04:00
David Guillen Fandos
70798b1ba0 stats: Fixing regStats function for some SimObjects
Fixing an issue with regStats not calling the parent class method
for most SimObjects in gem5. This causes issues if one adds new
stats in the base class (since they are never initialized properly!).
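
The pattern being fixed, in sketch form (class names are illustrative):

    struct BaseObject
    {
        virtual ~BaseObject() { }
        virtual void regStats() { /* registers the base class's stats */ }
    };

    struct DerivedObject : BaseObject
    {
        void regStats() override
        {
            // the missing call: without it the base-class stats are
            // never initialized
            BaseObject::regStats();

            // ... register DerivedObject's own stats ...
        }
    };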

Change-Id: Iebc5aa66f58816ef4295dc8e48a357558d76a77c
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2016-06-06 17:16:43 +01:00
Tuan Ta
b6d20c25c3 gpu-compute: Fixed a bug in global memory pipeline
Added a condition when inflightStores is incremented to prevent a deadlock
caused by many memory fence requests generated by a CU
2016-06-03 16:20:08 -04:00
Tony Gutierrez
7dad4377ec gpu-compute: fix bug in GPUDynInst::isScalarRegister() 2016-05-16 15:36:24 -04:00
Tony Gutierrez
bb83fa2051 gpu-compute: fix spacing in GPUDynInst ctor 2016-05-06 17:00:54 -04:00
Tony Gutierrez
4f3139e696 gpu-compute: fix uninitialized member bug in GPUDynInst
the n_reg field in the GPUDynInst is not currently set in the constructor.
if it is not set externally, assertion failures may occur
if the random value it gets is just right. here we set it to 0 by default.
2016-05-06 16:44:38 -04:00
Mitch Hayenga
c75ff71139 mem: Remove threadId from memory request class
In general, the ThreadID parameter is unnecessary in the memory system
as the ContextID is what is used for the purposes of locks/wakeups.
Since we allocate sequential ContextIDs for each thread on MT-enabled
CPUs, ThreadID is unnecessary as the CPUs can identify the requesting
thread through sideband info (SenderState / LSQ entries) or ContextID
offset from the base ContextID for a cpu.

This is a re-spin of 20264eb after the revert (bd1c6789) and includes
some fixes of that commit.
2016-04-07 09:30:20 -05:00
jkalamat
1ab75c3ee2 gpu-compute: remove unused variable from scoreboard check stage
appease clang by removing the unused private member variable,
'numGlbMemPipes', from the scoreboard check stage
2016-03-21 11:26:23 -04:00
Steve Reinhardt
f6cd7a4bb7 syscall_emul: move mmapGrowsDown() to LiveProcess
The mmapGrowsDown() method was a static method on the OperatingSystem
class (and derived classes), which worked OK for the templated syscall
emulation methods, but made it hard to access elsewhere. This patch
moves the method to be a virtual function on the LiveProcess class,
where it can be overridden for specific platforms (for now, Alpha).

This patch also changes the value of mmapGrowsDown() from being false
by default and true only on X86Linux32 to being true by default and
false only on Alpha, which seems closer to reality (though in reality
most people use ASLR and this doesn't really matter anymore).

In the process, also got rid of the unused mmap_start field on
LiveProcess and the OperatingSystem mmapGrowsUp variable.
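
A sketch of the refactor (simplified class names; the real classes carry
far more state):

    class LiveProcessStub
    {
      public:
        virtual ~LiveProcessStub() { }

        // default: the mmap region grows downward on most platforms
        virtual bool mmapGrowsDown() const { return true; }
    };

    class AlphaLiveProcessStub : public LiveProcessStub
    {
      public:
        bool mmapGrowsDown() const override { return false; }
    };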
2016-03-17 10:29:32 -07:00
Andreas Hansson
8faeec44a6 base: Fix gpu-compute output stream creation
Match changes in output stream.
2016-03-04 20:14:10 -05:00
John Kalamatianos
a28a234069 gpu: fix bugs with MemFence, Flat Instrs and Resource utilization
Memory Fence is now flagged as Global Memory only to avoid oversubscribing
resources.
Flat instructions now check whether the Shared Memory resource is busy to
avoid oversubscribing resources.
All WaitClass resources now use cycles (not ticks) to register the number
of pipe stages between Scoreboard and Execute, to be consistent with the
instruction scheduling logic, which always used clock cycles.
2016-02-18 10:42:03 -05:00
Tony Gutierrez
9a0f1be21f gpu-compute: remove brig_object.hh from hsa_object.cc
brig_object.hh is specific to the HSAIL ISA, and hence should not be
included in ISA-agnostic code.
2016-02-17 11:46:02 -05:00
Tony Gutierrez
1a7d3f9fcb gpu-compute: AMD's baseline GPU model 2016-01-19 14:28:22 -05:00