options, making existing options more visible, and dealing better
with holes in the data.
util/stats/barchart.py:
- move the options for BarChart to a base class ChartOptions so
they can be more easily set and copied.
- add an option to set the chart size (so you can adjust the aspect ratio)
- don't use add_subplot; use add_axes directly so we can affect the
size of the figure itself to make room for the legend
- make the initial bottom array floating point so we don't lose precision
- add an option to set the limits on the y axis
- use a figure legend instead of an axes legend so we can put the
legend outside of the actual chart (see the sketch after this list).
Also add an option to set the font size of the legend.
- initial hack at outputting CSV files
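A minimal sketch (matplotlib, not the BarChart class itself) of the
approach the bullets above describe: explicit figure size, add_axes to
reserve space for the legend, a floating point bottom array for the
stacking, y-axis limits, and a figure legend with a configurable font
size. All names and values here are hypothetical.

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.array([[3, 5], [2, 4], [6, 1]], dtype=float)  # stacked series
    fig = plt.figure(figsize=(8, 4))            # chart size / aspect ratio
    ax = fig.add_axes([0.1, 0.15, 0.65, 0.75])  # leave the right edge free
    bottom = np.zeros(data.shape[1])            # float, so no lost precision
    handles = []
    for row in data:
        bars = ax.bar(np.arange(len(row)), row, bottom=bottom)
        handles.append(bars[0])
        bottom += row
    ax.set_ylim(0, 15)                          # y-axis limit option
    fig.legend(handles, ['a', 'b', 'c'], loc='center right',
               prop={'size': 8})                # legend outside the axes
    fig.savefig('chart.png')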
util/stats/db.py:
don't print an error when the run is missing from the database;
just return None. The error will be printed elsewhere.
util/stats/output.py:
- make StatOutput derive from ChartOptions so that it's easier to
set default chart options.
- make the various output functions (graph, display, etc.) take the
name of the data as a parameter instead of making it a parameter to
__init__. This allows me to create the StatOutput object with
generic parameters while still being able to specialize the name
after the fact
- add support for applying graph_group and graph_bars to multiple
configuration groups. This generates and uses the cross product of
the groups (see the sketch after this list).
- flush the html file output as we go so that we can load the file
while graphs are still being generated.
- make the proxy a parameter to the graph function so the proper system's
data can be graphed
- for any groups or bars that are completely missing, remove them from
the graph. This way, if we decide not to do a set of runs, there won't
be holes in the data.
- output eps and ps by default in addition to the png.
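A minimal sketch, with hypothetical group names, of the cross-product
behavior mentioned in the list above:

    from itertools import product

    cpu_opts = ['simple', 'detailed']    # hypothetical configuration group
    cache_opts = ['1MB', '4MB']          # hypothetical configuration group
    # graph_group/graph_bars over both groups yields their cross product
    groups = ['%s.%s' % combo for combo in product(cpu_opts, cache_opts)]
    # -> ['simple.1MB', 'simple.4MB', 'detailed.1MB', 'detailed.4MB']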
util/stats/profile.py:
- clean up the data structures that are used to store the function
profile information and try our best to avoid keeping extra data
around that isn't used.
- make get() return None if a job is missing so we know it was
missing, rather than returning all zeroes.
- make the function profile categorization total up to 100% (see the
sketch after this list)
- Fix up the x-axis and y-axis labels.
- fix the dot file output.
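A minimal sketch (hypothetical names) of the two behaviors noted
above: missing jobs come back as None rather than as zeroes, and
category counts are normalized so the percentages total 100%.

    def categorize(samples):
        # samples: mapping of category -> sample count, or None when the
        # job is missing entirely (mirrors get() returning None)
        if samples is None:
            return None
        total = sum(samples.values())
        if total == 0:
            return None
        # scale so the reported percentages add up to exactly 100%
        return dict((cat, 100.0 * n / total) for cat, n in samples.items())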
util/stats/stats.py:
support the new options stuff for StatOutput
--HG--
extra : convert_revision : fae35df8c57a36257ea93bc3e0a0e617edc46bb7
on the test system.
add an option for pio_delay_write to run.py
util/stats/stats.py:
full0 -> run0 due to run.py change
sim_ticks doesn't make sense with tick = ps, so use
the numCycles parameter of one of the CPUs
--HG--
extra : convert_revision : db9dbe014549d823edc10395f5241db5e907df01
util/stats/info.py:
If an operation results in a divide by zero, just return None
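A minimal sketch of that behavior:

    def safe_div(a, b):
        # an operation that would divide by zero yields None, not an exception
        if b == 0:
            return None
        return a / b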
--HG--
extra : convert_revision : 19cb4319734a3a9cf02bb1966fed42eb0c8a8ade
for retransmissions, out-of-order packets, lost packets, duplicate
ACKs, window full, etc. An easy way to see if you have a problem
with a run.
--HG--
extra : convert_revision : 95d8e8650b0fb3d120df107cd5281c56fefc3a1d
on a timeout.
util/qdo:
Qsub needs a kill -9 to die; kill -15 doesn't cut it.
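A minimal sketch (pid is assumed to be qsub's process id):

    import os
    import signal

    def kill_qsub(pid):
        # SIGTERM (kill -15) doesn't cut it for qsub; send SIGKILL (kill -9)
        os.kill(pid, signal.SIGKILL)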
--HG--
extra : convert_revision : 7696b3ecf1a084b68dd909b138ab6aa1b380b5a7
util/pbs/pbs.py:
Change the default so that we do not get mail from PBS under any
circumstances.
util/pbs/send.py:
Add a -n flag to send.py that causes the Base directory to *not*
sync with the Link directory
--HG--
extra : convert_revision : 6e872153b6b2c34b61ec2ddbf3e5536876f4b43b
util/qdo:
Don't automatically set qsub job name, as this causes qsub to fail
if the job name is too long or otherwise unsuitable.
--HG--
extra : convert_revision : 5ba48767574efaaff2c328549adee295780f7f70
util/stats/db.py:
need to import the values function
util/stats/info.py:
it's just run
--HG--
extra : convert_revision : 3cb67d8112a1a5fdf761b73732859a71f585bd1f
util/stats/profile.py:
Pass around the limit on the number of symbols.
Deal with categorization a bit better.
--HG--
extra : convert_revision : 908410e296efd4514f2dfc0eb9e6e42834585560
util/stats/db.py:
Build a result object as the result of a query operation so it is
easier to populate and contains a bit more information than just
a big dict. Also change the next-level data into a matrix instead
of a dict of dicts (see the sketch after this entry).
Move the "get" function into the Database object. (The get function
is used by the output parsing function as the interface for accessing
backend storage, same interface for profile stuff.)
Rename the old get variable to method; it describes how the get
works (whether using sum, stdev, etc.).
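A minimal sketch (hypothetical shape, not the real db.py code) of such
a result object: it keeps the axis labels alongside a matrix of values
rather than a dict of dicts, which is easier to populate and carries
more information.

    class Result(object):
        def __init__(self, xlabels, ylabels):
            self.xlabels = list(xlabels)
            self.ylabels = list(ylabels)
            # values laid out as a matrix; None marks a missing entry
            self.data = [[None] * len(self.xlabels) for _ in self.ylabels]

        def __getitem__(self, key):
            y, x = key
            return self.data[self.ylabels.index(y)][self.xlabels.index(x)]

        def __setitem__(self, key, value):
            y, x = key
            self.data[self.ylabels.index(y)][self.xlabels.index(x)] = value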
util/stats/display.py:
Clean up the display functions, mostly formatting.
Handle values the way they should be now.
util/stats/info.py:
Totally re-work how values are accessed from their data store.
Access individual values on demand instead of calculating everything
and passing up a huge result from the bottom.
This impacts the way that proxying works; in general, everything
is now essentially a proxy for the lower-level database. Provide new
operators: unproxy, scalar, vector, value, values, total, and len,
which retrieve the proper result from the object they are called on.
Move the ProxyGroup stuff (proxies of proxies!) here from the
now-gone proxy.py file and integrate the shared parts of the code.
The ProxyGroup stuff allows you to write formulas without specifying
the statistics until evaluation time (see the sketch after this
entry).
Get rid of global variables!
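A minimal sketch (hypothetical, far simpler than the real ProxyGroup
code) of the deferred-evaluation idea: a formula built from proxies
touches no statistics until it is unproxied against a concrete source.

    class Proxy:
        def __init__(self, name):
            self.name = name
        def __truediv__(self, other):
            return Formula(self, other)
        def unproxy(self, source):
            return source[self.name]

    class Formula:
        def __init__(self, num, denom):
            self.num, self.denom = num, denom
        def unproxy(self, source):
            return self.num.unproxy(source) / self.denom.unproxy(source)

    ipc = Proxy('committedInsts') / Proxy('numCycles')  # nothing resolved yet
    print(ipc.unproxy({'committedInsts': 4e9, 'numCycles': 8e9}))  # 0.5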
util/stats/output.py:
Move the dbinfo stuff into the Database itself. Each source should
have its own get() function for accessing its data.
This get() function behaves a bit differently than before in that
it can return vectors as well; deal with these vectors and with
no-result conditions better.
util/stats/stats.py:
the info module no longer has the source global variable; just
create the database source and pass it around as necessary
--HG--
extra : convert_revision : 8e5aa228e5d3ae8068ef9c40f65b3a2f9e7c0cff
util/stats/stats.py:
Make the default jobfile Test.py in the current directory
add the -J flag to tell it not to use a jobfile
--HG--
extra : convert_revision : 5cf5bb2f32ed9c9701a94eabc9b2a538581acf94
SConscript:
Get rid of the pc_sample stuff and move to the new profiling stuff
base/traceflags.py:
DPRINTF Stack stuff
cpu/base.cc:
cpu/base.hh:
cpu/exec_context.cc:
cpu/exec_context.hh:
cpu/simple/cpu.cc:
Add profiling stuff
kern/kernel_stats.hh:
Use a smart pointer
sim/system.cc:
sim/system.hh:
Create a new symbol table that has all of the symbols for a
particular system
util/stats/categories.py:
change around the categories, add categories for function
profiling stuff
util/stats/profile.py:
New profile parsing and display code to deal with the function
profiling stuff; graph, dot, and text outputs.
--HG--
extra : convert_revision : b3de0cdc8bd468e42647966e2640ae009bda9eb8
util/pbs/job.py:
the default jobfile is now Test.py in the root of the jobs directory
util/pbs/pbs.py:
Clean up the qsub options handling and add job dependencies
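A minimal sketch (hypothetical helper) of submitting a dependent job
with PBS's -W depend=afterok option:

    import subprocess

    def qsub_after(script, prior_jobid=None):
        cmd = ['qsub']
        if prior_jobid:
            # don't start until the prior job (e.g. a checkpoint run) succeeds
            cmd += ['-W', 'depend=afterok:%s' % prior_jobid]
        cmd.append(script)
        return subprocess.check_output(cmd).strip()  # the new jobid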
util/pbs/send.py:
the default jobfile is now Test.py in the root of the jobs directory
add a flag to depend on your checkpoint
add a flag to specify your node type
create the base directory if it doesn't exist
--HG--
extra : convert_revision : dfffa4a5b0e68b2550a28fbb06b9d6a208ea1f2e
util/stats/output.py:
Create the graph directory if it doesn't exist
Don't write out a graph if all of the jobs for that graph are missing
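A minimal sketch (hypothetical names) of both points:

    import os

    def maybe_graph(graphdir, jobs, render):
        jobs = [job for job in jobs if job is not None]
        if not jobs:
            return                 # every job missing: write no graph at all
        if not os.path.isdir(graphdir):
            os.makedirs(graphdir)  # create the graph directory on demand
        render(graphdir, jobs)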
--HG--
extra : convert_revision : 7993baf1a4be33a062f86a4f09791f01eaafa43c
what it is sooner
Don't handle SIGSTOP, since you're not allowed to.
util/pbs/send.py:
write the PBS jobid here in send.py so we know what it is sooner
--HG--
extra : convert_revision : 93292d046cb4b628031e0e57e39eb4470b598ed8
home directory (/z/m5/regression), so for now any modifications
should be manually copied there as well.
Note that this script is designed to be useful for running full
regressions outside of the cron job as well.
--HG--
extra : convert_revision : 052ec5d58b5ff765d8f3a9b50849ef34d62c8d66
For this to work qdo must be on your path. I've copied it into
/usr/local/bin on zizzer.
build/SConstruct:
Add BATCH and BATCH_CMD options to support compiling/testing
on pool via qdo.
--HG--
extra : convert_revision : b7fc46465e897f7f15ed4a67f6735886917a6c4b
information that can be used for other aspects of sending jobs.
New graphing output stuff with matplotlib.
util/pbs/job.py:
Shuffle code around and create the JobDir class which encapsulates
all of the functionality needed for making, organizing, and cleaning
a job directory.
Better status output
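A minimal sketch (hypothetical methods, far simpler than the real
class) of what a JobDir encapsulates:

    import os
    import shutil

    class JobDir(object):
        def __init__(self, path):
            self.path = path
        def create(self):
            if not os.path.isdir(self.path):
                os.makedirs(self.path)
        def file(self, name):
            return os.path.join(self.path, name)  # a file inside the jobdir
        def clean(self):
            shutil.rmtree(self.path, ignore_errors=True)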
util/pbs/jobfile.py:
Major re-working of the jobfile code.
A job file now consists of several objects that describe how
jobs should be run; it includes information about checkpoints
and graphing.
util/pbs/send.py:
use the new jobfile code.
deal with the 15-character limit of PBS by truncating the name
and using the raj hack.
util/stats/db.py:
fix the __str__ function for nodes
provide __getitem__ for the Database class
util/stats/stats.py:
use the jobfile stuff to figure out what the proper naming
and organization of the graphs should be.
move all output code to output.py, get rid of ploticus and use
matplotlib
--HG--
rename : util/categories.py => util/stats/categories.py
extra : convert_revision : 0d793cbf6ad9492290e8ec875ce001c84095e1f7
Make the Link directory even more useful by working with
sub-directories.
util/pbs/job.py:
Expose JOBNAME as a separate parameter from PBS_JOBNAME. If the
former exists, it is used as the jobname for starting the job; if
it doesn't exist, PBS_JOBNAME is used. This gets around the
15-character maximum PBS job name length (see the sketch after this
entry). While we're at it, shuffle things around to hopefully make
things a bit more clear.
util/pbs/send.py:
Make the Link directory functionality more sophisticated: copy
sub-directories and links to directories. (We still don't copy
dotfiles, though.)
Add the setname() function to contact PBS and use raj's hack to
tell the webpage about longer jobnames. (It's gross; don't look.)
Truncate the PBS job name to 15 characters so that it works.
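A minimal sketch (hypothetical helper) of the naming scheme described
in this change: prefer JOBNAME, fall back to PBS_JOBNAME, and hand PBS
a name truncated to its 15-character limit.

    import os

    def pbs_job_name(default):
        # prefer JOBNAME; fall back to PBS_JOBNAME, then to a default
        name = os.environ.get('JOBNAME') or os.environ.get('PBS_JOBNAME', default)
        return name[:15]  # PBS rejects names longer than 15 characters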
--HG--
extra : convert_revision : 4a76b1a1c33721c7ca93e2fbb761f95bc3a2ac69
for accessing physical packets.
Add support for tap devices found on Linux and BSD.
--HG--
extra : convert_revision : 198b082f2e847da8471c3f22d6a55beb9f4b592e
kern/linux/sched.hh:
kern/linux/thread_info.hh:
got rid of everything but exactly what we needed
util/categories.py:
newest version from one of my repositories
--HG--
extra : convert_revision : c4328e5938d421d60493c0da07022bfa9e92c404
util/stats/stats.py:
Changed some stuff for graphing purposes:
full_cpu is now full0
frequencies are now s,m,f,q not s,6,8,q
L2 is now l2
etherdev is now etherdev0
May want to consider the fact that the NAT box should be the sum
of etherdev0 and etherdev1 (not in the script yet)
--HG--
extra : convert_revision : 39a7d0bcf1b9354a77c12de5981e8277408ba791
util/tracediff:
Fix to work with new parameter and output directory structure.
--HG--
extra : convert_revision : 421ed14fa02df7c9e95eb93f4d36b9ff046f1e39
outside of the loop so we get all of the jobs, not just the
last one.
util/pbs/send.py:
fix indent
--HG--
extra : convert_revision : eee9546b4945ff949fdfdf339fc95a23603b47d3
sim/pyconfig/SConscript:
Embed the jobfile.py script into the binary so that we don't
need to copy it into the Base directory every time.
test/genini.py:
Add the util/pbs directory to the path so we can get to
jobfile.py
Add a -I argument to add a directory to the path.
util/pbs/pbs.py:
Create a MyPOpen class. This is a lot like the popen2.Popen3 class
in the python library except that my version allows redirection of
standard in and standard out to a file instead of a pipe.
Use this popen class to execute qsub or ssh qsub. This was important
for the ssh version of qsub because we need to pipe the script into
standard in of ssh so that the script can get to the qsub command.
(Otherwise we have a problem discovering the path.)
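A minimal sketch of the same trick using the modern subprocess module
in place of a hand-rolled MyPOpen: redirect a file into ssh's standard
in so the remote qsub reads the script without needing to know our
local paths.

    import subprocess

    def ssh_qsub(host, script_path):
        with open(script_path, 'rb') as script:
            # the script flows through ssh's stdin straight into qsub
            proc = subprocess.Popen(['ssh', host, 'qsub'], stdin=script)
            return proc.wait()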
util/pbs/send.py:
Tweak the script so it figures out paths in NFS correctly.
Use the new system for running qsub.
--HG--
extra : convert_revision : 1289915ba99cec6fd464b71215c32d2197ff2824