break it up into reading one page at a time. Also, avoid re-aggregating a checkpoint that's
already done.
--HG--
rename : util/checkpoint-aggregator.py => util/checkpoint_aggregator.py
add -n/--no-exec which doesn't execute scons, but just prints the command line
add -j0 which tries to calculate how many cpus you have (see the sketch below)
add -D/--build-dir to specify a build directory other than ./build
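As a rough illustration of the -j0 behavior above, a minimal Python sketch of guessing the CPU count; the sysconf key and the fallback value are assumptions, not the wrapper's actual code:

    import os

    def guess_cpu_count():
        # Best-effort CPU count for -j0; fall back to 1 if detection fails.
        try:
            n = os.sysconf('SC_NPROCESSORS_ONLN')  # available on Linux/most POSIX
            if isinstance(n, int) and n > 0:
                return n
        except (AttributeError, ValueError, OSError):
            pass
        return 1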
This patch includes the necessary regression updates to test the new ruby
configuration system. The patch includes support for multiple ruby protocols
and adds the ruby random tester. The patch removes the atomic mode test for
ruby since ruby does not support atomic mode accesses. These tests can be
added back in when ruby supports atomic mode for real.
--HG--
rename : tests/quick/50.memtest/test.py => tests/quick/60.rubytest/test.py
1) Move alpha-specific code out of page_table.cc:serialize().
2) Begin serializing M5_pid and unserializing it, adding a function to do an optional paramIn so that old checkpoints don't need to be fixed up.
3) Fix up alpha startup code so that the unserialized M5_pid value is properly written to DTB_IPR_ASN.
4) Fix the memory unserialize that I forgot somehow in the last changeset.
5) Add in an agg_se.py to handle aggregated checkpoints. --bench foo-bar plus positional arguments foo bar are the only changes in usage from se.py.
Note this aggregation stuff has only been tested for Alpha and nothing else, though it should take a very minimal amount of work to get it to work with another ISA.
This is simply a translation of the C++ slicc into python with very minimal
reorganization of the code. The output can be verified as nearly identical
by doing a "diff -wBur".
Slicc can easily be run manually by using util/slicc
The early call to child->step() was removed earlier because it confused the
new differences-only protocol that ARM's sendState() was using. It's necessary
that it gets called at least once before attempting to print the initial stack
frame, though, because otherwise statetrace doesn't know what the stack
pointer is. By putting the first call to child->step() in a common spot, both
needs are met.
- insert warnings for deprecated m5ops
- reserve opcodes for Ali's stuff
- remove code for stuff that has been deprecated forever
- simplify m5op_alpha
they're all in the same place. This also involves having just one
jobfile.py and moving it into the utils directory to avoid
duplication. Lots of improvements to the utility as well.
--HG--
rename : src/python/m5/attrdict.py => src/python/m5/util/attrdict.py
rename : util/pbs/jobfile.py => src/python/m5/util/jobfile.py
rename : src/python/m5/util.py => src/python/m5/util/misc.py
rename : src/python/m5/multidict.py => src/python/m5/util/multidict.py
rename : util/stats/orderdict.py => src/python/m5/util/orderdict.py
Before this fix, the style hook would blow up when you did a qrefresh to add
a new file, but executed the qrefresh from a repository subdirectory.
--HG--
extra : convert_revision : 851b0421dfa5c5b23d0f49441c4ba2e0ac579c5d
Because of peculiarities in how system calls return, single stepping executes some system calls and the instruction following them in a single step. Statetrace now patches the executable image when it detects a system call to force "correct" behavior, i.e., the appearance of stepping exactly one instruction every time.
--HG--
extra : convert_revision : ac6243a2e00ff98f827b005efd27b4dc5be4f774
The address of the stack pointer preceding the vector should be minus 16, not minus 8.
--HG--
extra : convert_revision : 648f01e9753e28391fc8d282bd9fe2bd47a0193f
creation and initialization now happen in python. Parameter objects
are generated and initialized by python. The .ini file is now solely for
debugging purposes and is not used in construction of the objects in any
way.
--HG--
extra : convert_revision : 7e722873e417cb3d696f2e34c35ff488b7bff4ed
Nag the user during compile if they have an hg cloned copy of M5, have
mercurial installed, but don't have the style hook enabled.
--HG--
extra : convert_revision : 6bcbb67f1a3fcd36db7d3ef16a9ff19680f126f2
src/dev/sparc/iob.cc:
don't warn on cpu restart/idle/halt stuff
tests/SConscript:
add sparc target in test Sconscript
util/regress:
Add SPARC_FS target in regress
--HG--
extra : convert_revision : 37fa21700ec4c350d87ca9723bc3359feb81c50a
src/arch/sparc/isa/decoder.isa:
add readfile and break to sparc decoder
src/arch/sparc/isa/operands.isa:
fix O0-O5 operand registers
util/m5/Makefile.sparc:
Make sparc makefile compile a 64bit binary
util/m5/m5.c:
readfile was in here twice, once will be sufficient I think
util/m5/m5op_sparc.S:
implement readfile and debugbreak
--HG--
extra : convert_revision : 139b3f480ee6342b37b5642e072c8486d91a3944
util/m5/Makefile.alpha:
Clean up to make it a bit easier to muck with
util/m5/Makefile.alpha:
Make the makefile more reasonable
util/m5/Makefile.alpha:
Remove authors from copyright.
util/m5/Makefile.alpha:
Updated Authors from bk prs info
util/m5/Makefile.alpha:
bk cp Makefile Makefile.alpha
src/arch/sparc/tlb.cc:
Clean up the cache code a little bit and make sure the uncacheable bit is set when appropriate
src/arch/alpha/isa/decoder.isa:
src/sim/pseudo_inst.cc:
src/sim/pseudo_inst.hh:
Rename AlphaPseudo -> PseudoInst since it's all generic
src/arch/sparc/isa/bitfields.isa:
src/arch/sparc/isa/decoder.isa:
src/arch/sparc/isa/includes.isa:
src/arch/sparc/isa/operands.isa:
Add support for pseudo instructions in sparc
util/m5/Makefile.alpha:
util/m5/Makefile.sparc:
split off alpha make file and sparc make file for m5 app
util/m5/m5.c:
ivle and ivlb aren't used anymore
util/m5/m5op.h:
stdint seems like a more generic, better fit here
util/m5/m5op_alpha.S:
move the op ids into their own header file since we can share them between sparc and alpha
--HG--
rename : util/m5/Makefile => util/m5/Makefile.sparc
rename : util/m5/m5op.S => util/m5/m5op_alpha.S
extra : convert_revision : 490ba2e8b8bc6e28bfc009cedec6b686b28e7834
m5 style and fixing whitespace. For whitespace, any tabs in
leading whitespace on a line are converted to spaces, and any
trailing whitespace is removed.
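A minimal sketch of the whitespace rule described above (expand tabs in leading whitespace, strip trailing whitespace); the tab width and helper name are assumptions, not the actual style-hook code:

    TABSIZE = 8  # assumed tab stop

    def fix_white(line):
        # Expand tabs only in the leading whitespace, then drop trailing whitespace.
        body = line.lstrip(' \t')
        indent = line[:len(line) - len(body)]
        return indent.expandtabs(TABSIZE) + body.rstrip() + '\n'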
--HG--
extra : convert_revision : d0591663c028a388635fc71c6c1d31f700748cf6
which takes care of almost everything needed for putting together
a release.
--HG--
extra : convert_revision : b05d418a1002633b1286591eb8a8588ba33f5df1
Write directly to 'cscope.files' and run 'cscope -b' .
Now this script does everything automatically.
cscope-index.py:
Rename: util/cscope-find.py -> util/cscope-index.py
util/cscope-find.py:
Write directly to 'cscope.files' and run 'cscope -b' .
Now this script does everything automatically.
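Roughly what the reworked script does, sketched under the assumption of a simple os.walk over the tree; the directory and extension list are illustrative, not the script's actual settings:

    import os, subprocess

    exts = ('.c', '.cc', '.h', '.hh', '.isa')   # illustrative extension list
    with open('cscope.files', 'w') as listing:
        for dirpath, dirnames, filenames in os.walk('src'):
            for name in filenames:
                if name.endswith(exts):
                    listing.write(os.path.join(dirpath, name) + '\n')
    subprocess.call(['cscope', '-b'])           # -b: just build the cross-reference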
--HG--
rename : util/cscope-find.py => util/cscope-index.py
extra : convert_revision : cd6fa5cc0c2146f7184c9213956aff67c7cb9341
Note that command line syntax has totally changed as a result.
See comments for more details.
--HG--
extra : convert_revision : bdb6e27abd2da83c7468dfe2a95e8bf54757ac6c
util/term/term.c:
Reindent.
util/term/term.c:
Assume localhost if only port number is given on command line.
--HG--
extra : convert_revision : 768e61a56339a0795ca258cca788e9a2c20cbaae
src/cpu/exetrace.cc:
Fixed up to deal with microcode, and to make floating point register numbers correlate to the numbers used in SPARC.
util/statetrace/arch/tracechild_sparc.cc:
util/statetrace/arch/tracechild_sparc.hh:
Make floating point register numbers correlate to the numbers used in SPARC.
--HG--
extra : convert_revision : 878897292f696092453cf61d6eac2d1c407ca13b
util/statetrace/Makefile:
Makefile to build statetrace. Targets are:
statetrace: alias to build using the "native" compiler
statetrace-native: use the native compiler
statetrace-sparc: use the sparc cross compiler
I'll make this a little more fancy and capable later.
util/statetrace/arch/tracechild_i386.cc:
Implementation of i386 support
util/statetrace/arch/tracechild_i386.hh:
Declaration of i386 support
util/statetrace/arch/tracechild_sparc.cc:
Implementation of SPARC support
util/statetrace/arch/tracechild_sparc.hh:
Declaration of SPARC support
util/statetrace/printer.cc:
Implementation of the "Printer" objects which parse and output the state of the process after each instruction. There are currently two types of printers, nested ones and register ones. These are called NestingPrinter and RegPrinter respectively.
util/statetrace/printer.hh:
Declaration of "Printer" objects
util/statetrace/refcnt.hh:
This is copied from m5. I should use the one already in the tree, but I'll do that later.
util/statetrace/regstate.hh:
Interface for accessing registers.
util/statetrace/statetrace.cc:
Main file with argument parsing and the "main" function which contains the tracing loop.
util/statetrace/tracechild.cc:
Implementation of the base tracechild class.
util/statetrace/tracechild.hh:
Declaration of the base tracechild class.
util/statetrace/tracechild_arch.cc:
This file hooks in support for the appropriate architecture. Just the implementation is brought in, since the main program should ideally not have to know anything at all about an architecture other than its interface.
util/statetrace/x86.format:
An example output template for x86. A few example SPARC templates will be added later.
--HG--
extra : convert_revision : 7c8bf8230907aba42ed1e707b9ca2d6da0d4e6d4
to make it more usable by regular folks.
util/regress:
Get rid of extra stuff only needed by cron job,
to make it more usable by regular folks.
--HG--
extra : convert_revision : e113c05af5eec846db526d734cce8ff66aa95d72
arch/alpha/freebsd/system.cc:
arch/alpha/isa/decoder.isa:
arch/alpha/linux/system.cc:
arch/alpha/system.cc:
arch/alpha/tru64/system.cc:
Let symbol files be read in so that profiling can happen on the binaries as well.
python/m5/objects/System.py:
Add in symbol files.
sim/pseudo_inst.cc:
Load in a specified symbol file.
sim/pseudo_inst.hh:
Allow for symbols to be loaded.
sim/system.hh:
Support symbol file.
util/m5/m5.c:
util/m5/m5op.S:
Add support to m5 util for loading symbols (and readfile).
--HG--
extra : convert_revision : f10c1049bcd7b22b98c73052c0666b964aff222b
configs/boot/micro_memlat.rcS:
Update these scripts so they work (not sure why they broke)
configs/boot/micro_tlblat.rcS:
Update this script to use a different test.
--HG--
extra : convert_revision : 6e8692540a9fac6ae8f2d9975c70d4135354b849
src/cpu/o3/alpha_cpu.hh:
Fix #define in header.
util/rundiff:
Fix file comments to be more correct.
util/tracediff:
Update comments to be more correct.
--HG--
extra : convert_revision : a28030ce8979de3d9361191c6af23743460dc53e
into zeep.pool:/z/saidi/work/m5.nm_m5_pull
SConscript:
dram memory needs to be converted to newmem before we can use it
dev/ide_ctrl.cc:
don't need this printing in newmem
dev/ide_disk.cc:
will read stats in next commit
dev/sinic.cc:
merge sinic from head, still needs work
--HG--
extra : convert_revision : b9aabd8c7814d07d54ce6f971aad3ec349fa24e1
util/stats/barchart.py:
- there is no self.inner_axes
- don't append an empty value to self.xsubticks, otherwise
subsequent calls will get extra empty ticks
- rotate labels 30 degrees instead of 90 so it looks better
--HG--
extra : convert_revision : 1cbac6d1f92bfc6b2c1e886ad5f9d4c78a2b3820
problems in pool regressions).
util/qdo:
Bump up hardcoded NFS wait time from 45 sec to 90 sec (and
print threshold from 10 sec to 30 sec). Would be even
nicer to make these cmd-line params, but nobody would use
them anyway.
--HG--
extra : convert_revision : 1e9b3ad43a5dbf5e30758069e5a8cde3749cc1a6
This changeset removes a check that prevents quiescing when an
interrupt is pending. *** You should only call quiesce if that
isn't a problem. ***
arch/alpha/isa/decoder.isa:
sim/pseudo_inst.cc:
sim/pseudo_inst.hh:
Add quiesceNs, quiesceCycles, quiesceTime and m5panic pseudo ops.
These quiesce for a number of ns, cycles, report how long
we were quiesced for, and panic the simulator respectively.
The latter is added to the panic() function in the console and linux
kernel instead of executing an infinite loop until someone notices.
cpu/exec_context.cc:
cpu/exec_context.hh:
Add a quiesce end event to the execution context which upon
executing wakes up a CPU for quiesceCycles/quiesceNs.
util/m5/Makefile:
Make the makefile more reasonable
util/m5/m5.c:
update the m5op executable to use the files from the linux tree
util/m5/m5op.S:
update m5op.S from linux tree
util/m5/m5op.h:
update m5op.h from linux tree
--HG--
rename : util/m5/m5op.s => util/m5/m5op.S
extra : convert_revision : 3be18525e811405b112e33f24a8c4e772d15462d
util/stats/barchart.py:
clean up some of lisa's messy code
remove trailing whitespace while I'm at it.
--HG--
extra : convert_revision : f2fe6777fb4b458fa1d5b5b743f6274014c229ad
util/stats/chart.py:
add a bool config option for determining
if the legend is inside or outside the figure
--HG--
extra : convert_revision : e862d1832a0cc3c1837758cc247bc77c0a02ec12
util/stats/barchart.py:
Add support for error bars
util/stats/barchart.py:
add support to choose between a legend inside or
outside the figure.
--HG--
extra : convert_revision : 14273e385c106bf27a2013991f9f34ca6551b96c
util/stats/barchart.py:
If there are fewer than 5 colors, pick from a subset of
5 so there is more consistency in colors between graphs
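A toy version of the idea: draw from one fixed palette whenever five or fewer colors are needed, so the same categories get the same colors across graphs. The palette itself is made up here; the real barchart.py list may differ:

    FIXED_COLORS = ['blue', 'green', 'red', 'cyan', 'magenta']  # illustrative palette

    def bar_colors(count):
        # Five or fewer bars: reuse the same fixed subset so colors stay
        # consistent from one graph to the next.
        if count <= len(FIXED_COLORS):
            return FIXED_COLORS[:count]
        # More bars than fixed colors: cycle (the real code may do something else).
        return [FIXED_COLORS[i % len(FIXED_COLORS)] for i in range(count)]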
--HG--
extra : convert_revision : 6cf64c2f8ed81e714e24a3ebe5a7a60ca168b231
2) make subticks vertical so they can be longer
3) make inner and outer axes farther apart to make room for subtick's vertical labels
--HG--
extra : convert_revision : 91a1aab3f1078921edd53428e6712744210c9f1b
1) cosmetic - removing visibility of meta axes except for the tick labels.
2) unless subticklabels are defined, don't do meta axes (instead of assuming that a 3D graph means meta axes).
--HG--
extra : convert_revision : 396011ffaa51ea4066b34257f6fd5b3faac9d242
util/stats/barchart.py:
oops, forgot this for 1D graph cases.
util/stats/chart.py:
need to add default param to chart.
--HG--
extra : convert_revision : f4e6c6c614d584e7928ed905e97608716455ab6c
of tabs so using different editors is consistent
util/emacs/m5-c-style.el:
Default to inserting spaces instead of tabs so using different
editors is consistent
--HG--
extra : convert_revision : 719e5e980e088b0f4787b198de18cddceabd0140
without using the jobfile.
util/stats/db.py:
util/stats/profile.py:
Make it possible to send job as a string and to set the system
separately from the job.
--HG--
extra : convert_revision : 08aaebd3f9a1643bd41953b43f3b80dc97e6592f
options, making existing options more visible and dealing with
holes in data better.
util/stats/barchart.py:
- move the options for BarChart to a base class ChartOptions so
they can be more easily set and copied.
- add an option to set the chart size (so you can adjust the aspect ratio)
- don't do the add_subplot thing, use add_axes directly so we can
affect the size of the figure itself to make room for the legend
- make the initial array bottom floating point so we don't lose precision
- add an option to set the limits on the y axis
- use a figure legend instead of an axes legend so we can put the legend
outside of the actual chart. Also add an option to set the fontsize of
the legend.
- initial hack at outputting csv files
util/stats/db.py:
don't print out an error when the run is missing from the database
just return None; the error will be printed elsewhere.
util/stats/output.py:
- make StatOutput derive from ChartOptions so that it's easier to
set default chart options.
- make the various output functions (graph, display, etc.) take the
name of the data as a parameter instead of making it a parameter to
__init__. This allows me to create the StatOutput object with
generic parameters while still being able to specialize the name
after the fact.
- add support for graph_group and graph_bars to be applied to multiple
configuration groups. This results in a cross product of the groups
to be generated and used.
- flush the html file output as we go so that we can load the file
while graphs are still being generated.
- make the proxy a parameter to the graph function so the proper system's
data can be graphed
- for any groups or bars that are completely missing, remove them from
the graph. This way, if we decide not to do a set of runs, there won't
be holes in the data.
- output eps and ps by default in addition to the png.
util/stats/profile.py:
- clean up the data structures that are used to store the function
profile information and try our best to avoid keeping extra data
around that isn't used.
- make get() return None if a job is missing so we know it was
missing rather than the all zeroes thing.
- make the function profile categorization stuff total up to 100%
- Fixup the x-axis and y-axis labels.
- fix the dot file output stuff.
util/stats/stats.py:
support the new options stuff for StatOutput
--HG--
extra : convert_revision : fae35df8c57a36257ea93bc3e0a0e617edc46bb7
on the test system.
add an option for pio_delay_write to run.py
util/stats/stats.py:
full0 -> run0 due to run.py change
sim_ticks doesn't make sense with tick = ps, so use
one of the cpu's numCycles parameters
--HG--
extra : convert_revision : db9dbe014549d823edc10395f5241db5e907df01
util/stats/info.py:
If an operation results in a divide by zero, just return None
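In other words, something along these lines; the helper name is made up, only the return-None-on-zero behavior comes from the commit:

    def safe_divide(a, b):
        # A divide by zero in a formula yields None instead of raising.
        if b == 0:
            return None
        return a / b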
--HG--
extra : convert_revision : 19cb4319734a3a9cf02bb1966fed42eb0c8a8ade
for retransmissions, out of order packets, lost packets, duplicate
ack, window full, etc. Easy way to see if you have a problem with a
run.
--HG--
extra : convert_revision : 95d8e8650b0fb3d120df107cd5281c56fefc3a1d
on a timeout.
util/qdo:
Qsub needs a kill -9 to die; kill -15 doesn't cut it.
--HG--
extra : convert_revision : 7696b3ecf1a084b68dd909b138ab6aa1b380b5a7
util/pbs/pbs.py:
Change the default so that we do not get mail under any circumstances
from pbs.
util/pbs/send.py:
Add a -n flag to send.py that causes the Base directory to *not*
sync with the Link directory
--HG--
extra : convert_revision : 6e872153b6b2c34b61ec2ddbf3e5536876f4b43b
util/qdo:
Don't automatically set qsub job name, as this causes qsub to fail
if the job name is too long or otherwise unsuitable.
--HG--
extra : convert_revision : 5ba48767574efaaff2c328549adee295780f7f70
util/stats/db.py:
need to import the values function
util/stats/info.py:
it's just run
--HG--
extra : convert_revision : 3cb67d8112a1a5fdf761b73732859a71f585bd1f
util/stats/profile.py:
Pass around the number of symbols limit
deal with categorization a bit better.
--HG--
extra : convert_revision : 908410e296efd4514f2dfc0eb9e6e42834585560
util/stats/db.py:
Build a result object as the result of a query operation so it is
easier to populate and contains a bit more information than just
a big dict. Also change the next level data into a matrix instead
of a dict of dicts.
Move the "get" function into the Database object. (The get function
is used by the output parsing function as the interface for accessing
backend storage, same interface for profile stuff.)
Change the old get variable to the method variable; it describes how
the get works (whether using sum, stdev, etc.).
util/stats/display.py:
Clean up the display functions, mostly formatting.
Handle values the way they should be now.
util/stats/info.py:
Totally re-work how values are accessed from their data store.
Access individual values on demand instead of calculating everything
and passing up a huge result from the bottom.
This impacts the way that proxying works, and in general, everything
is now essentially a proxy for the lower level database. Provide new
operators: unproxy, scalar, vector, value, values, total, and len which
retrieve the proper result from the object they are called on.
Move the ProxyGroup stuff (proxies of proxies!) here from the now gone
proxy.py file and integrate the shared parts of the code. The ProxyGroup
stuff allows you to write formulas without specifying the statistics
until evaluation time.
Get rid of global variables!
util/stats/output.py:
Move the dbinfo stuff into the Database itself. Each source should
have its own get() function for accessing its data.
This get() function behaves a bit differently than before in that it
can return vectors as well; deal with these vectors and with no-result
conditions better.
util/stats/stats.py:
the info module no longer has the source global variable, just
create the database source and pass it around as necessary
--HG--
extra : convert_revision : 8e5aa228e5d3ae8068ef9c40f65b3a2f9e7c0cff
util/stats/stats.py:
Make the default jobfile Test.py in the current directory
add the -J flag to tell it not to use a jobfile
--HG--
extra : convert_revision : 5cf5bb2f32ed9c9701a94eabc9b2a538581acf94
SConscript:
Get rid of the pc_sample stuff and move to the new profiling stuff
base/traceflags.py:
DPRINTF Stack stuff
cpu/base.cc:
cpu/base.hh:
cpu/exec_context.cc:
cpu/exec_context.hh:
cpu/simple/cpu.cc:
Add profiling stuff
kern/kernel_stats.hh:
Use a smart pointer
sim/system.cc:
sim/system.hh:
Create a new symbol table that has all of the symbols for a
particular system
util/stats/categories.py:
change around the categories, add categories for function
profiling stuff
util/stats/profile.py:
New profile parsing and display code to deal with function
profiling stuff, graph, dot, and text outputs.
--HG--
extra : convert_revision : b3de0cdc8bd468e42647966e2640ae009bda9eb8
util/pbs/job.py:
the default jobfile is now Test.py in the root of the jobs directory
util/pbs/pbs.py:
Clean up the qsub options handling and add job dependencies
util/pbs/send.py:
the default jobfile is now Test.py in the root of the jobs directory
add a flag to depend on your checkpoint
add a flag to specify your node type
create the base directory if it doesn't exist
--HG--
extra : convert_revision : dfffa4a5b0e68b2550a28fbb06b9d6a208ea1f2e
util/stats/output.py:
Create the graph directory if it doesn't exist
Don't write out a graph if all of the jobs for that graph are missing
--HG--
extra : convert_revision : 7993baf1a4be33a062f86a4f09791f01eaafa43c
what it is sooner
Don't handle sigstop since you're not allowed to.
util/pbs/send.py:
write the pbs jobid here in send.py so we know what it is sooner
--HG--
extra : convert_revision : 93292d046cb4b628031e0e57e39eb4470b598ed8
home directory (/z/m5/regression), so for now any modifications
should be manually copied there as well.
Note that this script is designed to be useful for running full
regressions outside of the cron job as well.
--HG--
extra : convert_revision : 052ec5d58b5ff765d8f3a9b50849ef34d62c8d66
For this to work qdo must be on your path. I've copied it into
/usr/local/bin on zizzer.
build/SConstruct:
Add BATCH and BATCH_CMD options to support compiling/testing
on pool via qdo.
--HG--
extra : convert_revision : b7fc46465e897f7f15ed4a67f6735886917a6c4b
information that can be used for other aspects of sending jobs.
New graphing output stuff with matplotlib.
util/pbs/job.py:
Shuffle code around and create the JobDir class which encapsulates
all of the functionality needed for making, organizing, and cleaning
a job directory.
Better status output
util/pbs/jobfile.py:
Major re-working of the jobfile code.
A job file now consists of several objects that describe how
jobs should be run, it includes information about checkpoints,
and graphing.
util/pbs/send.py:
use the new jobfile code.
deal with the 15 character limit of pbs by truncating the name and
using the raj hack.
util/stats/db.py:
fix the __str__ function for nodes
provide __getitem__ for the Database class
util/stats/stats.py:
use the jobfile stuff to figure out what the proper naming
and organization of the graphs should be.
move all output code to output.py, get rid of ploticus and use
matplotlib
--HG--
rename : util/categories.py => util/stats/categories.py
extra : convert_revision : 0d793cbf6ad9492290e8ec875ce001c84095e1f7
Make the Link directory even more useful by working with
sub-directories.
util/pbs/job.py:
Expose JOBNAME as a separate parameter from PBS_JOBNAME. If the
former exists, it is used as the jobname for starting the job, if
it doesn't exist, PBS_JOBNAME is used. This is to get around the 15
character maximum pbs job name length. While we're at it, shuffle
things around to hopefully make things a bit more clear.
util/pbs/send.py:
Make the Link directory functionality more sophisticated, copy
sub-directories and links to directories. (we still don't copy
dotfiles though)
Add the setname() function to contact pbs and use raj's hack to
tell the webpage about longer jobnames. (it's gross, don't look)
truncate the pbs job name to 15 characters so that it works.
--HG--
extra : convert_revision : 4a76b1a1c33721c7ca93e2fbb761f95bc3a2ac69
for accessing physical packets.
Add support for tap devices found on linux and bsd.
--HG--
extra : convert_revision : 198b082f2e847da8471c3f22d6a55beb9f4b592e
kern/linux/sched.hh:
kern/linux/thread_info.hh:
got rid of everything but exactly what we needed
util/categories.py:
newest version from one of my repositories
--HG--
extra : convert_revision : c4328e5938d421d60493c0da07022bfa9e92c404
util/stats/stats.py:
Changed some stuff for graphing purposes:
full_cpu is now full0
frequencies are now s,m,f,q not s,6,8,q
L2 is now l2
etherdev is now etherdev0
May want to consider the fact that the NAT box should be the sum of etherdev0 and etherdev1 (not in script yet)
--HG--
extra : convert_revision : 39a7d0bcf1b9354a77c12de5981e8277408ba791
util/tracediff:
Fix to work with new parameter and output directory structure.
--HG--
extra : convert_revision : 421ed14fa02df7c9e95eb93f4d36b9ff046f1e39
outside of the loop so we get all of the jobs, not just the
last one.
util/pbs/send.py:
fix indent
--HG--
extra : convert_revision : eee9546b4945ff949fdfdf339fc95a23603b47d3
sim/pyconfig/SConscript:
Embed the jobfile.py script into the binary so that we don't
need to copy it into the Base directory every time.
test/genini.py:
Add the util/pbs directory to the path so we can get to
jobfile.py
Add a -I argument to add to the path.
util/pbs/pbs.py:
Create a MyPOpen class (a rough sketch follows this entry). This is a lot like the popen2.Popen3 class
in the python library except that my version allows redirection of
standard in and standard out to a file instead of a pipe.
Use this popen class to execute qsub or ssh qsub. This was important
for the ssh version of qsub because we need to pipe the script into
standard in of ssh so that the script can get to the qsub command.
(Otherwise we have a problem discovering the path.)
util/pbs/send.py:
Tweak the script so it figures out paths in NFS correctly.
Use the new system for running qsub.
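A rough sketch of the MyPOpen idea, written here with the subprocess module for brevity; the constructor arguments are assumptions, only the file-instead-of-pipe redirection comes from the commit:

    import subprocess

    class MyPOpen(object):
        # Run a command with stdin/stdout optionally redirected to files
        # instead of pipes, e.g. to feed a job script into 'ssh ... qsub'.
        def __init__(self, cmd, stdin_file=None, stdout_file=None):
            stdin = open(stdin_file) if stdin_file else subprocess.PIPE
            stdout = open(stdout_file, 'w') if stdout_file else subprocess.PIPE
            self.proc = subprocess.Popen(cmd, stdin=stdin, stdout=stdout)

        def wait(self):
            return self.proc.wait()

    # e.g.: MyPOpen(['ssh', 'pool', 'qsub'], stdin_file='job.sh', stdout_file='qsub.out').wait()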
--HG--
extra : convert_revision : 1289915ba99cec6fd464b71215c32d2197ff2824
util/pbs/send.py:
- add a -d to set the job root directory allowing one to run
send.py from anywhere.
- specify full paths to files instead of relative paths to make -d
work and to allow ssh qsub to work again.
- make the Link directory only copy links that point to regular files.
--HG--
extra : convert_revision : dd330cee08b97c5d72c3d58ef123f83ac7ccede7
Fix up configuration scripts to have better support for
running on the simulation pool.
--HG--
extra : convert_revision : 0178c8600b193d6c0ca69163fb735a7fa0e70782
util/tracediff:
Fix bug (used += instead of .= for string concatenation in Perl...
wrong language!).
Also updated for new config (s/Universe/root/).
--HG--
extra : convert_revision : 0db3f22794037dc51cc29f78a75bd22012a8ecd9
get rid of the alias for true to True and false to False to keep
consistent python syntax.
util/stats/info.py:
Fix typo
--HG--
extra : convert_revision : e69588a8de52424e043315e70008ca3a3ede7d5b
cleaned up stability code and wrote some better help for stats.py
fixed sample bug in info.py
dev/ns_gige.cc:
dev/ns_gige.hh:
dev/sinic.cc:
dev/sinic.hh:
add total bandwidth/packets/bytes stats
util/stats/info.py:
fixed samples bug
util/stats/stats.py:
cleaned up stability code and wrote a bit better help
--HG--
extra : convert_revision : cae06f4fac744d7a51ee0909f21f03509151ea8f
util/stats/db.py:
added working listticks (for printing) and retticks (for use in python) code
util/stats/stats.py:
added stability function that checks if all samples are within 10% of mean.
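The stability check amounts to something like the following; the function name and the zero-mean handling are assumptions, the 10% threshold comes from the commit:

    def stable(samples, tolerance=0.10):
        # True if every sample lies within 10% of the mean of the samples.
        if not samples:
            return False
        mean = float(sum(samples)) / len(samples)
        if mean == 0:
            return all(s == 0 for s in samples)
        return all(abs(s - mean) <= tolerance * abs(mean) for s in samples)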
--HG--
extra : convert_revision : 7eb1714db75e456f248fe7cae73db1c57642947d
add option to limit results to a set of ticks
fix ticks code to work
util/stats/info.py:
change samples -> ticks and pass all parameters
util/stats/stats.py:
add option to select a set of ticks and fix display bug
--HG--
extra : convert_revision : eca80a8c6bb75cf82bf1624f3d0170690b2928af
util/stats/db.py:
Update for newer MySQLdb; the result of a blob in a query is now an
array.array, so we need to convert it to a string
--HG--
extra : convert_revision : 32732983d3d7141755085ec4913fdae057edc67f
ipkb stat and formulas from the command line.
util/stats/info.py:
no need to raise an attribute error if two values aren't found
in the exact same set of runs. Would be good to check that each
run is the same though.
util/stats/stats.py:
more graph tweaking
command to execute a formula from the command line.
add interrupts per kilobyte of data
--HG--
extra : convert_revision : 78d6b14d340d08edcbc69e4c1c5a4c1dd9bb10dd
util/stats/stats.py:
we only need the system if we're issuing one of the commands that
uses a stored formula.
--HG--
extra : convert_revision : d129a00eeba46a03f7d600922d679aa0f43636be
util/stats/info.py:
Make the binnings stuff work again.
util/stats/stats.py:
small patch for graphing
make it so we can print out bins for the stat command
--HG--
extra : convert_revision : c0279ac7030fd5146dd00801baa41e7baf97d1f4
util/stats/stats.py:
tweak the graphing stuff for the new configurations we have.
add more graph types.
nsgige -> etherdev
deal with memory hierarchy change by using L2 instead of L3
--HG--
extra : convert_revision : 55362e79d9f8d0d68aa08129f5af944b378a9f4c
sim/main.cc:
Get rid of default.ini processing... it's kind of a pain and nobody uses it.
util/tracediff:
Add comments on usage.
--HG--
extra : convert_revision : b811288b2945585d60685684ea88c99d1913fbf3
util/stats/stats.py:
get the options from the options struct now
gratuitously change the output directory for graphs.
--HG--
extra : convert_revision : 468f34bdc2c8b5fc3a393eaa4da4ec288e35c8c7
Make the database creation/removal/cleanup code use python
Make formulas work with the database
Add support to do some graphing, but needs more work
Still need to work on vectors, 2d vectors, dists and vectordists
--HG--
extra : convert_revision : 1a88320dcc036a3751e8a036770766dce76a568c
SConscript:
Add pyconfig/{pyconfig,code}.cc
Add list of object description (.od) files.
Include pyconfig/SConscript.
base/inifile.cc:
Get rid of CPP_PIPE... it never really worked anyway.
base/inifile.hh:
Make load(ifstream&) method public so pyconfig
code can call it.
sim/main.cc:
Handle Python config scripts (end in '.py' instead of '.ini').
sim/pyconfig/m5configbase.py:
Add license.
Fix minor __setattr__ problem (2.3 related?)
--HG--
rename : util/config/m5configbase.py => sim/pyconfig/m5configbase.py
extra : convert_revision : 5e004922f950bfdefced333285584b80ad7ffb83
average the results.
It works on alpha, but I haven't gotten it working on x86, I think for
lack of knowing a good address to read.
--HG--
extra : convert_revision : e2442de641741674d692245712aa92e258cf6d48
the source tree for *.odesc files every time we run the script.
This is now factored out into load_odesc.py, which should be used
to generate m5odescs.py, which is then used as the source of object
& parameter definitions.
util/config/m5configbase.py:
- Move odesc loading code to separate load_odescs.py, so maybe someday
that can be done once at build time.
- Print out children of a node in the order they are added.
- Automatically assign a parent-less node to the first node for which it
is used as the value of a parameter. (Easier demonstrated than explained;
see the toy sketch after this list.)
- Calculate object paths dynamically when requested rather than trying
to keep them up to date as objects get assigned to parents.
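Since the parent-assignment rule above is easier demonstrated than explained, here is a toy stand-in (names and structure invented for illustration, not the real m5configbase.py code) showing both the automatic adoption and the on-demand path calculation:

    class Node(object):
        # Toy stand-in for a config object, only to illustrate the two rules above.
        def __init__(self, name):
            self.name = name
            self.parent = None

        def set_param(self, pname, value):
            # A parent-less node used as a parameter value is adopted by the
            # first node that uses it.
            if isinstance(value, Node) and value.parent is None:
                value.parent = self
            setattr(self, pname, value)

        def path(self):
            # Paths are computed on demand by walking up to the root.
            return self.name if self.parent is None else self.parent.path() + '.' + self.name

    cpu, cache = Node('cpu'), Node('icache')
    cpu.set_param('icache', cache)
    print(cache.path())    # -> 'cpu.icache'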
--HG--
rename : util/config/m5config.py => util/config/m5configbase.py
extra : convert_revision : 2183a09d32f3862ab377e0a929715f30505a03cb
- Add support for assigning NULL to SimObject pointers. In Python,
this is a special value, distinct from None (a toy sketch follows at the
end of this entry).
- Initial, incomplete pass at regenerating C++ parameter code (declarations
and INIT_PARAM macros) from .odesc files.
util/config/m5config.py:
- Add support for assigning NULL to SimObject pointers. In Python,
this is a special value, distinct from None.
- Initial, incomplete pass at regenerating C++ parameter code (declarations
and INIT_PARAM macros) from .odesc files.
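A toy illustration of the NULL-versus-None distinction; the sentinel class and the semantics of an unset pointer are assumptions, only the existence of a NULL value distinct from None comes from the commit:

    class NullSimObject(object):
        # Sentinel meaning "explicitly no object", as opposed to None,
        # which is treated here as "pointer never assigned".
        def __str__(self):
            return 'Null'

    NULL = NullSimObject()

    def ptr_param_string(value):
        if value is NULL:
            return 'Null'              # emit an explicit null pointer
        if value is None:
            raise ValueError('SimObject pointer was never assigned')
        return value.path()            # assume objects know their config path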
--HG--
extra : convert_revision : d7ae8f32e30b3c0829fd1a60589dd998e2e0d0d7