Commit graph

68 commits

Author SHA1 Message Date
Evgeniy Ivanov
2487445f5f make panic() work for multiboot/elf case
. we cannot use the boot monitor to print the system diag buffer
	. for serial, we do nothing, just reset; everything is already printed
	. for non-serial, we print the current diag buffer using direct video
	  memory access from the kernel
2012-02-14 14:48:10 +01:00
Arun Thomas
ae561b8f12 Add MKAPIC and MKACPI options 2011-07-31 16:22:43 +02:00
Erik van der Kouwe
03a7d0e8ae Add cttybaud boot monitor variable to control speed of serial console (combine with ctty 0) 2011-03-16 12:25:10 +00:00
Ben Gras
07bfb4f4e4 kernel - account for kernel cpu time (ipc, kcalls) in caller 2011-02-08 13:58:32 +00:00
Ben Gras
95702f970b kernel - doesn't do lock timings any more 2011-02-04 13:42:17 +00:00
Arun Thomas
6e86430130 Remove code for kernel task stack initialization
We no longer have kernel tasks, so this code is unnecessary
2011-01-27 12:18:33 +00:00
Tomas Hruby
c9bfb13cdb Kernel keeps information about each cpu
- kernel maintains a cpu_info array which contains various
  information about each cpu, filled in as each cpu boots

- the information contains identification, features etc.
2010-10-26 21:07:27 +00:00
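A minimal sketch of what such a per-cpu record could look like (the field names below are illustrative assumptions, not the commit's actual layout; CONFIG_MAX_CPUS is the build-system constant mentioned later in this log):

    /* illustrative only: one record per CPU, filled in as that CPU boots */
    struct cpu_info {
            unsigned apic_id;       /* local APIC id used to address the CPU */
            unsigned family;        /* CPUID identification */
            unsigned model;
            unsigned stepping;
            unsigned flags;         /* feature bits (TSC, APIC, HT, ...) */
            unsigned freq_mhz;      /* measured clock frequency */
    };

    struct cpu_info cpu_info[CONFIG_MAX_CPUS];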
Tomas Hruby
13bda81ee0 Fixed FPU for single cpu 2010-09-16 09:51:45 +00:00
Tomas Hruby
72cc01ff48 apic_timer_x
- set the apic_timer_x factor variable to slow down the apic timer in
  virtual machines
2010-09-16 07:18:47 +00:00
Tomas Hruby
5b8b623765 SMP - lazy FPU
- when a process is migrated to a different CPU it may have an active
  FPU context in the processor registers. We must save it and migrate
  it together with the process.
2010-09-15 14:11:25 +00:00
Tomas Hruby
c554aef0e1 SMP - BKL statistics
- pressing 'B' on the serial console prints per-cpu statistics for the BKL.

- 'b' resets the counters

- it presents the number of cycles each CPU spends in the kernel and how
  many cycles it spends spinning while waiting for the BKL

- it shows an optimistic estimate of how many times we get the lock
  immediately without spinning. As the test is not atomic, the lock may
  already be held by some other cpu before we actually try to acquire
  it.
2010-09-15 14:10:37 +00:00
Tomas Hruby
62c666566e SMP - We boot APs
- kernel detects CPUs by searching ACPI tables for local apic nodes

- each CPU has its own TSS that points to its own stack. All cpus boot
  on the same boot stack (in sequence) but switch to their private
  stacks as soon as they can.

- final booting code in main() placed in bsp_finish_booting() which is
  executed only after the BSP switches to its final stack

- apic functions to send startup interrupts

- assembler functions to handle CPU features not needed for single cpu
  mode, like memory barriers, HT detection etc.

- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
  kernel/arch/i386/include/arch_smp.h

- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPUs up to 32-bit mode and
  lets them spin in an infinite loop so they don't do any damage.

- implementation of kernel spinlock

- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
2010-09-15 14:09:52 +00:00
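A minimal sketch of the kind of kernel spinlock this commit introduces (GCC x86 atomic builtins are used here; the actual implementation and its names may differ):

    typedef struct { volatile int val; } spinlock_t;

    static void spinlock_lock(spinlock_t *lock)
    {
            /* atomically set val to 1; we own the lock once the old value was 0 */
            while (__sync_lock_test_and_set(&lock->val, 1) != 0) {
                    while (lock->val != 0)
                            ;       /* spin on a plain read to reduce bus traffic */
            }
    }

    static void spinlock_unlock(spinlock_t *lock)
    {
            __sync_lock_release(&lock->val);        /* store 0, release semantics */
    }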
Tomas Hruby
13a0d5fa5e SMP - Cpu local variables
- most global variables carry information which is specific to the
  local CPU and each CPU must have its own copy

- cpu local variables must be declared in cpulocal.h between the
  DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using the
  DECLARE_CPULOCAL macro

- to access the cpu local data the provided macros must be used

	get_cpu_var(cpu, name)
	get_cpu_var_ptr(cpu, name)

	get_cpulocal_var(name)
	get_cpulocal_var_ptr(name)

- using these macros makes future changes in the implementation
  possible

- switching to ELF will make the declaration of cpu local data much
  simpler, e.g.

  CPULOCAL int blah;

  anywhere in the kernel source code
2010-09-15 14:09:46 +00:00
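A rough sketch of how the macros above can be implemented on top of a per-cpu structure array (only the macro names come from the commit; the expansion below is an assumption):

    /* every cpu-local variable becomes a field of this structure,
     * and each CPU owns one element of the array */
    struct cpu_local {
            struct proc *proc_ptr;          /* example cpu-local variable */
            unsigned long kernel_ticks;     /* another example */
    };

    extern struct cpu_local __cpu_local[CONFIG_MAX_CPUS];
    extern int cpuid;                       /* id of the CPU running this code */

    #define get_cpu_var(cpu, name)          (__cpu_local[cpu].name)
    #define get_cpu_var_ptr(cpu, name)      (&__cpu_local[cpu].name)
    #define get_cpulocal_var(name)          get_cpu_var(cpuid, name)
    #define get_cpulocal_var_ptr(name)      get_cpu_var_ptr(cpuid, name)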
Ben Gras
b05c989298 kernel - prettier output for ipc errors, call names instead of trap numbers 2010-07-16 15:36:29 +00:00
Tomas Hruby
cbc9586c13 Lazy FPU
- FPU context is stored only on a conflict between 2 FPU users or while
  exporting the context of a process to userspace while it is the
  active user of the FPU

- FPU has its owner (fpu_owner) which points to the process whose
  state is currently loaded in FPU

- the FPU exception is only turned on when scheduling a process which
  is not the owner of FPU

- FPU state is restored for the process that generated the FPU
  exception. This process runs immediately, without letting the
  scheduler pick a new process, to resolve the FPU conflict asap and to
  minimize FPU thrashing and FPU exception handler execution

- all non-FPU-exception kernel entries are faster as the FPU state is
  neither checked nor saved

- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
  that a process has used FPU in the past
2010-06-07 07:43:17 +00:00
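A minimal sketch of the ownership test described above (save_fpu_state(), restore_fpu_state(), clts() and stts() are hypothetical helpers; on x86 the FPU exception is armed by setting CR0.TS):

    struct proc *fpu_owner;         /* whose state is in the FPU registers, or NULL */

    /* when scheduling p: trap on FPU use only if p does not own the FPU state */
    static void fpu_schedule(struct proc *p)
    {
            if (fpu_owner == p)
                    clts();         /* clear CR0.TS: p may use the FPU freely */
            else
                    stts();         /* set CR0.TS: first FPU use by p traps */
    }

    /* FPU exception: resolve the conflict and let p continue immediately */
    static void fpu_exception(struct proc *p)
    {
            if (fpu_owner != NULL && fpu_owner != p)
                    save_fpu_state(fpu_owner);      /* old context stored only now */
            restore_fpu_state(p);   /* or initialize the FPU if p never used it */
            fpu_owner = p;
            clts();
    }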
Ben Gras
2f892aca91 kernel fpu context switching: fix race condition
There seems to have been a broken assumption in the fpu context
restoring code.  It restores the context of the running process, without
any guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
the fpu hardware being enabled, causing an fpu exception in the kernel.
This practically only shows up with DEBUG_RACE on. Fix by thruby+me.

The fix
 . is to only set the fpu-in-use-by-this-process flag in the
   exception handler, and then take care of fpu restoring when
   actually returning to userspace

And the patch
 . translates fpu saving and restoring to c in arch_system.c,
   getting rid of a juicy chunk of assembly
 . makes osfxsr_feature private to arch_system.c
 . removes most of the arch dependent code from do_sigsend
2010-06-03 11:32:22 +00:00
Tomas Hruby
451a6890d6 scheduling - time quantum in milliseconds
- Currently the cpu time quantum is timer-ticks based. Thus the
  remaining quantum is decreased only if the process is interrupted
  by a timer tick. As processes block a lot this typically does not
  happen for normal user processes. Also the quantum depends on the
  frequency of the timer.

- This change makes the quantum milliseconds based. Internally the
  milliseconds are translated into cpu cycles. Every time userspace
  execution is interrupted by the kernel, the cycles just consumed by
  the current process are deducted from the remaining quantum.

- It makes the quantum independent of the system timer frequency.

- The boot processes' quanta are loosely derived from the tick-based
  quanta and the 60Hz timer, and are subject to future change

- the 64-bit arithmetic is a little ugly; it will be changed once we
  have compiler support for 64-bit integers (soon)
2010-05-25 08:06:14 +00:00
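A sketch of the milliseconds-to-cycles bookkeeping described above (cpu_hz and the field names are illustrative assumptions; the commit notes the real 64-bit arithmetic is still done by hand):

    typedef unsigned long long u64_t;
    extern u64_t cpu_hz;                    /* measured CPU frequency in Hz */

    struct sched_quantum {
            u64_t quantum_cycles;           /* full quantum, converted from ms */
            u64_t cycles_left;              /* remainder of the current quantum */
    };

    static void set_quantum_ms(struct sched_quantum *s, unsigned ms)
    {
            s->quantum_cycles = (cpu_hz / 1000) * ms;
            s->cycles_left    = s->quantum_cycles;
    }

    /* on every kernel entry that interrupts userspace: deduct what the
     * current process just consumed; returns 1 when the quantum is spent */
    static int charge_cycles(struct sched_quantum *s, u64_t consumed)
    {
            if (consumed >= s->cycles_left) {
                    s->cycles_left = 0;
                    return 1;
            }
            s->cycles_left -= consumed;
            return 0;
    }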
Tomas Hruby
f51eea4b32 Changed pagefault delivery to VM
This patch changes the way pagefaults are delivered to VM. It adopts
the same model as the out-of-quantum messages sent by the kernel to a
scheduler.

- every time a userspace pagefault occurs, the kernel creates a message
  which is sent to VM on behalf of the faulting process

- the process is blocked on delivery to VM in the standard IPC code
  instead of waiting in a special in-kernel queue (stack) and is not
  runnable until VM tells the kernel that the pagefault is resolved and
  it is free to clear the RTS_PAGEFAULT flag.

- VM does not need to call the kernel to poll for pagefault
  information, which saves many (1/2?) calls and kernel calls that
  return "no more data"

- VM notification by the kernel does not need to use signals

- each entry in the proc table is 12 bytes smaller (~3k saved)
2010-04-26 23:21:26 +00:00
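A sketch of sending the pagefault to VM on behalf of the faulting process, as described above (the message type, field names and the send helper are assumptions; RTS_PAGEFAULT comes from the commit):

    /* illustrative only: called from the pagefault handler for a user fault */
    static void report_pagefault(struct proc *p, vir_bytes addr, int err_code)
    {
            message m;

            m.m_type        = VM_PAGEFAULT;         /* hypothetical type */
            m.m_pf_endpoint = p->p_endpoint;        /* who faulted */
            m.m_pf_addr     = addr;                 /* where */
            m.m_pf_flags    = err_code;             /* read/write, present, ... */

            /* block p until VM resolves the fault and clears the flag;
             * delivery goes through the normal IPC path, not a special queue */
            RTS_SET(p, RTS_PAGEFAULT);
            ipc_send_on_behalf_of(p, VM_PROC_NR, &m);       /* hypothetical */
    }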
Kees van Reeuwijk
d106968d77 Remove useless symbol declarations from headers, make symbols local where possible, add some explicit initialization to global variables. 2010-04-22 07:49:40 +00:00
Arun Thomas
1f9ce647cf Move archtypes.h, fpu.h, and stackframe.h
Move archtypes.h to include/ dir, since several servers require it. Move
fpu.h and stackframe.h to arch-specific header directory. Make source
files and makefiles aware of the new header locations.
2010-03-09 09:41:14 +00:00
Erik van der Kouwe
ff835e0e35 use the verbose=2 boot monitor setting to get extensive output for debugging 2010-02-13 22:11:16 +00:00
Tomas Hruby
1b56fdb33c Time accounting based on TSC
- as there are still KERNEL and IDLE entries, time accounting for
  kernel and idle time works the same as for any other process

- every time we stop accounting for the currently running process,
  kernel or idle, we read the TSC counter and increment the p_cycles
  entry.

- the process cycles inherently include some of the kernel cycles as
  we can stop accounting for the process only after we save its
  context and we start accounting just before we restore its context

- this assumes that the system does not scale the CPU frequency, which
  will be true for ... a long time ;-)
2010-02-10 15:36:54 +00:00
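A sketch of the accounting step described above (rdtsc is the x86 instruction; the surrounding names are illustrative):

    typedef unsigned long long u64_t;

    static u64_t tsc_at_switch;     /* TSC value when accounting (re)started */

    static u64_t read_tsc_64(void)
    {
            unsigned lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((u64_t)hi << 32) | lo;
    }

    /* stop accounting for p, right after its context has been saved */
    static void cycles_accounting_stop(struct proc *p)
    {
            p->p_cycles += read_tsc_64() - tsc_at_switch;
    }

    /* start accounting for the next process, right before restoring its context */
    static void cycles_accounting_start(void)
    {
            tsc_at_switch = read_tsc_64();
    }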
Tomas Hruby
728f0f0c49 Removal of the system task
* Userspace change to use the new kernel calls

	- _taskcall(SYSTASK...) changed to _kernel_call(...)

	- int 32 reused for the kernel calls

	- _do_kernel_call() to make the trap to kernel

	- kernel_call() to make the actuall kernel call from C using
	  _do_kernel_call()

	- unlike an ipc call, the kernel call always succeeds as the
	  kernel is always available; however, the kernel may return an
	  error

* Kernel side implementation of kernel calls

	- the SYSTEM task does not run, only the proc table entry is
	  preserved

	- every data_copy(SYSTEM, ...) is now data_copy(KERNEL, ...)

	- "locking" is an empty operation now as everything runs in
	  kernel

	- sys_task() is replaced by kernel_call() which copies the
	  message into kernel, dispatches the call to its handler and
	  finishes by either copying the results back to userspace (if
	  need be) or by suspending the process because of VM

	- suspended processes are later made runnable once the memory
	  issue is resolved, picked up by the scheduler and only at
	  this time the call is resumed (in fact restarted) which does
	  not need to copy the message from userspace as the message
	  is already saved in the process structure.

	- no need for the vmrestart queue, the scheduler will restart
	  the system calls

	- no special case in do_vmctl(), all requests remove the
	  RTS_VMREQUEST flag
2010-02-09 15:20:09 +00:00
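A sketch of the kernel-side flow described above (the copy helpers, the dispatch table call_vec, the call-number base KERNEL_CALL, the VMSUSPEND result and the saved-message field are assumed names; RTS_VMREQUEST appears in the commit):

    /* illustrative flow of kernel_call() */
    void kernel_call(message *m_user, struct proc *caller)
    {
            message m;
            int result;

            /* 1. copy the request message from the caller's address space */
            if (copy_msg_from_user(caller, m_user, &m) != OK)
                    return;                 /* bad pointer: nothing to reply to */

            /* 2. dispatch to the handler for this call number */
            result = call_vec[m.m_type - KERNEL_CALL](caller, &m);

            /* 3. reply now, or suspend the caller until VM fixes the memory
             *    issue; the saved message lets the call be restarted later
             *    without copying it from userspace again */
            if (result == VMSUSPEND) {
                    caller->p_saved_kcall = m;      /* hypothetical field */
                    RTS_SET(caller, RTS_VMREQUEST);
            } else {
                    m.m_type = result;
                    copy_msg_to_user(caller, &m, m_user);
            }
    }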
Tomas Hruby
8f82633fa2 Removed useless global variable sys_call_code
- we have the same information in the message (m_ptr) where needed
2010-02-03 18:17:01 +00:00
Tomas Hruby
cca24d06d8 This patch removes the global variables who_p and who_e from the
kernel (sys task).  The main reason is that these would have to become
cpu local variables on SMP.  Once the system task is not a task but a
genuine part of the kernel there is even less reason to have these
extra variables as proc_ptr will already contain all necessary
information. In addition, converting who_e to the process pointer and
back again all the time will be avoided.

Although proc_ptr will contain all important information, accessing it
as a cpu local variable will be fairly expensive, hence the value
would be assigned to some on stack local variable. Therefore it is
better to add the 'caller' argument to the syscall handlers to pass
the value on the stack anyway. It also clearly denotes on whose behalf
the syscall is being executed.

This patch also ANSIfies the syscall function headers.

Last but not least, it also fixes a potential bug in virtual_copy_f()
in case the check is disabled. So far, in case of a failure, the
function could possibly reuse an old who_p if it had not been called
from the system task.

virtual_copy_f() takes the caller as a parameter too. In case the
checking is disabled the caller must be NULL, and non-NULL if it is
enabled, as we must be able to suspend the caller.
2010-02-03 09:04:48 +00:00
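A sketch of the resulting handler shape (the handler name and body are hypothetical; the point is the explicit 'caller' parameter replacing the who_p/who_e globals):

    /* before: the handler looked up the caller via the global who_p / who_e
     * after:  the caller is passed on the stack, which is SMP-friendly and
     *         makes explicit on whose behalf the call runs */
    int do_example(struct proc *caller, message *m_ptr)
    {
            if (!example_allowed(caller))           /* hypothetical check */
                    return EPERM;
            return example_work(caller->p_endpoint, m_ptr);  /* hypothetical */
    }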
Kees van Reeuwijk
a701e290f7 Removed unused symbols.
Made some functions PRIVATE, including ones that aren't used anywhere.
2010-01-25 18:13:48 +00:00
Tomas Hruby
5efa92f754 NMI watchdog is an awesome feature for debugging locked up kernels.
There is not that much use for it on a single CPU, however, a deadlock
between the kernel and the system task can be detected, as can a
runaway loop.

If a kernel gets locked up, timer interrupts don't occur (as all
interrupts are disabled in kernel mode). The only chance is to
interrupt the kernel with a non-maskable interrupt.

This patch generates NMIs using performance counters. It uses the most
widely available performance counters. As the performance counters are
highly model-specific, this patch is not guaranteed to work on every
machine.  Unfortunately this is also true for KVM :-/ On the other
hand, adding this feature for other models is not extremely difficult
and the framework hopefully makes it easy enough.

Depending on the frequency of the CPU, an NMI is generated at most
about every 0.5s. If the cpu's speed is less than 2GHz it is generated
at most every 1s. In general an NMI is generated much less often, as
the performance counter counts down only if the cpu is not idle.
Therefore the overhead of this feature is fairly minimal even if the
load is high.

Upon detecting that the kernel is locked up, the kernel dumps the
state of the kernel registers and panics.

Local APIC must be enabled for the watchdog to work.

The code is _always_ compiled in; however, it is only enabled if
watchdog=<non-zero> is set in the boot monitor.

One corner case is serial console debugging. As dumping a lot of stuff
to the serial link may take a lot of time, the watchdog does not
detect lockups during this time!!! as it would result in too many
false positives. 10 NMIs have to be handled before a lockup is
detected. This means something between ~5s and 10s.

Another corner case is that the watchdog is enabled only after paging
is enabled, as it would be pure madness to try to get it right before
that.
2010-01-16 20:53:55 +00:00
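A sketch of how the NMI period quoted above can follow from the counter width (the 31-bit limit and the halving strategy are assumptions chosen to match the stated 0.5s/1s periods; the model-specific MSR programming is omitted):

    typedef unsigned long long u64_t;
    extern u64_t cpu_hz;                    /* CPU frequency in Hz */

    /* start from one second's worth of cycles and halve until the value fits
     * the counter: a >2GHz CPU then gets an NMI roughly every 0.5s of busy
     * time, a slower CPU roughly every 1s.  The counter only counts while the
     * CPU is not idle, so real periods are usually much longer. */
    static unsigned watchdog_reload_value(void)
    {
            u64_t count = cpu_hz;
            while (count > 0x7fffffffULL)   /* assumed usable counter width */
                    count /= 2;
            return (unsigned) count;
    }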
Ben Gras
bd42705433 FPU context switching support by Evgeniy Ivanov. 2009-12-02 13:01:48 +00:00
David van Moolenbroek
fce9fd4b4e Add 'getidle' CPU utilization measurement infrastructure 2009-12-02 11:52:26 +00:00
Tomas Hruby
8a44a44cb9 Local APIC
- local APIC timer used as the source of time

- PIC is still used as the hw interrupt controller as we don't have
  enough info without ACPI or MPS to set up IO APICs

- remapping of APIC when switching paging on, uses the new mechanism
  to tell VM what phys areas to map in kernel's virtual space

- one more step to SMP

based on code by Arun C.
2009-11-16 21:41:44 +00:00
Tomas Hruby
daf7940c69 pick_proc() called only just before returning to userspace
- new proc_is_runnable() macro to test whether a process is runnable. All
  tests of whether p_rts_flags == 0 are converted to use this macro

- pick_proc() calls removed from enqueue() and dequeue()

- removed the test for recursive calls from pick_proc() as it certainly cannot
  be called recursively now

- PREEMPTED flag to mark processes that were preempted by enqueueing a higher
  priority process in enqueue()

- enqueue_head() to enqueue PREEMPTED processes again at the head of their
  current priority queue

- NO_QUANTUM flag to block and dequeue processes preempted by timer tick with
  exceeded quantum. They need to be enqueued again in schedcheck()

- next_ptr global variable removed
2009-11-09 17:48:31 +00:00
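A sketch of the macro and of enqueue_head() as described above (the ready-queue variables are illustrative; the p_rts_flags test is exactly what the commit states):

    #define proc_is_runnable(p)     ((p)->p_rts_flags == 0)

    /* put a PREEMPTED process back at the head of its priority queue so it
     * finishes the remainder of its quantum before its peers run */
    static void enqueue_head(struct proc *rp)
    {
            int q = rp->p_priority;

            rp->p_nextready = rdy_head[q];
            rdy_head[q] = rp;
            if (rdy_tail[q] == NULL)
                    rdy_tail[q] = rp;
    }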
Tomas Hruby
ae75f9d4e5 Removal of the executable flag from files that cannot be executed
- 755 -> 644
2009-11-09 10:26:00 +00:00
Tomas Hruby
ebbce7507b Complete overhaul of mode switching code
- after a trap to kernel, the code automatically switches to kernel
  stack, in the future local to the CPU

- k_reenter variable replaced by a test whether the CS is kernel cs or
  not. The information is passed further if needed. Removes a global
  variable which would need to be cpu local

- no need for global variables describing the exception or trap
  context. This information is kept on stack and a pointer to this
  structure is passed to the C code as a single structure

- removed loadedcr3 variable and its use replaced by reading the %cr3
  register

- no need to redisable interrupts in restart() as they are already
  disabled.

- unified handling of traps that push and don't push errorcode

- removed the save() function as the process context is not saved
  directly to the process table but saved as required by the trap code.
  Essentially it means that the save() code is inlined everywhere, not
  only in the exception handling routine

- returning from a syscall is more arch independent - it sets the
  return register in C

- the top of the x86 stack contains the current CPU id and a pointer to
  the currently scheduled process (the one just interrupted) so the
  mode switch code can find where to save the context without needing
  to use proc_ptr, which will be cpu local in the future and therefore
  difficult to access in assembler and expensive to access in general

- some more cleanup of the level0 code. No need to read back the
  argument passed in %eax from the proc structure. The mode switch code
  does not clobber the general registers and hence we can just call
  what is in %eax

- many assembly macros in sconst.h as they will be reused by the apic
  assembly
2009-11-06 09:08:26 +00:00
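A sketch of the stack-top layout described above (names are illustrative; the low-level code finds this structure from the stack pointer):

    /* placed at the top of each CPU's kernel stack so the mode switch code
     * can locate both values without touching cpu-local C variables */
    struct stacktop {
            int cpu_id;                     /* which CPU this stack belongs to */
            struct proc *proc_ptr;          /* the process just interrupted */
    };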
Tomas Hruby
f2a1f21a39 Clock task split
- preemption handled in the clock timer interrupt handler, not in the clock task

- more architecture-independent clock timer handling code

- SMP ready, as each CPU can have its own timer
2009-11-06 09:04:15 +00:00
Ben Gras
30804b9ed7 thanks to tomas: fix for level0() race condition - global variable can
be used concurrently.  pass the function in eax instead; this gets rid
of the global variable.  also execute the function directly if we're
already trapped into the kernel.

revert of u32_t endpoint_t to int (some code assumes endpoints are
negative for negative slot numbers).
2009-10-05 15:22:31 +00:00
Ben Gras
fe35879325 - panic if there's no runnable process
- more basic sanity check before recursive enter check (data segment)
- try to jump to boot monitor instantly on recursive panic
2009-10-03 11:30:35 +00:00
Ben Gras
cd8b915ed9 Primary goal for these changes is:
- no longer have kernel have its own page table that is loaded
    on every kernel entry (trap, interrupt, exception). the primary
    purpose is to reduce the number of required reloads.
Result:
  - kernel can only access memory of process that was running when
    kernel was entered
  - kernel must be mapped into every process page table, so traps to
    kernel keep working
Problem:
  - kernel must often access memory of arbitrary processes (e.g. send
    arbitrary processes messages); this can't happen directly any more;
    usually because that process' page table isn't loaded at all, sometimes
    because that memory isn't mapped in at all, sometimes because it isn't
    mapped in read-write.
So:
  - kernel must be able to map in memory of any process, in its own
    address space.
Implementation:
  - VM and kernel share a range of memory in which addresses of
    all page tables of all processes are available. This has two purposes:
      . Kernel has to know what data to copy in order to map in a range
      . Kernel has to know where to write the data in order to map it in
    That last point is because kernel has to write in the currently loaded
    page table.
  - Processes and kernel are separated through segments; kernel segments
    haven't changed.
  - The kernel keeps the process whose page table is currently loaded
    in 'ptproc.'
  - If it wants to map in a range of memory, it writes the value of the
    page directory entry for that range into the page directory entry
    in the currently loaded map. There is a slot reserved for such
    purposes. The kernel can then access this memory directly.
  - In order to do this, its segment has been increased (and the
    segments of processes start where it ends).
  - In the pagefault handler, detect if the kernel is doing
    'trappable' memory access (i.e. a pagefault isn't a fatal
     error) and if so,
       - set the saved instruction pointer to phys_copy_fault,
	 breaking out of phys_copy
       - set the saved eax register to the address of the page
	 fault, both for sanity checking and for checking in
	 which of the two ranges that phys_copy was called
	 with the fault occurred
  - Some boot-time processes do not have their own page table,
    and are mapped in with the kernel, and separated with
    segments. The kernel detects this using HASPT. If such a
    process has to be scheduled, any page table will work and
    no page table switch is done.

Major changes in kernel are
  - When accessing user processes memory, kernel no longer
    explicitly checks before it does so if that memory is OK.
    It simply makes the mapping (if necessary), tries to do the
    operation, and traps the pagefault if that memory isn't present;
    if that happens, the copy function returns EFAULT.
    So all of the CHECKRANGE_OR_SUSPEND macros are gone.
  - Kernel no longer has to copy/read and parse page tables.
  - A message copying optimisation: when messages are copied, and
    the recipient isn't mapped in, they are copied into a buffer
    in the kernel. This is done in QueueMess. The next time
    the recipient is scheduled, this message is copied into
    its memory. This happens in schedcheck().
    This eliminates the mapping/copying step for messages, and makes
    it easier to deliver messages. This eliminates soft_notify.
  - Kernel no longer creates a page table at all, so the vm_setbuf
    and pagetable writing in memory.c is gone.

Minor changes in kernel are
  - ipc_stats thrown out, wasn't used
  - misc flags all renamed to MF_*
  - NOREC_* macros to enter and leave functions that should not
    be called recursively; just sanity checks really
  - code to fully decode segment selectors and descriptors
    to print on exceptions
  - lots of vmassert()s added, only executed if DEBUG_VMASSERT is 1
2009-09-21 14:31:52 +00:00
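A sketch of the 'trappable' kernel access described above (frame layout and helper names are hypothetical; the real check and the phys_copy_fault label live partly in assembly):

    extern void phys_copy_fault(void);      /* resume label inside phys_copy */
    extern int catch_pagefaults;            /* nonzero during a trappable copy */

    /* in the pagefault handler: if the kernel itself faulted while doing a
     * trappable copy, redirect it instead of treating the fault as fatal;
     * phys_copy then returns EFAULT to its caller */
    static int redirect_kernel_fault(struct trapframe *frame, unsigned long addr)
    {
            if (!frame_is_kernel(frame) || !catch_pagefaults)
                    return 0;               /* not ours: handle normally */

            frame->eip = (unsigned long) phys_copy_fault;   /* break out of the copy */
            frame->eax = addr;              /* which address faulted, for checks */
            return 1;
    }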
Ben Gras
9647fbc94e moved type and constants for random data to include file;
added consistency check in random; added source of randomness
internal to random using timing; only retrieve random bins that are full.
2009-04-02 15:24:44 +00:00
Ben Gras
c27008fbcc cprofile not conditional 2009-01-09 21:44:52 +00:00
Ben Gras
128a0508c0 timing measurement code out of kernel and into library
(so other components can use it too)
2009-01-09 16:15:15 +00:00
Ben Gras
afef5e0711 . some flags to <minix/const.h>
. add system_hz for runtime HZ value
2008-12-11 14:12:52 +00:00
Ben Gras
c078ec0331 Basic VM and other minor improvements.
Not complete, probably not fully debugged or optimized.
2008-11-19 12:26:10 +00:00
Philip Homburg
992edfd558 Keep track of various statistics related to IPC and SYSTEM. 2008-02-22 12:36:46 +00:00
Philip Homburg
6ef2e9b866 Added global variable boottime, prototype for do_stime, and table entry for
SYS_STIME.
2007-08-07 12:21:40 +00:00
Ben Gras
6f77685609 Split of architecture-dependent and -independent functions for i386,
mainly in the kernel and headers. This split based on work by
Ingmar Alting <iaalting@cs.vu.nl> done for his Minix PowerPC architecture
port.

 . kernel does not program the interrupt controller directly, do any
   other architecture-dependent operations, or contain assembly any more,
   but uses architecture-dependent functions in arch/$(ARCH)/.
 . architecture-dependent constants and types defined in arch/$(ARCH)/include.
 . <ibm/portio.h> moved to <minix/portio.h>, as they have become, for now,
   architecture-independent functions.
 . int86, sdevio, readbios, and iopenable are now i386-specific kernel calls
   and live in arch/i386/do_* now.
 . i386 arch now supports even less 86 code; e.g. mpx86.s and klib86.s have
   gone, and 'machine.protected' is gone (and always taken to be 1 in i386).
   If 86 support is to return, it should be a new architecture.
 . prototypes for the architecture-dependent functions defined in
   kernel/arch/$(ARCH)/*.c but used in kernel/ are in kernel/proto.h
 . /etc/make.conf included in makefiles and shell scripts that need to
   know the building architecture; it defines ARCH=<arch>, currently only
   i386.
 . some basic per-architecture build support outside of the kernel (lib)
 . in clock.c, only dequeue a process if it was ready
 . fixes for new include files

files deleted:
 . mpx/klib.s - only for choosing between mpx/klib86 and -386
 . klib86.s - only for 86

i386-specific files moved (or arch-dependent stuff moved) to arch/i386/:
 . mpx386.s (entry point)
 . klib386.s
 . sconst.h
 . exception.c
 . protect.c
 . protect.h
 . i8259.c
2006-12-22 15:22:27 +00:00
Ben Gras
b89c6634f5 Use endpoint_t. New prototypes for related to grants and safecopy functions. 2006-06-20 09:57:00 +00:00
Ben Gras
1335d5d700 'proc number' is the process slot; 'endpoints' are generation-aware process
instance numbers, encoded and decoded using macros in <minix/endpoint.h>.

proc number -> endpoint migration
  . proc_nr in the interrupt hook is now an endpoint, proc_nr_e.
  . m_source for messages and notifies is now an endpoint, instead of
    proc number.
  . isokendpt() converts an endpoint to a process number, returns
    success (but fails if the process number is out of range, the
    process slot is not a living process, or the given endpoint
    number does not match the endpoint number in the process slot,
    indicating an old process).
  . okendpt() is the same as isokendpt(), but panic()s if the conversion
    fails. This is mainly used for decoding message.m_source endpoints,
    and other endpoint numbers in kernel data structures, which should
    always be correct.
  . if DEBUG_ENABLE_IPC_WARNINGS is enabled, isokendpt() and okendpt()
    get passed the __FILE__ and __LINE__ of the calling lines, and
    print messages about what is wrong with the endpoint number
    (out of range proc, empty proc, or inconsistent endpoint number),
    with the caller, making finding where the conversion failed easy
    without having to include code for every call to print where things
    went wrong. Sometimes this is harmless (wrong arg to a kernel call),
    sometimes it's a fatal internal inconsistency (bogus m_source).
  . some process table fields have had an _e appended to indicate they
    have become endpoints.
  . process endpoint is stored in p_endpoint, without generation number.
    it turns out the kernel never needs the generation number, except
    when fork()ing, so it's decoded then.
  . kernel calls all take endpoints as arguments, not proc numbers.
    the one exception is sys_fork(), which needs to know in which slot
    to put the child.
2006-03-03 10:00:02 +00:00
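A sketch of generation-aware endpoint encoding in the spirit of <minix/endpoint.h> (the generation size and the exact macro shapes here are assumptions; slots can be negative for kernel tasks, hence the MAX_NR_TASKS offset):

    /* an endpoint packs a generation count and a process slot, so a message
     * aimed at a dead-and-reused slot can be recognized as stale */
    #define _ENDPOINT_GENERATION_SIZE  0x1000               /* assumed */
    #define _ENDPOINT(g, p)   ((g) * _ENDPOINT_GENERATION_SIZE + (p))
    #define _ENDPOINT_G(e)    (((e) + MAX_NR_TASKS) / _ENDPOINT_GENERATION_SIZE)
    #define _ENDPOINT_P(e)    ((((e) + MAX_NR_TASKS) % _ENDPOINT_GENERATION_SIZE) \
                                    - MAX_NR_TASKS)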
Ben Gras
87f2236ad2 load average measurement implementation, accessible through
getloadavg() system call in the library.
2005-11-14 15:50:46 +00:00
Philip Homburg
9bee3f4b08 IOPL, VM, and serial debug output (disabled). 2005-09-30 12:54:59 +00:00
Ben Gras
c655d8b3ae Added shutdown_started global variable. If it's set, we're in the
process of doing a shutdown.

Initial purpose is to suppress the dead process diagnostic message.
2005-09-08 14:31:23 +00:00