/* This file contains the main program of MINIX as well as its shutdown code.
 * The routine main() initializes the system and starts the ball rolling by
 * setting up the process table, interrupt vectors, and scheduling each task
 * to run to initialize itself.
 * The routine shutdown() does the opposite and brings down MINIX.
 *
 * The entries into this file are:
 *   main:                MINIX main program
 *   prepare_shutdown:    prepare to take MINIX down
 */
#include "kernel.h"
#include <string.h>
#include <unistd.h>
#include <assert.h>
#include <a.out.h>
#include <minix/com.h>
#include <minix/endpoint.h>
#include "proc.h"
#include "debug.h"
#include "clock.h"

/* Prototype declarations for PRIVATE functions. */
FORWARD _PROTOTYPE( void announce, (void));

/*===========================================================================*
 *                              main                                         *
 *===========================================================================*/
PUBLIC void main()
{
/* Start the ball rolling. */
  struct boot_image *ip;        /* boot image pointer */
  register struct proc *rp;     /* process pointer */
  register struct priv *sp;     /* privilege structure pointer */
  register int i, j;
  int hdrindex;                 /* index to array of a.out headers */
  phys_clicks text_base;
  vir_clicks text_clicks, data_clicks, st_clicks;
  reg_t ktsb;                   /* kernel task stack base */
  struct exec e_hdr;            /* for a copy of an a.out header */

  /* Global value to test segment sanity. */
  magictest = MAGICTEST;

  DEBUGMAX(("main()\n"));

/* Clear the process table. Anounce each slot as empty and set up mappings
   * for proc_addr() and proc_nr() macros. Do the same for the table with
   * privilege structures for the system processes.
   */
  for (rp = BEG_PROC_ADDR, i = -NR_TASKS; rp < END_PROC_ADDR; ++rp, ++i) {
        rp->p_rts_flags = RTS_SLOT_FREE;        /* initialize free slot */
        rp->p_magic = PMAGIC;
        rp->p_nr = i;                           /* proc number from ptr */
|
'proc number' is process slot, 'endpoint' are generation-aware process
instance numbers, encoded and decoded using macros in <minix/endpoint.h>.
proc number -> endpoint migration
. proc_nr in the interrupt hook is now an endpoint, proc_nr_e.
. m_source for messages and notifies is now an endpoint, instead of
proc number.
. isokendpt() converts an endpoint to a process number, returns
success (but fails if the process number is out of range, the
process slot is not a living process, or the given endpoint
number does not match the endpoint number in the process slot,
indicating an old process).
. okendpt() is the same as isokendpt(), but panic()s if the conversion
fails. This is mainly used for decoding message.m_source endpoints,
and other endpoint numbers in kernel data structures, which should
always be correct.
. if DEBUG_ENABLE_IPC_WARNINGS is enabled, isokendpt() and okendpt()
get passed the __FILE__ and __LINE__ of the calling lines, and
print messages about what is wrong with the endpoint number
(out of range proc, empty proc, or inconsistent endpoint number),
with the caller, making finding where the conversion failed easy
without having to include code for every call to print where things
went wrong. Sometimes this is harmless (wrong arg to a kernel call),
sometimes it's a fatal internal inconsistency (bogus m_source).
. some process table fields have been appended an _e to indicate it's
become and endpoint.
. process endpoint is stored in p_endpoint, without generation number.
it turns out the kernel never needs the generation number, except
when fork()ing, so it's decoded then.
. kernel calls all take endpoints as arguments, not proc numbers.
the one exception is sys_fork(), which needs to know in which slot
to put the child.
2006-03-03 11:00:02 +01:00
|
|
|
rp->p_endpoint = _ENDPOINT(0, rp->p_nr); /* generation no. 0 */
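        /* An endpoint (see <minix/endpoint.h>) encodes both the slot's proc
         * number and a generation count; every slot starts out at generation
         * 0 here, and the generation changes when a slot is reused, so stale
         * endpoints held by other processes can be detected.
         */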
  }
  for (sp = BEG_PRIV_ADDR, i = 0; sp < END_PRIV_ADDR; ++sp, ++i) {
        sp->s_proc_nr = NONE;                   /* initialize as free */
        sp->s_id = (sys_id_t) i;                /* priv structure index */
        ppriv_addr[i] = sp;                     /* priv ptr from number */
  }
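
  /* A minimal sketch (kept out of the build) of how the mappings set up
   * above are meant to be used: proc_addr() turns a proc number back into
   * its slot, proc_nr() is its inverse, and ppriv_addr[] turns a privilege
   * id back into its priv structure.
   */
#if 0
  {
        struct proc *xp = proc_addr(CLOCK);     /* slot of the CLOCK task */
        assert(proc_nr(xp) == CLOCK);           /* proc_nr() inverts proc_addr() */
        assert(ppriv_addr[0] == BEG_PRIV_ADDR); /* priv id 0 maps to the first entry */
  }
#endif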

  /* Set up proc table entries for processes in boot image.  The stacks of the
   * kernel tasks are initialized to an array in data space.  The stacks
   * of the servers have been added to the data segment by the monitor, so
   * the stack pointer is set to the end of the data segment.  All the
   * processes are in low memory on the 8086.  On the 386 only the kernel
   * is in low memory, the rest is loaded in extended memory.
   */

  /* Task stacks. */
  ktsb = (reg_t) t_stack;

  for (i=0; i < NR_BOOT_PROCS; ++i) {
        int schedulable_proc;
        proc_nr_t proc_nr;
        int ipc_to_m, kcalls;

        ip = &image[i];                         /* process' attributes */
        DEBUGMAX(("initializing %s... ", ip->proc_name));
        rp = proc_addr(ip->proc_nr);            /* get process pointer */
        ip->endpoint = rp->p_endpoint;          /* ipc endpoint */
        rp->p_scheduler = NULL;                 /* no user space scheduler */
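        /* A NULL p_scheduler means the kernel itself renews this process'
         * quantum and requeues it when it runs out; a user-space scheduler
         * can later take over that role (via sys_schedctl()), after which
         * out-of-quantum events are forwarded to it instead.
         */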
        rp->p_priority = ip->priority;          /* current priority */
        rp->p_quantum_size = ip->quantum;       /* quantum size in ticks */
        rp->p_ticks_left = ip->quantum;         /* current credit */
        strncpy(rp->p_name, ip->proc_name, P_NAME_LEN); /* set process name */

        /* See if this process is immediately schedulable.
         * In that case, set its privileges now and allow it to run.
         * Only kernel tasks and the root system process get to run immediately.
         * All the other system processes are inhibited from running by the
         * RTS_NO_PRIV flag. They can only be scheduled once the root system
         * process has set their privileges.
         */
        proc_nr = proc_nr(rp);
        schedulable_proc = (iskerneln(proc_nr) || isrootsysn(proc_nr));
        if(schedulable_proc) {
            /* Assign privilege structure. Force a static privilege id. */
            (void) get_priv(rp, static_priv_id(proc_nr));

            /* Privileges for kernel tasks. */
            if(iskerneln(proc_nr)) {
                /* Privilege flags. */
                priv(rp)->s_flags = (proc_nr == IDLE ? IDL_F : TSK_F);
                /* Allowed traps. */
                priv(rp)->s_trap_mask = (proc_nr == CLOCK
                    || proc_nr == SYSTEM ? CSK_T : TSK_T);
                ipc_to_m = TSK_M;               /* allowed targets */
                kcalls = TSK_KC;                /* allowed kernel calls */
            }
            /* Privileges for the root system process. */
            else if(isrootsysn(proc_nr)) {
                priv(rp)->s_flags = RSYS_F;     /* privilege flags */
                priv(rp)->s_trap_mask = RSYS_T; /* allowed traps */
                ipc_to_m = RSYS_M;              /* allowed targets */
                kcalls = RSYS_KC;               /* allowed kernel calls */
                priv(rp)->s_sig_mgr = RSYS_SM;  /* signal manager */
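                /* Signals for a process are forwarded to the signal manager
                 * recorded in its privilege slot; by default that is RS for
                 * system processes and PM for user processes.
                 */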
            }
            /* Privileges for an ordinary process. */
            else {
                NOT_REACHABLE;
            }

            /* Fill in target mask. */
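            /* Bit j of ipc_to_m corresponds to the system process with
             * privilege id j; a set bit means this process may send
             * messages to that target.
             */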
            for (j=0; j < NR_SYS_PROCS; j++) {
                if (ipc_to_m & (1 << j))
                    set_sendto_bit(rp, j);
                else
                    unset_sendto_bit(rp, j);
            }

            /* Fill in kernel call mask. */
            for(j = 0; j < SYS_CALL_MASK_SIZE; j++) {
                priv(rp)->s_k_call_mask[j] = (kcalls == NO_C ? 0 : (~0));
            }
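            /* The mask just filled in is all-or-nothing: these processes get
             * either every kernel call or none.  Finer-grained masks for the
             * other system processes are installed later, when the root
             * system process sets their privileges.
             */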
        }
        else {
            /* Don't let the process run for now. */
            RTS_SET(rp, RTS_NO_PRIV);
        }

        if (iskerneln(proc_nr)) {               /* part of the kernel? */
                if (ip->stksize > 0) {          /* HARDWARE stack size is 0 */
                        rp->p_priv->s_stack_guard = (reg_t *) ktsb;
                        *rp->p_priv->s_stack_guard = STACK_GUARD;
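                        /* The guard word sits at the low end of the task's
                         * stack; if it is ever overwritten, a later sanity
                         * check can tell that the task overran its stack.
                         */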
                }
                ktsb += ip->stksize;    /* point to high end of stack */
                rp->p_reg.sp = ktsb;    /* this task's initial stack ptr */
                hdrindex = 0;           /* all use the first a.out header */
        } else {
                hdrindex = 1 + i-NR_TASKS;      /* system/user processes */
        }

        /* Architecture-specific way to find the a.out header of this
         * boot process.
         */
        arch_get_aout_headers(hdrindex, &e_hdr);

        /* Convert addresses to clicks and build process memory map */
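        /* A click is the unit in which the kernel measures memory here;
         * CLICK_CEIL rounds a byte count up to a whole number of clicks
         * before the shift turns it into a click count.  Note that the code
         * below treats the a_syms field as the physical load address of the
         * image component rather than as a symbol table size.
         */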
        text_base = e_hdr.a_syms >> CLICK_SHIFT;
        text_clicks = (vir_clicks) (CLICK_CEIL(e_hdr.a_text) >> CLICK_SHIFT);
        data_clicks = (vir_clicks) (CLICK_CEIL(e_hdr.a_data
                + e_hdr.a_bss) >> CLICK_SHIFT);
        st_clicks = (vir_clicks) (CLICK_CEIL(e_hdr.a_total) >> CLICK_SHIFT);
        if (!(e_hdr.a_flags & A_SEP))
        {
                data_clicks = (vir_clicks) (CLICK_CEIL(e_hdr.a_text +
                        e_hdr.a_data + e_hdr.a_bss) >> CLICK_SHIFT);
                text_clicks = 0;        /* common I&D */
        }
        rp->p_memmap[T].mem_phys = text_base;
        rp->p_memmap[T].mem_len  = text_clicks;
        rp->p_memmap[D].mem_phys = text_base + text_clicks;
        rp->p_memmap[D].mem_len  = data_clicks;
        rp->p_memmap[S].mem_phys = text_base + text_clicks + st_clicks;
        rp->p_memmap[S].mem_vir  = st_clicks;
        rp->p_memmap[S].mem_len  = 0;

        /* Set initial register values.  The processor status word for tasks
         * is different from that of other processes because tasks can
         * access I/O; this is not allowed to less-privileged processes
         */
        rp->p_reg.pc = (reg_t) ip->initial_pc;
        rp->p_reg.psw = (iskerneln(proc_nr)) ? INIT_TASK_PSW : INIT_PSW;

        /* Initialize the server stack pointer. Take it down one word
         * to give crtso.s something to use as "argc".
         */
        if (isusern(proc_nr)) {         /* user-space process? */
                rp->p_reg.sp = (rp->p_memmap[S].mem_vir +
                                rp->p_memmap[S].mem_len) << CLICK_SHIFT;
                rp->p_reg.sp -= sizeof(reg_t);
        }

        /* scheduling functions depend on proc_ptr pointing somewhere. */
        if(!proc_ptr) proc_ptr = rp;

        /* If this process has its own page table, VM will set the
         * PT up and manage it. VM will signal the kernel when it has
         * done this; until then, don't let it run.
         */
        if(ip->flags & PROC_FULLVM)
                RTS_SET(rp, RTS_VMINHIBIT);

        /* None of the kernel tasks run */
        if (rp->p_nr < 0) RTS_SET(rp, RTS_PROC_STOP);
        RTS_UNSET(rp, RTS_SLOT_FREE); /* remove RTS_SLOT_FREE and schedule */
        alloc_segments(rp);
        DEBUGMAX(("done\n"));
  }

  /* Architecture-dependent initialization. */
  DEBUGMAX(("arch_init()... "));
  arch_init();
  DEBUGMAX(("done\n"));

  /* System and processes initialization */
  DEBUGMAX(("system_init()... "));
  system_init();
  DEBUGMAX(("done\n"));

#if SPROFILE
  sprofiling = 0;       /* we're not profiling until instructed to */
#endif /* SPROFILE */
  cprof_procs_no = 0;   /* init nr of hash table slots used */

  vm_running = 0;
  krandom.random_sources = RANDOM_SOURCES;
  krandom.random_elements = RANDOM_ELEMENTS;

  /* MINIX is now ready. All boot image processes are on the ready queue.
   * Return to the assembly code to start running the current process.
   */
  bill_ptr = proc_addr(IDLE);           /* it has to point somewhere */
  announce();                           /* print MINIX startup banner */

  /*
   * enable timer interrupts and clock task on the boot CPU
   */
  if (boot_cpu_init_timer(system_hz)) {
          panic( "FATAL : failed to initialize timer interrupts; "
                          "cannot continue without any clock source!");
  }

/* Warnings for sanity checks that take time. These warnings are printed
 * to make it clear that no full release should be done with them enabled.
 */
#if DEBUG_PROC_CHECK
  FIXME("PROC check enabled");
#endif

  DEBUGMAX(("cycles_accounting_init()... "));
  cycles_accounting_init();
  DEBUGMAX(("done\n"));

  assert(runqueues_ok());

  restart();
  NOT_REACHABLE;
}

/*===========================================================================*
 *                              announce                                     *
 *===========================================================================*/
PRIVATE void announce(void)
{
  /* Display the MINIX startup banner. */
  printf("\nMINIX %s.%s. "
#ifdef _SVN_REVISION
        "(" _SVN_REVISION ")\n"
#endif
      "Copyright 2010, Vrije Universiteit, Amsterdam, The Netherlands\n",
      OS_RELEASE, OS_VERSION);
  printf("MINIX is open source software, see http://www.minix3.org\n");
}

/*===========================================================================*
 *                              prepare_shutdown                             *
 *===========================================================================*/
PUBLIC void prepare_shutdown(const int how)
{
/* This function prepares to shut down MINIX. */
  static timer_t shutdown_timer;

  /* Continue after 1 second, to give processes a chance to get scheduled to
   * do shutdown work.  Set a watchdog timer to call shutdown(). The timer
   * argument passes the shutdown status.
   */
  printf("MINIX will now be shut down ...\n");
  tmr_arg(&shutdown_timer)->ta_int = how;
  set_timer(&shutdown_timer, get_uptime() + system_hz, minix_shutdown);
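  /* system_hz is the clock frequency in ticks per second, so the timer set
   * above expires roughly one second from now; minix_shutdown() then reads
   * the shutdown code back out of the timer argument.
   */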
}

/*===========================================================================*
 *                              shutdown                                     *
 *===========================================================================*/
PUBLIC void minix_shutdown(timer_t *tp)
{
/* This function is called from prepare_shutdown or stop_sequence to bring
 * down MINIX. How to shut down is in the argument: RBT_HALT (return to the
 * monitor), RBT_MONITOR (execute given code), RBT_RESET (hard reset).
 */
  arch_stop_local_timer();
  intr_init(INTS_ORIG, 0);
  arch_shutdown(tp ? tmr_arg(tp)->ta_int : RBT_PANIC);
}