/* This file contains the main program of MINIX as well as its shutdown code.
 * The routine main() initializes the system and starts the ball rolling by
 * setting up the process table, interrupt vectors, and scheduling each task
 * to run to initialize itself.
 * The routine shutdown() does the opposite and brings down MINIX.
 *
 * The entries into this file are:
 *   main:			MINIX main program
 *   prepare_shutdown:		prepare to take MINIX down
 */
#include "kernel.h"
#include <string.h>
#include <unistd.h>
#include <assert.h>
#include <a.out.h>
#include <minix/com.h>
#include <minix/endpoint.h>
#include <minix/u64.h>
#include "proc.h"
#include "debug.h"
#include "clock.h"
#include "hw_intr.h"
#include "arch_proto.h"

#ifdef CONFIG_SMP
#include "smp.h"
#endif

#ifdef USE_WATCHDOG
#include "watchdog.h"
#endif

#include "spinlock.h"

/* dummy for linking */
char *** _penviron;

/* Prototype declarations for PRIVATE functions. */
static void announce(void);

void bsp_finish_booting(void)
{
  int i;
#if SPROFILE
  sprofiling = 0;		/* we're not profiling until instructed to */
#endif /* SPROFILE */
  cprof_procs_no = 0;		/* init nr of hash table slots used */

  cpu_identify();

  vm_running = 0;
  krandom.random_sources = RANDOM_SOURCES;
  krandom.random_elements = RANDOM_ELEMENTS;

  /* MINIX is now ready. All boot image processes are on the ready queue.
   * Return to the assembly code to start running the current process.
   */

  /* it should point somewhere */
  get_cpulocal_var(bill_ptr) = get_cpulocal_var_ptr(idle_proc);
  get_cpulocal_var(proc_ptr) = get_cpulocal_var_ptr(idle_proc);
  announce();			/* print MINIX startup banner */

  /*
   * We have access to the CPU-local run queue only now, so schedule the
   * processes here. We skip the slots for the former kernel tasks.
   */
  for (i=0; i < NR_BOOT_PROCS - NR_TASKS; i++) {
	RTS_UNSET(proc_addr(i), RTS_PROC_STOP);
  }

  /*
   * Enable timer interrupts and the clock task on the boot CPU.
   */
  if (boot_cpu_init_timer(system_hz)) {
	panic("FATAL : failed to initialize timer interrupts, "
		"cannot continue without any clock source!");
  }

  fpu_init();

  /* Warnings for sanity checks that take time. These warnings are printed
   * so it's clear that no full release should be done with them enabled.
   */
#if DEBUG_SCHED_CHECK
  FIXME("DEBUG_SCHED_CHECK enabled");
#endif
#if DEBUG_VMASSERT
  FIXME("DEBUG_VMASSERT enabled");
#endif
#if DEBUG_PROC_CHECK
  FIXME("PROC check enabled");
#endif

  DEBUGEXTRA(("cycles_accounting_init()... "));
  cycles_accounting_init();
  DEBUGEXTRA(("done\n"));

#ifdef CONFIG_SMP
  cpu_set_flag(bsp_cpu_id, CPU_IS_READY);
  machine.processors_count = ncpus;
  machine.bsp_id = bsp_cpu_id;
#else
  machine.processors_count = 1;
  machine.bsp_id = 0;
#endif

  switch_to_user();
  NOT_REACHABLE;
}

/*===========================================================================*
 *				main                                         *
 *===========================================================================*/
int main(void)
{
  /* Start the ball rolling. */
  struct boot_image *ip;	/* boot image pointer */
  register struct proc *rp;	/* process pointer */
  register int i, j;
  size_t argsz;			/* size of arguments passed to crtso on stack */

  BKL_LOCK();
  /* Global value to test segment sanity. */
  magictest = MAGICTEST;

  DEBUGEXTRA(("main()\n"));

  proc_init();

  /* Set up proc table entries for processes in boot image. The stacks
   * of the servers have been added to the data segment by the monitor, so
   * the stack pointer is set to the end of the data segment.
   */
  for (i=0; i < NR_BOOT_PROCS; ++i) {
	int schedulable_proc;
	proc_nr_t proc_nr;
	int ipc_to_m, kcalls;
	sys_map_t map;

	ip = &image[i];				/* process' attributes */
	DEBUGEXTRA(("initializing %s... ", ip->proc_name));
	rp = proc_addr(ip->proc_nr);		/* get process pointer */
	ip->endpoint = rp->p_endpoint;		/* ipc endpoint */
	make_zero64(rp->p_cpu_time_left);
	strncpy(rp->p_name, ip->proc_name, P_NAME_LEN);	/* set process name */

	reset_proc_accounting(rp);

	/* See if this process is immediately schedulable.
	 * In that case, set its privileges now and allow it to run.
	 * Only kernel tasks and the root system process get to run immediately.
	 * All the other system processes are inhibited from running by the
	 * RTS_NO_PRIV flag. They can only be scheduled once the root system
	 * process has set their privileges.
	 */
	proc_nr = proc_nr(rp);
	schedulable_proc = (iskerneln(proc_nr) || isrootsysn(proc_nr));
	if(schedulable_proc) {
	    /* Assign privilege structure. Force a static privilege id. */
	    (void) get_priv(rp, static_priv_id(proc_nr));

	    /* Privileges for kernel tasks. */
	    if(iskerneln(proc_nr)) {
		/* Privilege flags. */
		priv(rp)->s_flags = (proc_nr == IDLE ? IDL_F : TSK_F);
		/* Allowed traps. */
		priv(rp)->s_trap_mask = (proc_nr == CLOCK
			|| proc_nr == SYSTEM ? CSK_T : TSK_T);
		ipc_to_m = TSK_M;			/* allowed targets */
		kcalls = TSK_KC;			/* allowed kernel calls */
	    }
	    /* Privileges for the root system process. */
	    else if(isrootsysn(proc_nr)) {
		priv(rp)->s_flags = RSYS_F;		/* privilege flags */
		priv(rp)->s_trap_mask = SRV_T;		/* allowed traps */
		ipc_to_m = SRV_M;			/* allowed targets */
		kcalls = SRV_KC;			/* allowed kernel calls */
		priv(rp)->s_sig_mgr = SRV_SM;		/* signal manager */
		rp->p_priority = SRV_Q;			/* priority queue */
		rp->p_quantum_size_ms = SRV_QT;		/* quantum size */
	    }
	    /* Privileges for an ordinary process. */
	    else {
		NOT_REACHABLE;
	    }

	    /* Fill in target mask. */
	    memset(&map, 0, sizeof(map));

	    if (ipc_to_m == ALL_M) {
		for(j = 0; j < NR_SYS_PROCS; j++)
			set_sys_bit(map, j);
	    }

	    fill_sendto_mask(rp, &map);
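	    /* Sketch (an assumption about the bitmap layout, not taken from
	     * this file): the sendto map holds one bit per privilege slot,
	     * so granting and later testing an IPC target amounts to:
	     *
	     *     set_sys_bit(map, id);              // allow sending to id
	     *     if (get_sys_bit(map, id)) ...      // may we send to id?
	     *
	     * where 'id' stands for the destination's privilege slot; the
	     * exact helper used for the test may differ.
	     */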

	    /* Fill in kernel call mask. */
	    for(j = 0; j < SYS_CALL_MASK_SIZE; j++) {
		priv(rp)->s_k_call_mask[j] = (kcalls == NO_C ? 0 : (~0));
	    }
	}
	else {
	    /* Don't let the process run for now. */
	    RTS_SET(rp, RTS_NO_PRIV | RTS_NO_QUANTUM);
	}
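	/* Sketch (illustrative; chunk-width name is a placeholder): with the
	 * all-or-nothing policy above, a later permission check reduces to
	 * testing one bit per kernel call number, roughly:
	 *
	 *     int call = call_nr - KERNEL_CALL;
	 *     int ok = priv(caller)->s_k_call_mask[call / CHUNK_BITS]
	 *                  & (1 << (call % CHUNK_BITS));
	 *
	 * For boot-image processes every chunk is either all zeros (NO_C)
	 * or all ones, so every call is denied or allowed uniformly.
	 */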

	rp->p_memmap[T].mem_vir  = ABS2CLICK(ip->memmap.text_vaddr);
	rp->p_memmap[T].mem_phys = ABS2CLICK(ip->memmap.text_paddr);
	rp->p_memmap[T].mem_len  = ABS2CLICK(ip->memmap.text_bytes);
	rp->p_memmap[D].mem_vir  = ABS2CLICK(ip->memmap.data_vaddr);
	rp->p_memmap[D].mem_phys = ABS2CLICK(ip->memmap.data_paddr);
	rp->p_memmap[D].mem_len  = ABS2CLICK(ip->memmap.data_bytes);
	rp->p_memmap[S].mem_phys = ABS2CLICK(ip->memmap.data_paddr +
		ip->memmap.data_bytes + ip->memmap.stack_bytes);
	rp->p_memmap[S].mem_vir  = ABS2CLICK(ip->memmap.data_vaddr +
		ip->memmap.data_bytes + ip->memmap.stack_bytes);
	rp->p_memmap[S].mem_len  = 0;
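	/* Note (an assumption about the usual definition, not confirmed by
	 * this file): ABS2CLICK converts a byte address to click units,
	 * effectively
	 *
	 *     #define ABS2CLICK(a) ((a) >> CLICK_SHIFT)
	 *
	 * so with 4 KiB clicks (CLICK_SHIFT == 12), a data segment at
	 * physical address 0x00200000 becomes mem_phys 0x200. Values here
	 * are illustrative only.
	 */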

	/* Set initial register values. The processor status word for tasks
	 * is different from that of other processes because tasks can
	 * access I/O; this is not allowed to less-privileged processes.
	 */
	rp->p_reg.pc = ip->memmap.entry;
	rp->p_reg.psw = (iskerneln(proc_nr)) ? INIT_TASK_PSW : INIT_PSW;

	/* Initialize the server stack pointer. Take it down three words
	 * to give crtso.s something to use as "argc", "argv" and "envp".
	 */
	if (isusern(proc_nr)) {			/* user-space process? */
		rp->p_reg.sp = (rp->p_memmap[S].mem_vir +
				rp->p_memmap[S].mem_len) << CLICK_SHIFT;
		argsz = 3 * sizeof(reg_t);
		rp->p_reg.sp -= argsz;
		phys_memset(rp->p_reg.sp -
			(rp->p_memmap[S].mem_vir << CLICK_SHIFT) +
			(rp->p_memmap[S].mem_phys << CLICK_SHIFT),
			0, argsz);
	}
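	/* After the adjustment above, the top of the new process' stack
	 * holds three zeroed machine words that crtso will read as argc,
	 * argv and envp (offsets assume a 32-bit reg_t; layout inferred
	 * from the comment above):
	 *
	 *     sp + 0:  0    ("argc")
	 *     sp + 4:  0    ("argv")
	 *     sp + 8:  0    ("envp")
	 */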

	/* Scheduling functions depend on proc_ptr pointing somewhere. */
	if(!get_cpulocal_var(proc_ptr))
		get_cpulocal_var(proc_ptr) = rp;

	/* If this process has its own page table, VM will set the
	 * PT up and manage it. VM will signal the kernel when it has
	 * done this; until then, don't let it run.
	 */
	if(ip->flags & PROC_FULLVM)
		rp->p_rts_flags |= RTS_VMINHIBIT;

	rp->p_rts_flags |= RTS_PROC_STOP;
	rp->p_rts_flags &= ~RTS_SLOT_FREE;
	alloc_segments(rp);
	DEBUGEXTRA(("done\n"));
  }

#define IPCNAME(n) { \
	assert((n) >= 0 && (n) <= IPCNO_HIGHEST); \
	assert(!ipc_call_names[n]); \
	ipc_call_names[n] = #n; \
}

  IPCNAME(SEND);
  IPCNAME(RECEIVE);
  IPCNAME(SENDREC);
  IPCNAME(NOTIFY);
  IPCNAME(SENDNB);
  IPCNAME(SENDA);

  /* Architecture-dependent initialization. */
  DEBUGEXTRA(("arch_init()... "));
  arch_init();
  DEBUGEXTRA(("done\n"));

  /* System and processes initialization. */
  DEBUGEXTRA(("system_init()... "));
  system_init();
  DEBUGEXTRA(("done\n"));

#ifdef CONFIG_SMP
  if (config_no_apic) {
	BOOT_VERBOSE(printf("APIC disabled, disables SMP, using legacy PIC\n"));
	smp_single_cpu_fallback();
  } else if (config_no_smp) {
	BOOT_VERBOSE(printf("SMP disabled, using legacy PIC\n"));
	smp_single_cpu_fallback();
  } else {
	smp_init();
	/*
	 * If smp_init() returns, it failed, and we try to finish
	 * booting on a single CPU instead.
	 */
	bsp_finish_booting();
  }
#else
  /*
   * If configured for a single CPU, we are already on the kernel stack which
   * we are going to use every time we execute kernel code. We finish booting
   * and never return here.
   */
  bsp_finish_booting();
#endif

  NOT_REACHABLE;
  return 1;
}

/*===========================================================================*
 *				announce                                     *
 *===========================================================================*/
static void announce(void)
{
  /* Display the MINIX startup banner. */
  printf("\nMINIX %s.%s. "
#ifdef _VCS_REVISION
	"(" _VCS_REVISION ")\n"
#endif
	"Copyright 2012, Vrije Universiteit, Amsterdam, The Netherlands\n",
	OS_RELEASE, OS_VERSION);
  printf("MINIX is open source software, see http://www.minix3.org\n");
}

/*===========================================================================*
 *				prepare_shutdown                             *
 *===========================================================================*/
void prepare_shutdown(const int how)
{
  /* This function prepares to shut down MINIX. */
  static timer_t shutdown_timer;

  /* Continue after 1 second, to give processes a chance to get scheduled to
   * do shutdown work. Set a watchdog timer to call shutdown(). The timer
   * argument passes the shutdown status.
   */
  printf("MINIX will now be shut down ...\n");
  tmr_arg(&shutdown_timer)->ta_int = how;
  set_timer(&shutdown_timer, get_uptime() + system_hz, minix_shutdown);
}
|
/*===========================================================================*
 *				shutdown				     *
 *===========================================================================*/
void minix_shutdown(timer_t *tp)
{
/* This function is called from prepare_shutdown or stop_sequence to bring
 * down MINIX. How to shut down is in the argument: RBT_HALT (return to the
 * monitor), RBT_MONITOR (execute given code), RBT_RESET (hard reset).
 */
#ifdef CONFIG_SMP
  /*
   * FIXME
   *
   * We will need to stop timers on all CPUs if SMP is enabled, and put them
   * in such a state that the whole boot process can be performed once we are
   * restarted from the monitor again.
   */
  if (ncpus > 1)
	smp_shutdown_aps();
#endif
  hw_intr_disable_all();
  stop_local_timer();
  intr_init(INTS_ORIG, 0);
  arch_shutdown(tp ? tmr_arg(tp)->ta_int : RBT_PANIC);
}