minix/kernel/proc.c

/* This file contains essentially all of the process and message handling.
* Together with "mpx.s" it forms the lowest layer of the MINIX kernel.
* There is one entry point from the outside:
*
* sys_call: a system call, i.e., the kernel is trapped with an INT
*
* Changes:
* Aug 19, 2005 rewrote scheduling code (Jorrit N. Herder)
* Jul 25, 2005 rewrote system call handling (Jorrit N. Herder)
* May 26, 2005 rewrote message passing functions (Jorrit N. Herder)
* May 24, 2005 new notification system call (Jorrit N. Herder)
* Oct 28, 2004 nonblocking send and receive calls (Jorrit N. Herder)
*
* The code here is critical to make everything work and is important for the
* overall performance of the system. A large fraction of the code deals with
* list manipulation. To make this both easy to understand and fast to execute,
* pointer pointers are used throughout the code. Pointer pointers prevent
* exceptions for the head or tail of a linked list.
*
* node_t *queue, *new_node; // assume these as global variables
* node_t **xpp = &queue; // get pointer pointer to head of queue
* while (*xpp != NULL) // find last pointer of the linked list
* xpp = &(*xpp)->next; // get pointer to next pointer
* *xpp = new_node; // now replace the end (the NULL pointer)
* new_node->next = NULL; // and mark the new end of the list
*
* For example, when adding a new node to the end of the list, one normally
* makes an exception for an empty list and looks up the end of the list for
* nonempty lists. As shown above, this is not required with pointer pointers.
*/
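/* An illustrative sketch: the same pointer-pointer technique also unlinks a
 * node without special-casing an empty list or the head. 'queue' and 'node_t'
 * are the hypothetical names from the example above; 'victim' is assumed to
 * point at the node to remove.
 *
 * node_t **xpp = &queue;         // start at the pointer to the head
 * while (*xpp != NULL) {         // walk the chain of next pointers
 *     if (*xpp == victim) {      // found the node to unlink
 *         *xpp = victim->next;   // bypass it; the head needs no special case
 *         break;
 *     }
 *     xpp = &(*xpp)->next;       // advance to the next pointer
 * }
 */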
#include <minix/com.h>
#include <minix/endpoint.h>
#include <stddef.h>
#include <signal.h>
#include <minix/syslib.h>
#include <assert.h>
#include "debug.h"
#include "kernel.h"
#include "proc.h"
#include "vm.h"
/* Scheduling and message passing functions */
FORWARD _PROTOTYPE( void idle, (void));
/**
* Made public for use in clock.c (for user-space scheduling)
FORWARD _PROTOTYPE( int mini_send, (struct proc *caller_ptr, int dst_e,
message *m_ptr, int flags));
*/
FORWARD _PROTOTYPE( int mini_receive, (struct proc *caller_ptr, int src,
message *m_ptr, int flags));
FORWARD _PROTOTYPE( int mini_senda, (struct proc *caller_ptr,
asynmsg_t *table, size_t size));
FORWARD _PROTOTYPE( int deadlock, (int function,
register struct proc *caller, proc_nr_t src_dst));
FORWARD _PROTOTYPE( int try_async, (struct proc *caller_ptr));
FORWARD _PROTOTYPE( int try_one, (struct proc *src_ptr, struct proc *dst_ptr,
int *postponed));
FORWARD _PROTOTYPE( struct proc * pick_proc, (void));
FORWARD _PROTOTYPE( void enqueue_head, (struct proc *rp));
#define PICK_ANY 1
#define PICK_HIGHERONLY 2
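/* Build the notification message for dst_ptr in place: the message type
 * encodes the notifying source (NOTIFY_FROM) and the timestamp is the current
 * uptime. For notifications from HARDWARE or SYSTEM, the destination's pending
 * interrupt or signal bitmap is passed in the argument field and then cleared.
 */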
#define BuildNotifyMessage(m_ptr, src, dst_ptr) \
(m_ptr)->m_type = NOTIFY_FROM(src); \
(m_ptr)->NOTIFY_TIMESTAMP = get_uptime(); \
switch (src) { \
case HARDWARE: \
(m_ptr)->NOTIFY_ARG = priv(dst_ptr)->s_int_pending; \
priv(dst_ptr)->s_int_pending = 0; \
break; \
case SYSTEM: \
(m_ptr)->NOTIFY_ARG = priv(dst_ptr)->s_sig_pending; \
priv(dst_ptr)->s_sig_pending = 0; \
break; \
}
/*===========================================================================*
* idle *
*===========================================================================*/
PRIVATE void idle(void)
{
/* This function is called whenever there is no work to do.
* Halt the CPU, and measure how many timestamp counter ticks are
* spent not doing anything. This allows test setups to measure
* the CPU utilization of certain workloads with high precision.
*/
/* start accounting for the idle time */
cycles_accounting_stop(proc_addr(KERNEL));
halt_cpu();
/*
* The end of accounting for the idle task does not happen here; the kernel
* handles other work for quite a while before it gets back here!
*/
}
/*===========================================================================*
* schedcheck *
*===========================================================================*/
PUBLIC struct proc * schedcheck(void)
{
/* This function is called an instant before proc_ptr is
* to be scheduled again.
*/
/*
* If the current process is still runnable, check the misc flags and let
* it run unless it becomes unrunnable in the meantime.
*/
if (proc_is_runnable(proc_ptr))
goto check_misc_flags;
/*
* If a process becomes unrunnable while handling the misc flags, we
* need to pick a new one here and start from scratch. Also, if the
* current process wasn't runnable, we pick a new one here.
*/
not_runnable_pick_new:
if (proc_is_preempted(proc_ptr)) {
proc_ptr->p_rts_flags &= ~RTS_PREEMPTED;
if (proc_is_runnable(proc_ptr))
enqueue_head(proc_ptr);
}
/*
* If this process is scheduled by the kernel, we renew its quantum
* and remove its RTS_NO_QUANTUM flag.
*/
if (proc_no_quantum(proc_ptr) && (proc_ptr->p_scheduler == NULL)) {
/* give new quantum */
proc_ptr->p_ticks_left = proc_ptr->p_quantum_size;
RTS_UNSET(proc_ptr, RTS_NO_QUANTUM);
}
/*
* If we have no process to run, set IDLE as the current process for
* time accounting and put the CPU in an idle state. After the next
* timer interrupt, execution resumes here and we can pick another
* process. If there is still nothing runnable, we "schedule" IDLE again.
*/
while (!(proc_ptr = pick_proc())) {
proc_ptr = proc_addr(IDLE);
if (priv(proc_ptr)->s_flags & BILLABLE)
bill_ptr = proc_ptr;
idle();
}
switch_address_space(proc_ptr);
check_misc_flags:
assert(proc_ptr);
assert(proc_is_runnable(proc_ptr));
assert(proc_ptr->p_ticks_left > 0);
while (proc_ptr->p_misc_flags &
(MF_KCALL_RESUME | MF_DELIVERMSG |
MF_SC_DEFER | MF_SC_TRACE | MF_SC_ACTIVE)) {
assert(proc_is_runnable(proc_ptr));
if (proc_ptr->p_misc_flags & MF_KCALL_RESUME) {
kernel_call_resume(proc_ptr);
}
else if (proc_ptr->p_misc_flags & MF_DELIVERMSG) {
TRACE(VF_SCHEDULING, printf("delivering to %s / %d\n",
proc_ptr->p_name, proc_ptr->p_endpoint););
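/* A message is still pending for this process; try to copy it into the
 * process' address space now. Delivery may be suspended (VMSUSPEND) if the
 * destination memory is not currently available, leaving the process
 * unrunnable for now.
 */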
if(delivermsg(proc_ptr) == VMSUSPEND) {
TRACE(VF_SCHEDULING,
printf("suspending %s / %d\n",
proc_ptr->p_name,
proc_ptr->p_endpoint););
assert(!proc_is_runnable(proc_ptr));
}
}
else if (proc_ptr->p_misc_flags & MF_SC_DEFER) {
/* Perform the system call that we deferred earlier. */
assert (!(proc_ptr->p_misc_flags & MF_SC_ACTIVE));
arch_do_syscall(proc_ptr);
/* If the process is stopped for signal delivery, and
* not blocked sending a message after the system call,
* inform PM.
*/
if ((proc_ptr->p_misc_flags & MF_SIG_DELAY) &&
!RTS_ISSET(proc_ptr, RTS_SENDING))
sig_delay_done(proc_ptr);
}
else if (proc_ptr->p_misc_flags & MF_SC_TRACE) {
/* Trigger a system call leave event if this was a
* system call. We must do this after processing the
* other flags above, both for tracing correctness and
* to be able to use 'break'.
*/
if (!(proc_ptr->p_misc_flags & MF_SC_ACTIVE))
break;
proc_ptr->p_misc_flags &=
~(MF_SC_TRACE | MF_SC_ACTIVE);
/* Signal the "leave system call" event.
* Block the process.
*/
cause_sig(proc_nr(proc_ptr), SIGTRAP);
}
else if (proc_ptr->p_misc_flags & MF_SC_ACTIVE) {
/* If MF_SC_ACTIVE was set, remove it now:
* we're leaving the system call.
*/
proc_ptr->p_misc_flags &= ~MF_SC_ACTIVE;
break;
}
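/* Handling the misc flags above may have stopped the selected process
 * (for instance through cause_sig()); in that case stop processing its
 * flags here.
 */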
if (!proc_is_runnable(proc_ptr))
break;
}
/*
* After handling the misc flags the selected process might not be
* runnable anymore. We have to check it and schedule another one.
*/
if (!proc_is_runnable(proc_ptr))
goto not_runnable_pick_new;
TRACE(VF_SCHEDULING, printf("starting %s / %d\n",
proc_ptr->p_name, proc_ptr->p_endpoint););
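/* Keep count of how often this process has been scheduled (debug only). */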
#if DEBUG_TRACE
proc_ptr->p_schedules++;
#endif
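/* Let the architecture-specific code finish the schedule check and use
 * the process pointer it returns.
 */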
proc_ptr = arch_finish_schedcheck();
assert(proc_ptr->p_ticks_left > 0);
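/* We are about to hand the CPU to the selected process; stop charging
 * CPU cycles to the kernel.
 */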
cycles_accounting_stop(proc_addr(KERNEL));
return proc_ptr;
}
/*===========================================================================*
* sys_call *
*===========================================================================*/
PUBLIC int do_ipc(call_nr, src_dst_e, m_ptr, bit_map)
int call_nr; /* system call number and flags */
endpoint_t src_dst_e; /* src to receive from or dst to send to */
message *m_ptr; /* pointer to message in the caller's space */
long bit_map; /* notification event set or flags */
{
/* System calls are done by trapping to the kernel with an INT instruction.
* The trap is caught and sys_call() is called to send or receive a message
* (or both). The caller is always given by 'proc_ptr'.
*/
struct proc *const caller_ptr = proc_ptr; /* get pointer to caller */
int result; /* the system call's result */
int src_dst_p; /* Process slot number */
size_t msg_size;
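/* The caller must occupy a live process slot. */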
assert(!RTS_ISSET(caller_ptr, RTS_SLOT_FREE));
/* If this process is subject to system call tracing, handle that first. */
if (caller_ptr->p_misc_flags & (MF_SC_TRACE | MF_SC_DEFER)) {
/* Are we tracing this process, and is it the first sys_call entry? */
if ((caller_ptr->p_misc_flags & (MF_SC_TRACE | MF_SC_DEFER)) ==
MF_SC_TRACE) {
/* We must notify the tracer before processing the actual
* system call. If we don't, the tracer would not be able to obtain
* the input message. Postpone the entire system call.
*/
caller_ptr->p_misc_flags &= ~MF_SC_TRACE;
caller_ptr->p_misc_flags |= MF_SC_DEFER;
/* Signal the "enter system call" event. Block the process. */
cause_sig(proc_nr(caller_ptr), SIGTRAP);
/* Preserve the return register's value. */
return caller_ptr->p_reg.retreg;
}
/* If the MF_SC_DEFER flag is set, the syscall is now being resumed. */
caller_ptr->p_misc_flags &= ~MF_SC_DEFER;
assert (!(caller_ptr->p_misc_flags & MF_SC_ACTIVE));
/* Set a flag to allow reliable tracing of leaving the system call. */
caller_ptr->p_misc_flags |= MF_SC_ACTIVE;
}
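/* A caller that still has a message pending for delivery indicates an
 * inconsistency in the kernel; this should never happen.
 */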
if(caller_ptr->p_misc_flags & MF_DELIVERMSG) {
panic("sys_call: MF_DELIVERMSG on for %s / %d\n",
caller_ptr->p_name, caller_ptr->p_endpoint);
}
/* Clear IPC status code. */
IPC_STATUS_CLEAR(caller_ptr);
/* Check destination. SENDA is special because its argument is a table and
* not a single destination. RECEIVE is the only call that accepts ANY (in
* addition to a real endpoint). The other calls (SEND, SENDREC,
 * and NOTIFY) require an endpoint that corresponds to a process. In addition,
* it is necessary to check whether a process is allowed to send to a given
* destination.
*/
if (call_nr == SENDA)
{
/* No destination argument */
}
else if (src_dst_e == ANY)
{
if (call_nr != RECEIVE)
{
#if 0
printf("sys_call: trap %d by %d with bad endpoint %d\n",
call_nr, proc_nr(caller_ptr), src_dst_e);
#endif
return EINVAL;
}
src_dst_p = src_dst_e;
}
else
{
/* Require a valid source and/or destination process. */
if(!isokendpt(src_dst_e, &src_dst_p)) {
#if 0
printf("sys_call: trap %d by %d with bad endpoint %d\n",
call_nr, proc_nr(caller_ptr), src_dst_e);
#endif
return EDEADSRCDST;
}
/* If the call is to send to a process, i.e., for SEND, SENDNB,
* SENDREC or NOTIFY, verify that the caller is allowed to send to
* the given destination.
*/
if (call_nr != RECEIVE)
{
if (!may_send_to(caller_ptr, src_dst_p)) {
#if DEBUG_ENABLE_IPC_WARNINGS
printf(
"sys_call: ipc mask denied trap %d from %d to %d\n",
call_nr, caller_ptr->p_endpoint, src_dst_e);
#endif
return(ECALLDENIED); /* call denied by ipc mask */
}
}
}
/* Only allow non-negative call_nr values less than 32 */
if (call_nr < 0 || call_nr >= 32)
{
#if DEBUG_ENABLE_IPC_WARNINGS
printf("sys_call: trap %d not allowed, caller %d, src_dst %d\n",
call_nr, proc_nr(caller_ptr), src_dst_p);
#endif
return(ETRAPDENIED); /* trap denied by mask or kernel */
}
/* Check if the process has privileges for the requested call. Calls to the
 * kernel may only be SENDREC, because tasks always reply and must not
 * block if the caller fails to do a matching receive().
*/
if (!(priv(caller_ptr)->s_trap_mask & (1 << call_nr))) {
#if DEBUG_ENABLE_IPC_WARNINGS
printf("sys_call: trap %d not allowed, caller %d, src_dst %d\n",
call_nr, proc_nr(caller_ptr), src_dst_p);
#endif
return(ETRAPDENIED); /* trap denied by mask or kernel */
}
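  /* Illustrative example only (not a statement of actual kernel policy): a
   * privilege structure whose s_trap_mask contains just the SENDREC bit
   * would cause every other trap number to be rejected at this point with
   * ETRAPDENIED.
   */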
/* SENDA has no src_dst value here, so this check is in mini_senda() as well.
*/
if (call_nr != SENDREC && call_nr != RECEIVE && call_nr != SENDA &&
iskerneln(src_dst_p)) {
#if DEBUG_ENABLE_IPC_WARNINGS
printf("sys_call: trap %d not allowed, caller %d, src_dst %d\n",
call_nr, proc_nr(caller_ptr), src_dst_e);
#endif
return(ETRAPDENIED); /* trap denied by mask or kernel */
}
/* Get and check the size of the argument in bytes.
* Normally this is just the size of a regular message, but in the
* case of SENDA the argument is a table.
*/
if(call_nr == SENDA) {
msg_size = (size_t) src_dst_e;
/* Limit size to something reasonable. An arbitrary choice is 16
* times the number of process table entries.
*/
if (msg_size > 16*(NR_TASKS + NR_PROCS))
return EDOM;
msg_size *= sizeof(asynmsg_t); /* convert to bytes */
} else {
msg_size = sizeof(*m_ptr);
}
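  /* Illustrative arithmetic only: on a hypothetical configuration where
   * NR_TASKS + NR_PROCS == 100, the limit above allows at most 1600 table
   * entries; msg_size is then the entry count times sizeof(asynmsg_t), and a
   * larger table is rejected with EDOM before anything is copied.
   */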
/* Now check if the call is known and try to perform the request. The only
* system calls that exist in MINIX are sending and receiving messages.
* - SENDREC: combines SEND and RECEIVE in a single system call
* - SEND: sender blocks until its message has been delivered
* - RECEIVE: receiver blocks until an acceptable message has arrived
* - NOTIFY: asynchronous call; deliver notification or mark pending
* - SENDA: list of asynchronous send requests
*/
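  /* A minimal usage sketch, seen from a caller, assuming the conventional
   * MINIX library wrappers (sendrec(), notify()); names such as MY_REQUEST,
   * server_ep and driver_ep are hypothetical and the exact prototypes may
   * differ per library version:
   *
   *	message m;
   *	m.m_type = MY_REQUEST;		// fill in a request for a server
   *	sendrec(server_ep, &m);		// SENDREC: block until the reply is in m
   *	notify(driver_ep);		// NOTIFY: asynchronous, never blocks
   */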
switch(call_nr) {
case SENDREC:
/* A flag is set so that notifications cannot interrupt SENDREC. */
caller_ptr->p_misc_flags |= MF_REPLY_PEND;
/* fall through */
case SEND:
result = mini_send(caller_ptr, src_dst_e, m_ptr, 0);
if (call_nr == SEND || result != OK)
break; /* done, or SEND failed */
/* fall through for SENDREC */
case RECEIVE:
if (call_nr == RECEIVE)
caller_ptr->p_misc_flags &= ~MF_REPLY_PEND;
result = mini_receive(caller_ptr, src_dst_e, m_ptr, 0);
break;
case NOTIFY:
result = mini_notify(caller_ptr, src_dst_e);
break;
case SENDNB:
result = mini_send(caller_ptr, src_dst_e, m_ptr, NON_BLOCKING);
break;
case SENDA:
result = mini_senda(caller_ptr, (asynmsg_t *)m_ptr, (size_t)src_dst_e);
break;
default:
result = EBADCALL; /* illegal system call */
}
/* Now, return the result of the system call to the caller. */
return(result);
}
/*===========================================================================*
* deadlock *
*===========================================================================*/
PRIVATE int deadlock(function, cp, src_dst)
int function; /* trap number */
register struct proc *cp; /* pointer to caller */
proc_nr_t src_dst; /* src or dst process */
{
/* Check for deadlock. This can happen if 'caller_ptr' and 'src_dst' have
* a cyclic dependency of blocking send and receive calls. The only cyclic
 * dependency that is not fatal is when the caller and target directly SEND(REC)
* and RECEIVE to each other. If a deadlock is found, the group size is
* returned. Otherwise zero is returned.
*/
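  /* Illustrative example: if A blocks on SEND to B, B blocks on SEND to C and
   * C blocks on SEND to A, following the P_BLOCKEDON chain from A leads back
   * to A with group_size == 3, so a deadlock is reported. The only benign
   * cycle is a group of two in which a SEND(REC) meets a matching RECEIVE.
   */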
register struct proc *xp; /* process pointer */
int group_size = 1; /* start with only caller */
#if DEBUG_ENABLE_IPC_WARNINGS
static struct proc *processes[NR_PROCS + NR_TASKS];
processes[0] = cp;
#endif
  while (src_dst != ANY) {		/* follow the chain while src_dst is a process nr */
endpoint_t dep;
xp = proc_addr(src_dst); /* follow chain of processes */
#if DEBUG_ENABLE_IPC_WARNINGS
processes[group_size] = xp;
#endif
group_size ++; /* extra process in group */
/* Check whether the last process in the chain has a dependency. If it
* has not, the cycle cannot be closed and we are done.
*/
if((dep = P_BLOCKEDON(xp)) == NONE)
return 0;
if(dep == ANY)
src_dst = ANY;
else
okendpt(dep, &src_dst);
/* Now check if there is a cyclic dependency. For group sizes of two,
* a combination of SEND(REC) and RECEIVE is not fatal. Larger groups
* or other combinations indicate a deadlock.
*/
if (src_dst == proc_nr(cp)) { /* possible deadlock */
if (group_size == 2) { /* caller and src_dst */
	  /* The trap number is shifted so that it maps onto the RTS_SENDING
	   * flag bit. The XOR below then has RTS_SENDING set only when exactly
	   * one of the two parties is sending, i.e. a SEND(REC) meeting a
	   * matching RECEIVE, which is not a deadlock.
	   */
if ((xp->p_rts_flags ^ (function << 2)) & RTS_SENDING) {
return(0); /* not a deadlock */
}
}
#if DEBUG_ENABLE_IPC_WARNINGS
{
int i;
printf("deadlock between these processes:\n");
for(i = 0; i < group_size; i++) {
printf(" %10s ", processes[i]->p_name);
proc_stacktrace(processes[i]);
}
}
#endif
return(group_size); /* deadlock found */
}
}
return(0); /* not a deadlock */
}
/*===========================================================================*
* mini_send *
*===========================================================================*/
PUBLIC int mini_send(caller_ptr, dst_e, m_ptr, flags)
register struct proc *caller_ptr; /* who is trying to send a message? */
int dst_e; /* to whom is message being sent? */
message *m_ptr; /* pointer to message buffer */
const int flags;
{
/* Send a message from 'caller_ptr' to 'dst'. If 'dst' is blocked waiting
* for this message, copy the message to it and unblock 'dst'. If 'dst' is
* not waiting at all, or is waiting for another source, queue 'caller_ptr'.
*/
register struct proc *dst_ptr;
register struct proc **xpp;
int dst_p;
dst_p = _ENDPOINT_P(dst_e);
dst_ptr = proc_addr(dst_p);
if (RTS_ISSET(dst_ptr, RTS_NO_ENDPOINT))
{
return EDEADSRCDST;
}
/* Check if 'dst' is blocked waiting for this message. The destination's
* RTS_SENDING flag may be set when its SENDREC call blocked while sending.
*/
if (WILLRECEIVE(dst_ptr, caller_ptr->p_endpoint)) {
int call;
/* Destination is indeed waiting for this message. */
assert(!(dst_ptr->p_misc_flags & MF_DELIVERMSG));
if (!(flags & FROM_KERNEL)) {
if(copy_msg_from_user(caller_ptr, m_ptr, &dst_ptr->p_delivermsg))
return EFAULT;
} else {
dst_ptr->p_delivermsg = *m_ptr;
IPC_STATUS_ADD(dst_ptr,
IPC_STATUS_FLAGS(IPC_FLG_MSG_FROM_KERNEL));
}
dst_ptr->p_delivermsg.m_source = caller_ptr->p_endpoint;
dst_ptr->p_misc_flags |= MF_DELIVERMSG;
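	/* Record which send primitive delivered the message, so that the
	 * receiver's IPC status word reflects SENDREC, SENDNB or plain SEND.
	 */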
call = (caller_ptr->p_misc_flags & MF_REPLY_PEND ? SENDREC
: (flags & NON_BLOCKING ? SENDNB : SEND));
IPC_STATUS_ADD(dst_ptr, IPC_STATUS_CALL_TO(call));
RTS_UNSET(dst_ptr, RTS_RECEIVING);
} else {
if(flags & NON_BLOCKING) {
return(ENOTREADY);
}
/* Check for a possible deadlock before actually blocking. */
if (deadlock(SEND, caller_ptr, dst_p)) {
return(ELOCKED);
}
/* Destination is not waiting. Block and dequeue caller. */
if (!(flags & FROM_KERNEL)) {
if(copy_msg_from_user(caller_ptr, m_ptr, &caller_ptr->p_sendmsg))
return EFAULT;
} else {
caller_ptr->p_sendmsg = *m_ptr;
/*
	 * We need to remember that this message came from the kernel, so that
	 * the delivery status flags can be set when the message is actually
	 * delivered.
*/
caller_ptr->p_misc_flags |= MF_SENDING_FROM_KERNEL;
}
RTS_SET(caller_ptr, RTS_SENDING);
caller_ptr->p_sendto_e = dst_e;
/* Process is now blocked. Put it on the destination's queue. */
xpp = &dst_ptr->p_caller_q; /* find end of list */
while (*xpp) xpp = &(*xpp)->p_q_link;
*xpp = caller_ptr; /* add caller to end */
caller_ptr->p_q_link = NULL; /* mark new end of list */
}
return(OK);
}
/*===========================================================================*
* mini_receive *
*===========================================================================*/
PRIVATE int mini_receive(caller_ptr, src_e, m_ptr, flags)
register struct proc *caller_ptr; /* process trying to get message */
int src_e; /* which message source is wanted */
message *m_ptr; /* pointer to message buffer */
const int flags;
{
/* A process or task wants to get a message. If a message is already queued,
* acquire it and deblock the sender. If no message from the desired source
* is available, block the caller.
*/
register struct proc **xpp;
sys_map_t *map;
bitchunk_t *chunk;
int i, r, src_id, src_proc_nr, src_p;
phys_bytes linaddr;
assert(!(caller_ptr->p_misc_flags & MF_DELIVERMSG));
if(!(linaddr = umap_local(caller_ptr, D, (vir_bytes) m_ptr,
sizeof(message)))) {
return EFAULT;
}
/* This is where we want our message. */
caller_ptr->p_delivermsg_lin = linaddr;
caller_ptr->p_delivermsg_vir = (vir_bytes) m_ptr;
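/* Delivery is deferred rather than performed here: the code below only fills
 * in caller_ptr->p_delivermsg and raises MF_DELIVERMSG; the actual copy into
 * the address recorded above happens later, once the receiver's address
 * space is known to be mapped in (when it is next scheduled). A rough sketch
 * of that later step, using a hypothetical helper name:
 *
 *	if (rp->p_misc_flags & MF_DELIVERMSG) {
 *		copy_msg_to_user(rp, &rp->p_delivermsg,
 *			(message *) rp->p_delivermsg_vir);
 *		rp->p_misc_flags &= ~MF_DELIVERMSG;
 *	}
 */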
if(src_e == ANY) src_p = ANY;
else
{
okendpt(src_e, &src_p);
if (RTS_ISSET(proc_addr(src_p), RTS_NO_ENDPOINT))
{
return EDEADSRCDST;
}
}
/* Check to see if a message from desired source is already available. The
* caller's RTS_SENDING flag may be set if SENDREC couldn't send. If it is
* set, the process should be blocked.
*/
if (!RTS_ISSET(caller_ptr, RTS_SENDING)) {
/* Check if there are pending notifications, except for SENDREC. */
if (! (caller_ptr->p_misc_flags & MF_REPLY_PEND)) {
map = &priv(caller_ptr)->s_notify_pending;
for (chunk=&map->chunk[0]; chunk<&map->chunk[NR_SYS_CHUNKS]; chunk++) {
endpoint_t hisep;
/* Find a pending notification from the requested source. */
if (! *chunk) continue; /* no bits in chunk */
for (i=0; ! (*chunk & (1<<i)); ++i) {} /* look up the bit */
src_id = (chunk - &map->chunk[0]) * BITCHUNK_BITS + i;
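/* Example of the arithmetic above: with BITCHUNK_BITS at a typical value of
 * 32, a set bit found at position i=5 in chunk index 1 yields
 * src_id = 1 * 32 + 5 = 37, the privilege id of the pending sender.
 */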
if (src_id >= NR_SYS_PROCS) break; /* out of range */
src_proc_nr = id_to_nr(src_id); /* get source proc */
#if DEBUG_ENABLE_IPC_WARNINGS
if(src_proc_nr == NONE) {
printf("mini_receive: sending notify from NONE\n");
}
#endif
if (src_e!=ANY && src_p != src_proc_nr) continue; /* source not ok */
*chunk &= ~(1 << i); /* no longer pending */
/* Found a suitable source, deliver the notification message. */
hisep = proc_addr(src_proc_nr)->p_endpoint;
assert(!(caller_ptr->p_misc_flags & MF_DELIVERMSG));
assert(src_e == ANY || hisep == src_e);
/* assemble message */
BuildNotifyMessage(&caller_ptr->p_delivermsg, src_proc_nr, caller_ptr);
caller_ptr->p_delivermsg.m_source = hisep;
caller_ptr->p_misc_flags |= MF_DELIVERMSG;
IPC_STATUS_ADD(caller_ptr, IPC_STATUS_CALL_TO(NOTIFY));
return(OK);
}
}
/* Check if there are pending asynchronous messages (senda()). */
if (caller_ptr->p_misc_flags & MF_ASYNMSG)
{
if (src_e != ANY)
r= try_one(proc_addr(src_p), caller_ptr, NULL);
else
r= try_async(caller_ptr);
if (r == OK) {
IPC_STATUS_ADD(caller_ptr, IPC_STATUS_CALL_TO(SENDA));
return OK; /* Got a message */
}
}
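/* For a specific source, only that sender's asynchronous message table is
 * examined (try_one); for ANY, all registered tables are scanned
 * (try_async). On success both presumably deliver through the same
 * p_delivermsg / MF_DELIVERMSG mechanism used for notifications above.
 */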
/* Check caller queue. Use pointer pointers to keep code simple. */
xpp = &caller_ptr->p_caller_q;
while (*xpp) {
if (src_e == ANY || src_p == proc_nr(*xpp)) {
int call;
assert(!RTS_ISSET(*xpp, RTS_SLOT_FREE));
assert(!RTS_ISSET(*xpp, RTS_NO_ENDPOINT));
/* Found acceptable message. Copy it and update status. */
assert(!(caller_ptr->p_misc_flags & MF_DELIVERMSG));
caller_ptr->p_delivermsg = (*xpp)->p_sendmsg;
caller_ptr->p_delivermsg.m_source = (*xpp)->p_endpoint;
caller_ptr->p_misc_flags |= MF_DELIVERMSG;
RTS_UNSET(*xpp, RTS_SENDING);
call = ((*xpp)->p_misc_flags & MF_REPLY_PEND ? SENDREC : SEND);
IPC_STATUS_ADD(caller_ptr, IPC_STATUS_CALL_TO(call));
/*
* if the message is originally from the kernel on behalf of this
* process, we must send the status flags accordingly
*/
if ((*xpp)->p_misc_flags & MF_SENDING_FROM_KERNEL) {
IPC_STATUS_ADD(caller_ptr,
IPC_STATUS_FLAGS(IPC_FLG_MSG_FROM_KERNEL));
/* we can clear the flag now, it is no longer needed */
(*xpp)->p_misc_flags &= ~MF_SENDING_FROM_KERNEL;
}
if ((*xpp)->p_misc_flags & MF_SIG_DELAY)
sig_delay_done(*xpp);
*xpp = (*xpp)->p_q_link; /* remove from queue */
return(OK); /* report success */
}
xpp = &(*xpp)->p_q_link; /* proceed to next */
}
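/* The scan above uses the pointer-pointer idiom to unlink a sender without
 * special-casing the head of the queue: xpp holds the address of the link
 * that points at the current node, so *xpp = (*xpp)->p_q_link splices the
 * node out wherever it sits. A generic sketch of the same removal pattern:
 *
 *	node_t **npp = &queue;
 *	while (*npp != NULL) {
 *		if (wanted(*npp)) { *npp = (*npp)->next; break; }
 *		npp = &(*npp)->next;
 *	}
 */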
}
/* No suitable message is available or the caller couldn't send in SENDREC.
* Block the process trying to receive, unless the flags tell otherwise.
*/
if ( ! (flags & NON_BLOCKING)) {
/* Check for a possible deadlock before actually blocking. */
if (deadlock(RECEIVE, caller_ptr, src_p)) {
return(ELOCKED);
}
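/* deadlock() presumably follows the chain of processes this caller would
 * end up blocked on and reports a cycle; returning ELOCKED here refuses the
 * receive instead of letting the system wedge on a circular wait.
 */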
caller_ptr->p_getfrom_e = src_e;
RTS_SET(caller_ptr, RTS_RECEIVING);
return(OK);
} else {
return(ENOTREADY);
2005-04-21 16:53:53 +02:00
}
}
/*===========================================================================*
* mini_notify *
*===========================================================================*/
PUBLIC int mini_notify(
const struct proc *caller_ptr, /* sender of the notification */
endpoint_t dst_e /* which process to notify */
)
{
register struct proc *dst_ptr;
int src_id; /* source id for late delivery */
int dst_p;
if (!isokendpt(dst_e, &dst_p)) {
util_stacktrace();
printf("mini_notify: bogus endpoint %d\n", dst_e);
return EDEADSRCDST;
}
dst_ptr = proc_addr(dst_p);
/* Check to see if target is blocked waiting for this message. A process
* can be both sending and receiving during a SENDREC system call.
*/
if (WILLRECEIVE(dst_ptr, caller_ptr->p_endpoint) &&
! (dst_ptr->p_misc_flags & MF_REPLY_PEND)) {
/* Destination is indeed waiting for a message. Assemble a notification
* message and deliver it. Copy from pseudo-source HARDWARE, since the
* message is in the kernel's address space.
*/
assert(!(dst_ptr->p_misc_flags & MF_DELIVERMSG));
BuildNotifyMessage(&dst_ptr->p_delivermsg, proc_nr(caller_ptr), dst_ptr);
dst_ptr->p_delivermsg.m_source = caller_ptr->p_endpoint;
dst_ptr->p_misc_flags |= MF_DELIVERMSG;
IPC_STATUS_ADD(dst_ptr, IPC_STATUS_CALL_TO(NOTIFY));
RTS_UNSET(dst_ptr, RTS_RECEIVING);
return(OK);
}
/* Destination is not ready to receive the notification. Add it to the
* bit map with pending notifications. Note the indirectness: the privilege id
* instead of the process number is used in the pending bit map.
*/
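  /* (One bit per sender privilege id is enough here: the notification
   * carries no payload beyond its source, so it can be rebuilt with
   * BuildNotifyMessage once the destination is ready to receive it.)
   */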
src_id = priv(caller_ptr)->s_id;
set_sys_bit(priv(dst_ptr)->s_notify_pending, src_id);
return(OK);
}
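
/* Helper macros for the asynchronous send code below (mini_senda, try_one).
 * A_RETRIEVE copies a single field of an asynmsg_t table entry from the
 * caller's address space into the local 'tabent' buffer; A_INSERT copies a
 * field of 'tabent' back into the caller's table. Both expect 'caller_ptr',
 * 'table_v' and 'tabent' to be in scope, and on a failed copy print a
 * diagnostic (ASCOMPLAIN) and make the enclosing function return EFAULT.
 */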
#define ASCOMPLAIN(caller, entry, field) \
printf("kernel:%s:%d: asyn failed for %s in %s " \
"(%d/%d, tab 0x%lx)\n",__FILE__,__LINE__, \
field, caller->p_name, entry, priv(caller)->s_asynsize, priv(caller)->s_asyntab)
#define A_RETRIEVE(entry, field) \
if(data_copy(caller_ptr->p_endpoint, \
table_v + (entry)*sizeof(asynmsg_t) + offsetof(struct asynmsg,field),\
KERNEL, (vir_bytes) &tabent.field, \
sizeof(tabent.field)) != OK) {\
ASCOMPLAIN(caller_ptr, entry, #field); \
return EFAULT; \
}
#define A_INSERT(entry, field) \
if(data_copy(KERNEL, (vir_bytes) &tabent.field, \
caller_ptr->p_endpoint, \
table_v + (entry)*sizeof(asynmsg_t) + offsetof(struct asynmsg,field),\
sizeof(tabent.field)) != OK) {\
ASCOMPLAIN(caller_ptr, entry, #field); \
return EFAULT; \
}
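
/* Illustrative sketch of the sender's side (not kernel code): a system
 * process fills an asynmsg_t table and passes it to the kernel in a single
 * SENDA trap. The field names match those read and written by the scan loops
 * below; the senda() library wrapper, the endpoints and the message types
 * are placeholders for this example.
 *
 *	asynmsg_t table[2];
 *
 *	table[0].flags = AMF_VALID;		   entry in use
 *	table[0].dst = fs_e;			   destination endpoint
 *	table[0].msg.m_type = MY_REQUEST;	   payload message
 *
 *	table[1].flags = AMF_VALID | AMF_NOTIFY;   notify sender on completion
 *	table[1].dst = drv_e;
 *	table[1].msg.m_type = MY_OTHER_REQUEST;
 *
 *	senda(table, 2);
 *
 * The kernel records the table's address and size (s_asyntab, s_asynsize);
 * as entries complete it sets AMF_DONE and fills in 'result'.
 */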
/*===========================================================================*
* mini_senda *
*===========================================================================*/
PRIVATE int mini_senda(struct proc *caller_ptr, asynmsg_t *table, size_t size)
{
int i, dst_p, done, do_notify;
unsigned flags;
struct proc *dst_ptr;
struct priv *privp;
asynmsg_t tabent;
const vir_bytes table_v = (vir_bytes) table;
privp= priv(caller_ptr);
if (!(privp->s_flags & SYS_PROC))
{
		printf(
		"mini_senda: warning: caller is not a system process\n");
return EPERM;
}
/* Clear table */
privp->s_asyntab= -1;
privp->s_asynsize= 0;
if (size == 0)
{
/* Nothing to do, just return */
return OK;
}
/* Limit size to something reasonable. An arbitrary choice is 16
* times the number of process table entries.
*
* (this check has been duplicated in sys_call but is left here
* as a sanity check)
*/
if (size > 16*(NR_TASKS + NR_PROCS))
{
return EDOM;
}
/* Scan the table */
do_notify= FALSE;
done= TRUE;
for (i= 0; i<size; i++)
{
/* Read status word */
A_RETRIEVE(i, flags);
flags= tabent.flags;
/* Skip empty entries */
if (flags == 0)
continue;
/* Check for reserved bits in the flags field */
if (flags & ~(AMF_VALID|AMF_DONE|AMF_NOTIFY|AMF_NOREPLY) ||
!(flags & AMF_VALID))
{
return EINVAL;
}
/* Skip entry if AMF_DONE is already set */
if (flags & AMF_DONE)
continue;
/* Get destination */
A_RETRIEVE(i, dst);
if (!isokendpt(tabent.dst, &dst_p))
{
/* Bad destination, report the error */
tabent.result= EDEADSRCDST;
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
do_notify= 1;
continue;
}
if (iskerneln(dst_p))
{
/* Asynchronous sends to the kernel are not allowed */
tabent.result= ECALLDENIED;
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
do_notify= 1;
continue;
}
if (!may_send_to(caller_ptr, dst_p))
{
/* Send denied by IPC mask */
tabent.result= ECALLDENIED;
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
do_notify= 1;
continue;
}
#if 0
printf("mini_senda: entry[%d]: flags 0x%x dst %d/%d\n",
i, tabent.flags, tabent.dst, dst_p);
#endif
dst_ptr = proc_addr(dst_p);
/* RTS_NO_ENDPOINT should be removed */
if (RTS_ISSET(dst_ptr, RTS_NO_ENDPOINT))
{
tabent.result= EDEADSRCDST;
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
do_notify= TRUE;
continue;
}
/* Check if 'dst' is blocked waiting for this message.
* If AMF_NOREPLY is set, do not satisfy the receiving part of
* a SENDREC.
*/
if (WILLRECEIVE(dst_ptr, caller_ptr->p_endpoint) &&
(!(flags & AMF_NOREPLY) ||
!(dst_ptr->p_misc_flags & MF_REPLY_PEND)))
{
/* Destination is indeed waiting for this message. */
/* Copy message from sender. */
if(copy_msg_from_user(caller_ptr, &table[i].msg,
&dst_ptr->p_delivermsg))
tabent.result = EFAULT;
else {
dst_ptr->p_delivermsg.m_source = caller_ptr->p_endpoint;
dst_ptr->p_misc_flags |= MF_DELIVERMSG;
IPC_STATUS_ADD(dst_ptr,
IPC_STATUS_CALL_TO(SENDA));
RTS_UNSET(dst_ptr, RTS_RECEIVING);
tabent.result = OK;
}
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
do_notify= 1;
continue;
}
else
{
/* Should inform receiver that something is pending */
dst_ptr->p_misc_flags |= MF_ASYNMSG;
done= FALSE;
continue;
}
}
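	/* Note: the only action taken when do_notify is set is the diagnostic
	 * printf below; actually notifying the caller about completed entries
	 * is not implemented at this point.
	 */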
if (do_notify)
printf("mini_senda: should notify caller\n");
if (!done)
{
privp->s_asyntab= (vir_bytes)table;
privp->s_asynsize= size;
}
return OK;
}
/*===========================================================================*
* try_async *
*===========================================================================*/
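/* Scan all privilege structures for processes that have an asynchronous
 * message table registered and try to deliver one message from it to
 * 'caller_ptr' (see try_one). If nothing could be delivered and no delivery
 * was postponed, the caller's MF_ASYNMSG flag is cleared and ESRCH returned.
 */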
PRIVATE int try_async(caller_ptr)
struct proc *caller_ptr;
{
int r;
struct priv *privp;
struct proc *src_ptr;
int postponed = FALSE;
/* Try all privilege structures */
for (privp = BEG_PRIV_ADDR; privp < END_PRIV_ADDR; ++privp)
{
if (privp->s_proc_nr == NONE)
continue;
src_ptr= proc_addr(privp->s_proc_nr);
assert(!(caller_ptr->p_misc_flags & MF_DELIVERMSG));
r= try_one(src_ptr, caller_ptr, &postponed);
if (r == OK)
return r;
}
/* Nothing found, clear MF_ASYNMSG unless messages were postponed */
if (postponed == FALSE)
caller_ptr->p_misc_flags &= ~MF_ASYNMSG;
return ESRCH;
}
/*===========================================================================*
* try_one *
*===========================================================================*/
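/* Try to deliver an asynchronous message from the table registered by
 * 'src_ptr' to 'dst_ptr'. The table is scanned for a valid, not yet
 * completed entry whose destination matches dst_ptr's endpoint; '*postponed'
 * tells the caller whether a matching entry could not be delivered yet.
 */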
PRIVATE int try_one(struct proc *src_ptr, struct proc *dst_ptr, int *postponed)
{
int i, done;
unsigned flags;
size_t size;
endpoint_t dst_e;
struct priv *privp;
asynmsg_t tabent;
vir_bytes table_v;
struct proc *caller_ptr;
privp= priv(src_ptr);
/* Basic validity checks */
if (privp->s_id == USER_PRIV_ID) return EAGAIN;
if (privp->s_asynsize == 0) return EAGAIN;
if (!may_send_to(src_ptr, proc_nr(dst_ptr))) return EAGAIN;
size= privp->s_asynsize;
table_v = privp->s_asyntab;
caller_ptr = src_ptr;
dst_e= dst_ptr->p_endpoint;
/* Scan the table */
done= TRUE;
for (i= 0; i<size; i++)
{
/* Read status word */
A_RETRIEVE(i, flags);
flags= tabent.flags;
/* Skip empty entries */
if (flags == 0)
{
continue;
}
/* Check for reserved bits in the flags field */
if (flags & ~(AMF_VALID|AMF_DONE|AMF_NOTIFY|AMF_NOREPLY) ||
!(flags & AMF_VALID))
{
printf("try_one: bad bits in table\n");
privp->s_asynsize= 0;
return EINVAL;
}
/* Skip this entry if AMF_DONE is already set */
if (flags & AMF_DONE)
{
continue;
}
/* Clear done. We are done when all entries are either empty
* or done at the start of the call.
*/
done= FALSE;
/* Get destination */
A_RETRIEVE(i, dst);
if (tabent.dst != dst_e)
{
continue;
}
/* If AMF_NOREPLY is set, do not satisfy the receiving part of
* a SENDREC. Do not unset MF_ASYNMSG later because of this,
* though: this message is still to be delivered later.
*/
if ((flags & AMF_NOREPLY) &&
(dst_ptr->p_misc_flags & MF_REPLY_PEND))
{
if (postponed != NULL)
*postponed = TRUE;
continue;
}
/* Deliver message */
A_RETRIEVE(i, msg);
dst_ptr->p_delivermsg = tabent.msg;
dst_ptr->p_delivermsg.m_source = src_ptr->p_endpoint;
dst_ptr->p_misc_flags |= MF_DELIVERMSG;
tabent.result = OK;
A_INSERT(i, result);
tabent.flags= flags | AMF_DONE;
A_INSERT(i, flags);
if (flags & AMF_NOTIFY)
{
printf("try_one: should notify caller\n");
}
return OK;
}
if (done)
privp->s_asynsize= 0;
return EAGAIN;
}
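/* Illustrative sketch, not part of the original file: how a sender might fill
 * the asynchronous message table that the loop above scans. The entry fields
 * (flags, dst, msg, result) and the AMF_* bits match the code above; the
 * asynmsg_t type name, the senda() call, the MY_REQUEST message type and the
 * dst_one/dst_two endpoints are assumptions used only for illustration.
 *
 *	asynmsg_t table[2];
 *	message m;					/* payload, copied by value */
 *	m.m_type = MY_REQUEST;				/* hypothetical request code */
 *	table[0].dst = dst_one;				/* first destination endpoint */
 *	table[0].msg = m;
 *	table[0].flags = AMF_VALID;			/* entry is in use */
 *	table[1].dst = dst_two;
 *	table[1].msg = m;
 *	table[1].flags = AMF_VALID | AMF_NOREPLY;	/* must not complete a sendrec() */
 *	senda(table, 2);				/* kernel sets AMF_DONE and result later */
 */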
/*===========================================================================*
* enqueue *
*===========================================================================*/
PUBLIC void enqueue(
register struct proc *rp /* this process is now runnable */
)
{
/* Add 'rp' to one of the queues of runnable processes. This function is
* responsible for inserting a process into one of the scheduling queues.
* The mechanism is implemented here. The actual scheduling policy is
* defined in sched() and pick_proc().
*/
int q = rp->p_priority; /* scheduling queue to use */
assert(proc_is_runnable(rp));
assert(q >= 0);
/* Now add the process to the queue. */
if (!rdy_head[q]) { /* add to empty queue */
rdy_head[q] = rdy_tail[q] = rp; /* create a new queue */
rp->p_nextready = NULL; /* mark new end */
}
else { /* add to tail of queue */
rdy_tail[q]->p_nextready = rp; /* chain tail of queue */
rdy_tail[q] = rp; /* set new queue tail */
rp->p_nextready = NULL; /* mark new end */
}
/*
 * When enqueueing a process with a higher priority (i.e., a lower queue
 * index) than the current one, the current process gets preempted. The
 * current process must be preemptible. Testing the priority also makes
 * sure that a process does not preempt itself.
*/
assert(proc_ptr && proc_ptr_ok(proc_ptr));
if ((proc_ptr->p_priority > rp->p_priority) &&
(priv(proc_ptr)->s_flags & PREEMPTIBLE))
RTS_SET(proc_ptr, RTS_PREEMPTED); /* calls dequeue() */
#if DEBUG_SANITYCHECKS
assert(runqueues_ok());
#endif
}
/*===========================================================================*
* enqueue_head *
*===========================================================================*/
/*
 * Put a process at the front of its run queue. This comes in handy when a
 * process is preempted and removed from its run queue, so that a currently
 * not-runnable process is not left on a run queue. We have to put this
 * process back at the front to be fair.
*/
PRIVATE void enqueue_head(struct proc *rp)
{
const int q = rp->p_priority; /* scheduling queue to use */
assert(proc_ptr_ok(rp));
assert(proc_is_runnable(rp));
/*
 * The process was still runnable, with quantum left, when it was dequeued.
 * A process with no time left should have been handled elsewhere, and
 * differently.
*/
assert(rp->p_ticks_left > 0);
assert(q >= 0);
/* Now add the process to the queue. */
if (!rdy_head[q]) { /* add to empty queue */
rdy_head[q] = rdy_tail[q] = rp; /* create a new queue */
rp->p_nextready = NULL; /* mark new end */
}
  else {					/* add to head of queue */
	rp->p_nextready = rdy_head[q];		/* chain head of queue */
  }
  rdy_head[q] = rp;				/* set new queue head */
#if DEBUG_SANITYCHECKS
assert(runqueues_ok());
#endif
}
/*===========================================================================*
* dequeue *
*===========================================================================*/
PUBLIC void dequeue(const struct proc *rp)
/* this process is no longer runnable */
{
/* A process must be removed from the scheduling queues, for example, because
* it has blocked. If the currently active process is removed, a new process
* is picked to run by calling pick_proc().
*/
register int q = rp->p_priority; /* queue to use */
register struct proc **xpp; /* iterate over queue */
register struct proc *prev_xp;
assert(proc_ptr_ok(rp));
assert(!proc_is_runnable(rp));
  /* Side-effect for kernel: check that the task's stack is still ok. */
assert (!iskernelp(rp) || *priv(rp)->s_stack_guard == STACK_GUARD);
/* Now make sure that the process is not in its ready queue. Remove the
* process if it is found. A process can be made unready even if it is not
* running by being sent a signal that kills it.
*/
prev_xp = NULL;
for (xpp = &rdy_head[q]; *xpp; xpp = &(*xpp)->p_nextready) {
if (*xpp == rp) { /* found process to remove */
*xpp = (*xpp)->p_nextready; /* replace with next chain */
if (rp == rdy_tail[q]) { /* queue tail removed */
rdy_tail[q] = prev_xp; /* set new tail */
}
break;
}
prev_xp = *xpp; /* save previous in chain */
}
#if DEBUG_SANITYCHECKS
assert(runqueues_ok());
#endif
}
/*===========================================================================*
* pick_proc *
*===========================================================================*/
PRIVATE struct proc * pick_proc(void)
{
/* Decide who to run now. A new process is selected and returned.
* When a billable process is selected, record it in 'bill_ptr', so that the
* clock task can tell who to bill for system time.
*/
register struct proc *rp; /* process to run */
int q; /* iterate over queues */
/* Check each of the scheduling queues for ready processes. The number of
* queues is defined in proc.h, and priorities are set in the task table.
* The lowest queue contains IDLE, which is always ready.
*/
for (q=0; q < NR_SCHED_QUEUES; q++) {
if(!(rp = rdy_head[q])) {
TRACE(VF_PICKPROC, printf("queue %d empty\n", q););
continue;
}
TRACE(VF_PICKPROC, printf("found %s / %d on queue %d\n",
rp->p_name, rp->p_endpoint, q););
assert(proc_is_runnable(rp));
if (priv(rp)->s_flags & BILLABLE)
bill_ptr = rp; /* bill for system time */
return rp;
}
return NULL;
}
/*===========================================================================*
* endpoint_lookup *
*===========================================================================*/
PUBLIC struct proc *endpoint_lookup(endpoint_t e)
{
int n;
if(!isokendpt(e, &n)) return NULL;
return proc_addr(n);
}
/*===========================================================================*
* isokendpt_f *
*===========================================================================*/
#if DEBUG_ENABLE_IPC_WARNINGS
PUBLIC int isokendpt_f(file, line, e, p, fatalflag)
const char *file;
int line;
#else
PUBLIC int isokendpt_f(e, p, fatalflag)
#endif
endpoint_t e;
int *p;
const int fatalflag;
{
int ok = 0;
/* Convert an endpoint number into a process number.
* Return nonzero if the process is alive with the corresponding
* generation number, zero otherwise.
*
* This function is called with file and line number by the
* isokendpt_d macro if DEBUG_ENABLE_IPC_WARNINGS is defined,
 * otherwise without. This allows us to print where the
* conversion was attempted, making the errors verbose without
* adding code for that at every call.
*
* If fatalflag is nonzero, we must panic if the conversion doesn't
* succeed.
*/
*p = _ENDPOINT_P(e);
if(!isokprocn(*p)) {
#if DEBUG_ENABLE_IPC_WARNINGS
printf("kernel:%s:%d: bad endpoint %d: proc %d out of range\n",
file, line, e, *p);
#endif
} else if(isemptyn(*p)) {
#if 0
printf("kernel:%s:%d: bad endpoint %d: proc %d empty\n", file, line, e, *p);
#endif
} else if(proc_addr(*p)->p_endpoint != e) {
#if DEBUG_ENABLE_IPC_WARNINGS
printf("kernel:%s:%d: bad endpoint %d: proc %d has ept %d (generation %d vs. %d)\n", file, line,
e, *p, proc_addr(*p)->p_endpoint,
_ENDPOINT_G(e), _ENDPOINT_G(proc_addr(*p)->p_endpoint));
#endif
} else ok = 1;
if(!ok && fatalflag) {
panic("invalid endpoint: %d", e);
}
return ok;
}
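/* Illustrative sketch, not the actual proc.h definitions: the isokendpt()
 * wrapper used by endpoint_lookup() above, and a panicking variant, would
 * differ only in fatalflag. Under DEBUG_ENABLE_IPC_WARNINGS they might look
 * roughly like this (exact names and argument order in the real headers may
 * differ, and the non-debug build would omit the file/line arguments):
 *
 *	#define isokendpt(e, p)	isokendpt_f(__FILE__, __LINE__, (e), (p), 0)
 *	#define okendpt(e, p)	isokendpt_f(__FILE__, __LINE__, (e), (p), 1)
 *
 * so the fatal variant panics on a stale or out-of-range endpoint, while
 * isokendpt() just reports failure and lets the caller decide.
 */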
PRIVATE void notify_scheduler(struct proc *p)
{
/* dequeue the process */
RTS_SET(p, RTS_NO_QUANTUM);
/*
* Notify the process's scheduler that it has run out of
* quantum. This is done by sending a message to the scheduler
* on the process's behalf
*/
if (p->p_scheduler == p) {
/*
* If a scheduler is scheduling itself, and runs out of
* quantum, we don't send a message. The RTS_NO_QUANTUM
* flag will be removed by schedcheck in proc.c.
*/
}
else if (p->p_scheduler != NULL) {
message m_no_quantum;
int err;
m_no_quantum.m_source = p->p_endpoint;
m_no_quantum.m_type = SCHEDULING_NO_QUANTUM;
if ((err = mini_send(p, p->p_scheduler->p_endpoint,
&m_no_quantum, FROM_KERNEL))) {
panic("WARNING: Scheduling: mini_send returned %d\n", err);
}
}
}
PUBLIC void check_ticks_left(struct proc * p)
{
if (p->p_ticks_left <= 0) {
p->p_ticks_left = 0;
if (priv(p)->s_flags & PREEMPTIBLE) {
/* this dequeues the process */
notify_scheduler(p);
}
else {
/*
* non-preemptible processes only need their quantum to
		 * be renewed. In fact, they bypass scheduling.
*/
p->p_ticks_left = p->p_quantum_size;
}
}
}
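/* Illustrative sketch, not part of the original file: what a user-space
 * scheduler might do with the SCHEDULING_NO_QUANTUM message that is sent on
 * the process's behalf above. The receive() loop is ordinary MINIX IPC; the
 * sys_schedule() call, its argument order, and the new_priority/new_quantum
 * values are assumptions standing in for whatever the scheduling policy picks.
 *
 *	message m;
 *	endpoint_t who;
 *
 *	while (receive(ANY, &m) == OK) {
 *		if (m.m_type != SCHEDULING_NO_QUANTUM) continue;
 *		who = m.m_source;		/* the out-of-quantum process */
 *		/* Apply the policy: pick a (possibly lower) priority and a
 *		 * fresh quantum, then make the process runnable again.
 *		 */
 *		sys_schedule(who, new_priority, new_quantum);
 *	}
 */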