minix/servers/pm/schedule.c

#include "pm.h"
#include <assert.h>
#include <minix/callnr.h>
#include <minix/com.h>
#include <minix/config.h>
#include <minix/sched.h>
#include <minix/sysinfo.h>
#include <minix/type.h>
#include <machine/archtypes.h>
#include <lib.h>
#include "mproc.h"
#include <machine/archtypes.h>
#include <minix/timers.h>
#include "kernel/proc.h"
/*===========================================================================*
*				sched_init				     *
*===========================================================================*/
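/* Take over scheduling of all unprivileged processes that are already
 * running when PM starts. Only init is expected to match; it is handed to
 * SCHED with the default user queue and quantum. */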
void sched_init(void)
{
	struct mproc *trmp;
	endpoint_t parent_e;
	int proc_nr, s;
	for (proc_nr=0, trmp=mproc; proc_nr < NR_PROCS; proc_nr++, trmp++) {
		/* Don't take over system processes. When the system starts,
		 * init is blocked on RTS_NO_QUANTUM until PM assigns it a
		 * scheduler, from which all other user processes then
		 * inherit theirs. Given that all other user processes are
		 * forked from init and system processes are managed by RS,
		 * there should be no other process that needs to be assigned
		 * a scheduler here.
		 */
		if (trmp->mp_flags & IN_USE && !(trmp->mp_flags & PRIV_PROC)) {
			assert(_ENDPOINT_P(trmp->mp_endpoint) == INIT_PROC_NR);
			parent_e = mproc[trmp->mp_parent].mp_endpoint;
			assert(parent_e == trmp->mp_endpoint);
			s = sched_start(SCHED_PROC_NR,	/* scheduler_e */
				trmp->mp_endpoint,	/* schedulee_e */
				parent_e,		/* parent_e */
				USER_Q,			/* maxprio */
				USER_QUANTUM,		/* quantum */
				-1,			/* don't change cpu */
				&trmp->mp_scheduler);	/* *newsched_e */
			if (s != OK) {
				printf("PM: SCHED denied taking over scheduling of %s: %d\n",
					trmp->mp_name, s);
			}
		}
	}
}
/*===========================================================================*
* sched_start_user *
*===========================================================================*/
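/* Ask the scheduler with endpoint 'ep' to take over scheduling of the user
 * process described by 'rmp'. The maximum priority queue is derived from
 * the process's nice value, while the quantum is inherited from the parent
 * process, or from init when the parent is a system process. On success,
 * rmp->mp_scheduler holds the endpoint of the effective scheduler. */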
int sched_start_user(endpoint_t ep, struct mproc *rmp)
{
	unsigned maxprio;
	endpoint_t inherit_from;
	int rv;
	/* convert nice to priority */
	if ((rv = nice_to_priority(rmp->mp_nice, &maxprio)) != OK) {
		return rv;
	}

	/* The scheduler must know the parent, which is not the case for a
	 * child of a system process created by a regular fork; in that case
	 * the scheduler should inherit settings from init rather than from
	 * the real parent.
	 */
	if (mproc[rmp->mp_parent].mp_flags & PRIV_PROC) {
		assert(mproc[rmp->mp_parent].mp_scheduler == NONE);
		inherit_from = INIT_PROC_NR;
	} else {
		inherit_from = mproc[rmp->mp_parent].mp_endpoint;
	}

	/* inherit quantum */
	return sched_inherit(ep,		/* scheduler_e */
		rmp->mp_endpoint,		/* schedulee_e */
		inherit_from,			/* parent_e */
		maxprio,			/* maxprio */
		&rmp->mp_scheduler);		/* *newsched_e */
}
/*===========================================================================*
* sched_nice *
*===========================================================================*/
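/* Forward a nice value change for 'rmp' to its userspace scheduler as a
 * SCHEDULING_SET_NICE request, after mapping the nice value to a maximum
 * priority queue. Processes still scheduled by the kernel cannot be
 * adjusted this way. */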
int sched_nice(struct mproc *rmp, int nice)
{
	int rv;
	message m;
	unsigned maxprio;
/* If the kernel is the scheduler, we don't allow messing with the
* priority. If you want to control process priority, assign the process
* to a user-space scheduler */
if (rmp->mp_scheduler == KERNEL || rmp->mp_scheduler == NONE)
return (EINVAL);
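/* Map the nice level onto the highest priority queue this process may run in. */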
if ((rv = nice_to_priority(nice, &maxprio)) != OK) {
return rv;
}
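/* Forward the request to the process' effective scheduler. */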
m.m_pm_sched_scheduling_set_nice.endpoint = rmp->mp_endpoint;
m.m_pm_sched_scheduling_set_nice.maxprio = maxprio;
if ((rv = _taskcall(rmp->mp_scheduler, SCHEDULING_SET_NICE, &m))) {
return rv;
}
return (OK);
}
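
The commit note above leaves the nice-to-queue mapping up to the scheduler. The following is a minimal, hypothetical sketch of one possible linear mapping, included only as an illustration: example_nice_to_priority(), EXAMPLE_MAX_USER_Q and EXAMPLE_MIN_USER_Q are names and queue numbers invented for this sketch, not the values defined in kernel/proc.h, and PRIO_MIN/PRIO_MAX are assumed to come from <sys/resource.h>.

/*
 * Hypothetical sketch: map a POSIX nice value onto a user scheduling queue.
 * The queue numbers are illustrative assumptions, not MINIX's actual ones.
 */
#include <errno.h>
#include <sys/resource.h>		/* PRIO_MIN, PRIO_MAX (assumed available) */

#define EXAMPLE_MAX_USER_Q	 0	/* highest user queue (assumed) */
#define EXAMPLE_MIN_USER_Q	14	/* lowest user queue (assumed) */

static int example_nice_to_priority(int nice, unsigned *new_q)
{
	if (nice < PRIO_MIN || nice > PRIO_MAX)
		return EINVAL;

	/* Scale the nice range linearly onto the user queue range. */
	*new_q = EXAMPLE_MAX_USER_Q + (nice - PRIO_MIN) *
	    (EXAMPLE_MIN_USER_Q - EXAMPLE_MAX_USER_Q + 1) /
	    (PRIO_MAX - PRIO_MIN + 1);

	/* Clamp against rounding at the upper edge. */
	if (*new_q > EXAMPLE_MIN_USER_Q)
		*new_q = EXAMPLE_MIN_USER_Q;

	return 0;
}

With the typical PRIO_MIN = -20 and PRIO_MAX = 20, this sketch places nice 0 in queue 7, the middle of the assumed 0..14 user range. Whatever mapping a real scheduler uses, the round trip priority->nice->priority for nice=0 should land back on the same queue, which is the property the USER_Q fix described above restores.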