minix/servers/vfs/dmap.c

/* This file contains the table with device <-> driver mappings. It also
 * contains some routines to dynamically add and/or remove device drivers
* or change mappings.
*/
#include "fs.h"
#include "fproc.h"
#include <string.h>
#include <stdlib.h>
#include <ctype.h>
#include <unistd.h>
#include <minix/com.h>
#include <minix/ds.h>
#include "param.h"
/* Some devices may or may not be there in the next table. */
#define DT(enable, opcl, io, driver, flags, label) \
{ (enable?(opcl):no_dev), (enable?(io):0), \
(enable?(driver):0), (flags), label, FALSE },
#define NC(x) (NR_CTRLRS >= (x))
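
/* Illustrative expansion (not part of the original source, shown only to
 * clarify the DT() macro above): an enabled entry such as
 *
 *	DT(1, gen_opcl, gen_io, MEM_PROC_NR, 0, "memory")
 *
 * expands to the initializer
 *
 *	{ gen_opcl, gen_io, MEM_PROC_NR, 0, "memory", FALSE },
 *
 * i.e. open/close routine, I/O routine, driver number, flags, label and the
 * async-driver field (always FALSE here; see build_dmap() below). A disabled
 * entry (enable == 0) falls back to no_dev with a zero I/O routine and
 * driver number.
 */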
/* The order of the entries here determines the mapping between major device
* numbers and tasks. The first entry (major device 0) is not used. The
* next entry is major device 1, etc. Character and block devices can be
* intermixed at random. The ordering determines the device numbers in /dev/.
 * Note that FS knows the device number of /dev/ram to load the RAM disk.
* Also note that the major device numbers used in /dev/ are NOT the same as
* the process numbers of the device drivers.
*/
/*
Driver enabled Open/Cls I/O Driver # Flags Device File
-------------- -------- ------ ----------- ----- ------ ----
*/
struct dmap dmap[NR_DEVICES]; /* actual map */
PRIVATE struct dmap init_dmap[] = {
DT(1, no_dev, 0, 0, 0, "") /* 0 = not used */
DT(1, gen_opcl, gen_io, MEM_PROC_NR, 0, "memory") /* 1 = /dev/mem */
DT(0, no_dev, 0, 0, DMAP_MUTABLE, "") /* 2 = /dev/fd0 */
DT(0, no_dev, 0, 0, DMAP_MUTABLE, "") /* 3 = /dev/c0 */
DT(1, tty_opcl, gen_io, TTY_PROC_NR, 0, "") /* 4 = /dev/tty00 */
DT(1, ctty_opcl,ctty_io, TTY_PROC_NR, 0, "") /* 5 = /dev/tty */
DT(0, no_dev, 0, NONE, DMAP_MUTABLE, "") /* 6 = /dev/lp */
#if (MACHINE == IBM_PC)
DT(1, no_dev, 0, 0, DMAP_MUTABLE, "") /* 7 = /dev/ip */
DT(0, no_dev, 0, NONE, DMAP_MUTABLE, "") /* 8 = /dev/c1 */
DT(0, 0, 0, 0, DMAP_MUTABLE, "") /* 9 = not used */
DT(0, no_dev, 0, 0, DMAP_MUTABLE, "") /*10 = /dev/c2 */
DT(0, no_dev, 0, 0, DMAP_MUTABLE, "") /*11 = /dev/filter*/
DT(0, no_dev, 0, NONE, DMAP_MUTABLE, "") /*12 = /dev/c3 */
DT(0, no_dev, 0, NONE, DMAP_MUTABLE, "") /*13 = /dev/audio */
DT(0, 0, 0, 0, DMAP_MUTABLE, "") /*14 = not used */
DT(1, gen_opcl, gen_io, LOG_PROC_NR, 0, "") /*15 = /dev/klog */
DT(0, no_dev, 0, NONE, DMAP_MUTABLE, "") /*16 = /dev/random*/
DT(0, no_dev, 0, 0, DMAP_MUTABLE, "") /*17 = /dev/hello */
DT(0, 0, 0, 0, DMAP_MUTABLE, "") /*18 = not used */
#endif /* IBM_PC */
};
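
/* Illustrative sketch (not in the original file): the table above is indexed
 * directly by major device number, so opening a device with major number
 * 'maj' is conceptually dispatched as
 *
 *	struct dmap *dp = &dmap[maj];
 *	r = (*dp->dmap_opcl)(DEV_OPEN, dev, proc_e, flags);
 *
 * The real call sites live elsewhere in VFS (e.g. device.c); the argument
 * list shown here is an assumption for illustration only.
 */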
/*===========================================================================*
* do_mapdriver *
*===========================================================================*/
PUBLIC int do_mapdriver()
{
int r, force, major, proc_nr_n;
endpoint_t endpoint;
vir_bytes label_vir;
size_t label_len;
char label[LABEL_MAX];
if (!super_user)
{
printf("FS: unauthorized call of do_mapdriver by proc %d\n",
who_e);
return(EPERM); /* only su (should be only RS or some drivers)
* may call do_mapdriver.
*/
}
/* Get the label */
label_vir= (vir_bytes)m_in.md_label;
label_len= m_in.md_label_len;
if (label_len+1 > sizeof(label))
{
printf("vfs:do_mapdriver: label too long\n");
return EINVAL;
}
r= sys_vircopy(who_e, D, label_vir, SELF, D, (vir_bytes)label,
label_len);
if (r != OK)
{
printf("vfs:do_mapdriver: sys_vircopy failed: %d\n", r);
return EINVAL;
}
label[label_len]= '\0';
r= ds_retrieve_label_endpt(label, &endpoint);
if (r != OK)
{
printf("vfs:do_mapdriver: ds doesn't know '%s'\n", label);
return EINVAL;
}
if (isokendpt(endpoint, &proc_nr_n) != OK)
{
printf("vfs:do_mapdriver: bad endpoint %d\n", endpoint);
return(EINVAL);
}
/* Try to update device mapping. */
major= m_in.md_major;
force= m_in.md_force;
r= map_driver(label, major, endpoint, m_in.md_style, force);
return(r);
}
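
/* Illustrative caller sketch (hypothetical, not part of this file): a request
 * handled by do_mapdriver() carries the driver label and mapping parameters
 * in the message fields read above. A caller such as RS would fill them in
 * roughly as
 *
 *	m.md_label     = (char *) label;
 *	m.md_label_len = strlen(label);
 *	m.md_major     = major;
 *	m.md_style     = style;
 *	m.md_force     = force;
 *
 * Only the field names are taken from the code above; the exact library
 * wrapper used to build and send this message is an assumption.
 */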
/*===========================================================================*
* map_driver *
*===========================================================================*/
PUBLIC int map_driver(label, major, proc_nr_e, style, force)
char *label; /* name of the driver */
int major; /* major number of the device */
endpoint_t proc_nr_e; /* process number of the driver */
int style; /* style of the device */
int force;
{
/* Set a new device driver mapping in the dmap table. Given that correct
* arguments are given, this only works if the entry is mutable and the
* current driver is not busy. If the proc_nr is set to NONE, we're supposed
* to unmap it.
*
* Normal error codes are returned so that this function can be used from
* a system call that tries to dynamically install a new driver.
*/
int proc_nr_n;
size_t len;
struct dmap *dp;
/* Get pointer to device entry in the dmap table. */
if (major < 0 || major >= NR_DEVICES) return(ENODEV);
dp = &dmap[major];
/* Check if we're supposed to unmap it. If so, do it even
* if busy or unmutable, as unmap is called when driver has
* exited.
*/
if(proc_nr_e == NONE) {
dp->dmap_opcl = no_dev;
dp->dmap_io = no_dev_io;
dp->dmap_driver = NONE;
dp->dmap_flags = DMAP_MUTABLE; /* When gone, not busy or reserved. */
return(OK);
}
/* See if updating the entry is allowed. */
if (! (dp->dmap_flags & DMAP_MUTABLE)) return(EPERM);
if (!force)
{
/* Check process number of new driver. */
if (isokendpt(proc_nr_e, &proc_nr_n) != OK)
return(EINVAL);
}
if (label != NULL) {
len= strlen(label);
if (len+1 > sizeof(dp->dmap_label))
panic("map_driver: label too long: %d", len);
strcpy(dp->dmap_label, label);
}
/* Try to update the entry. */
switch (style) {
case STYLE_DEV: dp->dmap_opcl = gen_opcl; break;
case STYLE_TTY: dp->dmap_opcl = tty_opcl; break;
case STYLE_CLONE: dp->dmap_opcl = clone_opcl; break;
default: return(EINVAL);
}
dp->dmap_io = gen_io;
dp->dmap_driver = proc_nr_e;
if (dp->dmap_async_driver)
dp->dmap_io= asyn_io;
return(OK);
}
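
/* Illustrative usage sketch (hypothetical values, not in the original file):
 * with the parameters documented above, installing a driver on major 15 and
 * later removing it again would look roughly like
 *
 *	r = map_driver("log", 15, log_endpoint, STYLE_DEV, 0);
 *	...
 *	r = map_driver(NULL, 15, NONE, 0, 0);
 *
 * The second call unmaps the entry and, as noted above, succeeds even when
 * the entry is busy or immutable. Here log_endpoint stands for the driver
 * endpoint as retrieved from DS.
 */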
/*===========================================================================*
* dmap_unmap_by_endpt *
*===========================================================================*/
PUBLIC void dmap_unmap_by_endpt(int proc_nr_e)
{
int i, r;
for (i=0; i<NR_DEVICES; i++)
if(dmap[i].dmap_driver && dmap[i].dmap_driver == proc_nr_e)
if((r=map_driver(NULL, i, NONE, 0, 0)) != OK)
printf("FS: unmap of p %d / d %d failed: %d\n", proc_nr_e,i,r);
return;
}
/*===========================================================================*
* build_dmap *
*===========================================================================*/
PUBLIC void build_dmap()
{
/* Initialize the table with all device <-> driver mappings. Then, map
* the boot driver to a controller and update the dmap table to that
* selection. The boot driver and the controller it handles are set at
* the boot monitor.
*/
int i;
struct dmap *dp;
/* Build table with device <-> driver mappings. */
for (i=0; i<NR_DEVICES; i++) {
dp = &dmap[i];
if (i < sizeof(init_dmap)/sizeof(struct dmap) &&
init_dmap[i].dmap_opcl != no_dev) { /* a preset driver */
dp->dmap_opcl = init_dmap[i].dmap_opcl;
dp->dmap_io = init_dmap[i].dmap_io;
dp->dmap_driver = init_dmap[i].dmap_driver;
dp->dmap_flags = init_dmap[i].dmap_flags;
strcpy(dp->dmap_label, init_dmap[i].dmap_label);
dp->dmap_async_driver= FALSE;
} else { /* no default */
dp->dmap_opcl = no_dev;
dp->dmap_io = no_dev_io;
dp->dmap_driver = NONE;
dp->dmap_flags = DMAP_MUTABLE;
}
}
dmap[13].dmap_async_driver= TRUE; /* Audio */
dmap[15].dmap_async_driver= TRUE; /* Log */
dmap[15].dmap_io= asyn_io;
dmap[16].dmap_async_driver= TRUE; /* Random */
}
/*===========================================================================*
* dmap_driver_match *
*===========================================================================*/
PUBLIC int dmap_driver_match(endpoint_t proc, int major)
{
if (major < 0 || major >= NR_DEVICES) return(0);
if(dmap[major].dmap_driver != NONE && dmap[major].dmap_driver == proc)
return 1;
return 0;
}
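
/* Illustrative usage sketch (hypothetical call site, not in this file): the
 * predicate above lets device I/O code verify that a reply really comes from
 * the driver currently mapped for a given major device, e.g.
 *
 *	if (!dmap_driver_match(m.m_source, major))
 *		return;
 *
 * i.e. a stale or spurious reply from a previously mapped driver is simply
 * ignored. The real callers live elsewhere in VFS; only the function's
 * behaviour is taken from the code above.
 */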
/*===========================================================================*
* dmap_endpt_up *
*===========================================================================*/
PUBLIC void dmap_endpt_up(int proc_e)
{
int i;
for (i=0; i<NR_DEVICES; i++) {
if(dmap[i].dmap_driver != NONE
&& dmap[i].dmap_driver == proc_e) {
dev_up(i);
}
}
return;
}