No more intel/minix segments.

This commit removes all traces of Minix segments (the text/data/stack
memory map abstraction in the kernel) and the significance of Intel
segments (hardware segments like CS and DS that add offsets to all
addressing before page table translation). This ultimately simplifies
the memory layout and addressing, and makes the same layout possible
on non-Intel architectures.

There are only two types of addresses in the world now: virtual
and physical; even the kernel and processes have the same virtual
address space. Kernel and user processes can be distinguished at a
glance as processes won't use 0xF0000000 and above.
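That split can be expressed as a one-line predicate; a minimal sketch, where the macro and function names are illustrative rather than the kernel's actual identifiers:

```c
#include <stdint.h>

/* 0xF0000000 is the kernel boundary mentioned above; the macro name
 * itself is illustrative. */
#define KERNEL_VIRT_BASE 0xF0000000u

/* Any virtual address below the kernel range belongs to a user process. */
static inline int is_user_address(uint32_t vaddr)
{
    return vaddr < KERNEL_VIRT_BASE;
}
```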

No static pre-allocated memory sizes exist any more.

Changes to booting:
        . pre_init.c leaves the kernel and modules in physical memory
          exactly as the bootloader left them.
        . The kernel starts running using physical addressing, loaded
          by the bootloader at a fixed low location given in its linker
          script. All code and data in this phase are linked to this
          fixed low location.
        . It makes a bootstrap pagetable to map itself to a
          fixed high location (also in linker script) and jumps to
          the high address. All code and data then use this high addressing.
        . All code/data symbols linked at the low addresses are prefixed
          by an objcopy step with __k_unpaged_*, so that this code cannot
          reference high-linked symbols (which aren't valid yet) or vice
          versa (symbols that are no longer valid).
        . The two addressing modes are separated in the linker script by
          collecting the unpaged_*.o objects and linking them with low
          addresses, and linking the rest high. Some objects are linked
          twice, once low and once high.
        . The bootstrap phase passes a lot of information (e.g. free memory
          list, physical location of the modules, etc.) using the kinfo
          struct.
        . After this bootstrap the low-linked part is freed.
        . The kernel maps VM into the bootstrap page table so that VM can
          begin executing. Its first job is to make page tables for all other
          boot processes. So VM runs before RS, and RS gets a fully dynamic,
          VM-managed address space. VM gets its privilege info from RS as
          usual, but that happens after RS starts running.
        . Both the kernel loading VM and VM organizing boot processes happen
          using the libexec logic. This removes the last reason for VM to
          still know much about exec(), and vm/exec.c is gone.
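The objcopy prefixing step above can be reproduced in isolation; a sketch with a throwaway object file (the real build applies it to head.o, pre_init.o and friends from the arch Makefile):

```shell
# Build a trivial object and give all its symbols the __k_unpaged_ prefix,
# the same mechanism the kernel build uses for its low-linked objects.
printf 'int bootstrap_fn(void) { return 1; }\n' > demo.c
cc -c demo.c -o demo.o
objcopy --prefix-symbols=__k_unpaged_ demo.o unpaged_demo.o
# The symbol now exists only in the unpaged namespace:
nm unpaged_demo.o | grep __k_unpaged_bootstrap_fn
```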

Further Implementation:
        . All segments are based at 0 and have a 4 GB limit.
        . The kernel is mapped in at the top of the virtual address
          space so as not to constrain the user processes.
        . Processes do not use segments from the LDT at all; there are
          no segments in the LDT any more, so no LLDT is needed.
        . The Minix segments T/D/S are gone, and so none of the
          user-space or in-kernel copy functions use them. The copy
          functions take a process endpoint of NONE to mean a physical
          address; any other endpoint means a virtual address.
        . The umap call now only translates a virtual address to a
          physical address.
        . Segments-related calls like newmap and alloc_segments are gone.
        . All segments-related translation in VM is gone (vir2map etc).
        . Initialization in VM is simpler as no moving around is necessary.
        . VM and all other boot processes can be linked wherever they wish
          and will be mapped in at the right location by the kernel and VM
          respectively.
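The NONE-endpoint convention can be sketched as follows, mirroring the reworked struct vir_addr from the <minix/type.h> diff in this commit (the helper function and NONE's value here are illustrative):

```c
typedef int endpoint_t;
typedef unsigned long vir_bytes;

#define NONE ((endpoint_t) -1)   /* illustrative sentinel value */

/* With the Minix T/D/S segments gone, the endpoint alone decides how
 * the offset is interpreted. */
struct vir_addr {
    endpoint_t proc_nr_e;  /* NONE for phys, otherwise process endpoint */
    vir_bytes offset;
};

/* A copy function only needs this test to pick physical vs. virtual copy. */
static int addr_is_physical(const struct vir_addr *a)
{
    return a->proc_nr_e == NONE;
}
```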

Other changes:
        . The multiboot code is less special: it does not use mb_print
          for its diagnostics any more but uses printf() as normal, saving
          the output into the diagnostics buffer, only printing to the
          screen using the direct print functions if a panic() occurs.
        . The multiboot code uses the flexible 'free memory map list'
          style to receive the list of free memory if available.
        . The kernel determines the memory layout of the processes to
          a degree: it tells VM where the kernel starts and ends and
          where the kernel wants the top of the process to be. VM then
          uses this entire range, i.e. the stack is right at the top,
          and mmap()ped bits of memory are placed below that downwards,
          and the break grows upwards.
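A sketch of the layout VM derives from those kernel hints (all names here are illustrative): the stack sits at the very top of the range, mmap() regions are handed out downward below it, and the break grows upward:

```c
typedef unsigned long vir_bytes;

/* Illustrative per-process layout state derived from the kernel's
 * "top of process" hint. */
struct proc_layout {
    vir_bytes stack_top;   /* the kernel's hint; stack is right below it */
    vir_bytes mmap_next;   /* next mmap() region ends here, grows down */
    vir_bytes brk;         /* grows up toward mmap_next */
};

static void layout_init(struct proc_layout *l, vir_bytes user_end,
    vir_bytes stack_bytes, vir_bytes program_end)
{
    l->stack_top = user_end;
    l->mmap_next = user_end - stack_bytes;
    l->brk = program_end;
}

/* Hand out an mmap() region of len bytes, moving downward. */
static vir_bytes layout_mmap(struct proc_layout *l, vir_bytes len)
{
    l->mmap_next -= len;
    return l->mmap_next;
}
```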

Other Consequences:
        . Every process gets its own page table as address spaces
          can't be separated any more by segments.
        . As all segments are 0-based, there is no distinction between
          virtual and linear addresses, nor between userspace and
          kernel addresses.
        . Less work is done when context switching, leading to a net
          performance increase. (8% faster on my machine for 'make servers'.)
        . The layout and configuration of the GDT make sysenter and
          syscall possible.
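The GDT constraint behind that last point can be sketched as follows (slot numbers are illustrative, not MINIX's actual GDT indices): on x86, sysenter/sysexit derive SS and the user segments from fixed offsets off the selector in the IA32_SYSENTER_CS MSR, and syscall/sysret do the same relative to the STAR MSR, so the code and stack descriptors must sit in adjacent GDT slots.

```c
/* Illustrative GDT slot assignments satisfying the sysenter/syscall
 * adjacency rules: kernel SS directly after kernel CS, user CS and
 * user SS following in order. */
#define KERN_CS_INDEX 1
#define KERN_SS_INDEX 2   /* sysenter loads SS = IA32_SYSENTER_CS + 8 */
#define USER_CS_INDEX 3   /* sysexit loads CS = IA32_SYSENTER_CS + 16 */
#define USER_SS_INDEX 4   /* ... and SS = IA32_SYSENTER_CS + 24 */

#define SEL(idx) ((idx) << 3)   /* GDT index -> segment selector (RPL 0) */
```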
Ben Gras 2012-05-07 16:03:35 +02:00
parent cfe1ed4df4
commit 50e2064049
139 changed files with 2468 additions and 4380 deletions


@ -17,7 +17,7 @@ SUBDIR= add_route arp ash at \
intr ipcrm ipcs irdpd isoread join kill last \
less loadkeys loadramdisk logger look lp \
lpd ls lspci mail MAKEDEV \
mdb mesg mined mkfifo mknod \
mesg mined mkfifo mknod \
mkproto mount mt netconf nice acknm nohup \
nonamed od paste patch pax \
ping postinstall poweroff pr prep printf printroot \


@ -1448,38 +1448,16 @@ static void complete_bridges()
*===========================================================================*/
static void complete_bars(void)
{
int i, j, r, bar_nr, reg;
int i, j, bar_nr, reg;
u32_t memgap_low, memgap_high, iogap_low, iogap_high, io_high,
base, size, v32, diff1, diff2;
char *cp, *next;
char memstr[256];
kinfo_t kinfo;
if(OK != sys_getkinfo(&kinfo))
panic("can't get kinfo");
r= env_get_param("memory", memstr, sizeof(memstr));
if (r != OK)
panic("env_get_param failed: %d", r);
/* Set memgap_low to just above physical memory */
memgap_low= 0;
cp= memstr;
while (*cp != '\0')
{
base= strtoul(cp, &next, 16);
if (!(*next) || next == cp || *next != ':')
goto bad_mem_string;
cp= next+1;
size= strtoul(cp, &next, 16);
if (next == cp || (*next != ',' && *next != '\0'))
if (!*next)
goto bad_mem_string;
if (base+size > memgap_low)
memgap_low= base+size;
if (*next)
cp= next+1;
else
break;
}
memgap_low= kinfo.mem_high_phys;
memgap_high= 0xfe000000; /* Leave space for the CPU (APIC) */
if (debug)
@ -1661,10 +1639,6 @@ static void complete_bars(void)
}
}
return;
bad_mem_string:
printf("PCI: bad memory environment string '%s'\n", memstr);
panic(NULL);
}
/*===========================================================================*


@ -971,7 +971,6 @@ tty_t *tp;
if(font_memory == MAP_FAILED)
panic("Console couldn't map font memory");
vid_size >>= 1; /* word count */
vid_mask = vid_size - 1;
@ -983,6 +982,7 @@ tty_t *tp;
if (nr_cons > NR_CONS) nr_cons = NR_CONS;
if (nr_cons > 1) wrap = 0;
if (nr_cons < 1) panic("no consoles");
page_size = vid_size / nr_cons;
}


@ -14,24 +14,19 @@ struct segdesc_s { /* segment descriptor for protected mode */
u8_t access; /* |P|DL|1|X|E|R|A| */
u8_t granularity; /* |G|X|0|A|LIMT| */
u8_t base_high;
};
} __attribute__((packed));
#define LDT_SIZE 2 /* CS and DS */
/* Fixed local descriptors. */
#define CS_LDT_INDEX 0 /* process CS */
#define DS_LDT_INDEX 1 /* process DS=ES=FS=GS=SS */
struct desctableptr_s {
u16_t limit;
u32_t base;
} __attribute__((packed));
typedef struct segframe {
reg_t p_ldt_sel; /* selector in gdt with ldt base and limit */
reg_t p_cr3; /* page table root */
u32_t *p_cr3_v;
char *fpu_state;
struct segdesc_s p_ldt[LDT_SIZE]; /* CS, DS and remote */
} segframe_t;
#define INMEMORY(p) (!p->p_seg.p_cr3 || get_cpulocal_var(ptproc) == p)
typedef u32_t atomic_t; /* access to an aligned 32bit value is atomic on i386 */
#endif /* #ifndef _I386_TYPES_H */


@ -14,6 +14,9 @@
/* Magic numbers for interrupt controller. */
#define END_OF_INT 0x20 /* code used to re-enable after an interrupt */
#define IRQ0_VECTOR 0x50 /* nice vectors to relocate IRQ0-7 to */
#define IRQ8_VECTOR 0x70 /* no need to move IRQ8-15 */
/* Interrupt vectors defined/reserved by processor. */
#define DIVIDE_VECTOR 0 /* divide error */
#define DEBUG_VECTOR 1 /* single step (trace) */
@ -25,15 +28,6 @@
#define KERN_CALL_VECTOR 32 /* system calls are made with int SYSVEC */
#define IPC_VECTOR 33 /* interrupt vector for ipc */
/* Suitable irq bases for hardware interrupts. Reprogram the 8259(s) from
* the PC BIOS defaults since the BIOS doesn't respect all the processor's
* reserved vectors (0 to 31).
*/
#define BIOS_IRQ0_VEC 0x08 /* base of IRQ0-7 vectors used by BIOS */
#define BIOS_IRQ8_VEC 0x70 /* base of IRQ8-15 vectors used by BIOS */
#define IRQ0_VECTOR 0x50 /* nice vectors to relocate IRQ0-7 to */
#define IRQ8_VECTOR 0x70 /* no need to move IRQ8-15 */
/* Hardware interrupt numbers. */
#ifndef USE_APIC
#define NR_IRQ_VECTORS 16
@ -55,11 +49,8 @@
#define AT_WINI_0_IRQ 14 /* at winchester controller 0 */
#define AT_WINI_1_IRQ 15 /* at winchester controller 1 */
/* Interrupt number to hardware vector. */
#define BIOS_VECTOR(irq) \
(((irq) < 8 ? BIOS_IRQ0_VEC : BIOS_IRQ8_VEC) + ((irq) & 0x07))
#define VECTOR(irq) \
(((irq) < 8 ? IRQ0_VECTOR : IRQ8_VECTOR) + ((irq) & 0x07))
#define VECTOR(irq) \
(((irq) < 8 ? IRQ0_VECTOR : IRQ8_VECTOR) + ((irq) & 0x07))
#endif /* (CHIP == INTEL) */


@ -14,8 +14,6 @@
#define MULTIBOOT_AOUT_KLUDGE 0x00010000
#define MULTIBOOT_FLAGS (MULTIBOOT_MEMORY_INFO | MULTIBOOT_PAGE_ALIGN)
/* consts used for Multiboot pre-init */
#define MULTIBOOT_VIDEO_MODE_EGA 1
@ -28,13 +26,18 @@
#define MULTIBOOT_CONSOLE_LINES 25
#define MULTIBOOT_CONSOLE_COLS 80
#define MULTIBOOT_VIDEO_BUFFER_BYTES \
(MULTIBOOT_CONSOLE_LINES*MULTIBOOT_CONSOLE_COLS*2)
#define MULTIBOOT_STACK_SIZE 4096
#define MULTIBOOT_PARAM_BUF_SIZE 1024
#define MULTIBOOT_MAX_MODS 20
/* Flags to be set in the flags member of the multiboot info structure. */
#define MULTIBOOT_INFO_MEMORY 0x00000001
#define MULTIBOOT_INFO_MEM_MAP 0x00000040
/* Is there a boot device set? */
#define MULTIBOOT_INFO_BOOTDEV 0x00000002
@ -45,6 +48,8 @@
/* Are there modules to do something with? */
#define MULTIBOOT_INFO_MODS 0x00000008
#define MULTIBOOT_HIGH_MEM_BASE 0x100000
#ifndef __ASSEMBLY__
#include <minix/types.h>
@ -73,8 +78,8 @@ struct multiboot_info
/* Multiboot info version number */
u32_t flags;
/* Available memory from BIOS */
u32_t mem_lower;
u32_t mem_upper;
u32_t mem_lower_unused; /* minix uses memmap instead */
u32_t mem_upper_unused;
/* "root" partition */
u32_t boot_device;
/* Kernel command line */
@ -121,8 +126,18 @@ struct multiboot_mod_list
};
typedef struct multiboot_mod_list multiboot_module_t;
/* Buffer for multiboot parameters */
extern char multiboot_param_buf[];
#define MULTIBOOT_MEMORY_AVAILABLE 1
#define MULTIBOOT_MEMORY_RESERVED 2
struct multiboot_mmap_entry
{
u32_t size;
u64_t addr;
u64_t len;
#define MULTIBOOT_MEMORY_AVAILABLE 1
#define MULTIBOOT_MEMORY_RESERVED 2
u32_t type;
} __attribute__((packed));
typedef struct multiboot_mmap_entry multiboot_memory_map_t;
#endif /* __ASSEMBLY__ */
#endif /* __MULTIBOOT_H__ */


@ -49,9 +49,4 @@
#define PAGE_SIZE (1 << PAGE_SHIFT)
#define PAGE_MASK (PAGE_SIZE - 1)
/* As visible from the user space process, where is the top of the
* stack (first non-stack byte), when in paged mode?
*/
#define VM_STACKTOP 0x80000000
#endif /* _I386_VMPARAM_H_ */


@ -89,9 +89,6 @@
#define ROOT_SYS_PROC_NR RS_PROC_NR
#define ROOT_USR_PROC_NR INIT_PROC_NR
/* Number of processes contained in the system image. */
#define NR_BOOT_PROCS (NR_TASKS + LAST_SPECIAL_PROC_NR + 1)
/*===========================================================================*
* Kernel notification types *
*===========================================================================*/
@ -307,7 +304,6 @@
# define SYS_SIGSEND (KERNEL_CALL + 9) /* sys_sigsend() */
# define SYS_SIGRETURN (KERNEL_CALL + 10) /* sys_sigreturn() */
# define SYS_NEWMAP (KERNEL_CALL + 11) /* sys_newmap() */
# define SYS_MEMSET (KERNEL_CALL + 13) /* sys_memset() */
# define SYS_UMAP (KERNEL_CALL + 14) /* sys_umap() */
@ -533,16 +529,13 @@
#define SIG_MAP m2_l1 /* used by kernel to pass signal bit map */
#define SIG_CTXT_PTR m2_p1 /* pointer to info to restore signal context */
/* Field names for SYS_FORK, _EXEC, _EXIT, _NEWMAP, GETMCONTEXT, SETMCONTEXT.*/
/* Field names for SYS_FORK, _EXEC, _EXIT, GETMCONTEXT, SETMCONTEXT.*/
#define PR_ENDPT m1_i1 /* indicates a process */
#define PR_PRIORITY m1_i2 /* process priority */
#define PR_SLOT m1_i2 /* indicates a process slot */
#define PR_STACK_PTR m1_p1 /* used for stack ptr in sys_exec, sys_getsp */
#define PR_NAME_PTR m1_p2 /* tells where program name is for dmp */
#define PR_IP_PTR m1_p3 /* initial value for ip after exec */
#define PR_MEM_PTR m1_p1 /* tells where memory map is for sys_newmap
* and sys_fork
*/
#define PR_FORK_FLAGS m1_i3 /* optional flags for fork operation */
#define PR_FORK_MSGADDR m1_p1 /* reply message address of forked child */
#define PR_CTX_PTR m1_p1 /* pointer to mcontext_t structure */
@ -624,11 +617,8 @@
#define VMCTL_I386_GETCR3 13
#define VMCTL_MEMREQ_GET 14
#define VMCTL_MEMREQ_REPLY 15
#define VMCTL_INCSP 16
#define VMCTL_NOPAGEZERO 18
#define VMCTL_I386_KERNELLIMIT 19
#define VMCTL_I386_FREEPDE 23
#define VMCTL_ENABLE_PAGING 24
#define VMCTL_I386_INVLPG 25
#define VMCTL_FLUSHTLB 26
#define VMCTL_KERN_PHYSMAP 27
@ -636,6 +626,7 @@
#define VMCTL_SETADDRSPACE 29
#define VMCTL_VMINHIBIT_SET 30
#define VMCTL_VMINHIBIT_CLEAR 31
#define VMCTL_CLEARMAPCACHE 32
/* Codes and field names for SYS_SYSCTL. */
#define SYSCTL_CODE m1_i1 /* SYSCTL_CODE_* below */
@ -830,7 +821,7 @@
/* Parameters for the EXEC_NEWMEM call */
#define EXC_NM_PROC m1_i1 /* process that needs new map */
#define EXC_NM_PTR m1_p1 /* parameters in struct exec_newmem */
#define EXC_NM_PTR m1_p1 /* parameters in struct exec_info */
/* Results:
* the status will be in m_type.
* the top of the stack will be in m1_i1.


@ -57,18 +57,14 @@
#define SEGMENT_TYPE 0xFF00 /* bit mask to get segment type */
#define SEGMENT_INDEX 0x00FF /* bit mask to get segment index */
#define LOCAL_SEG 0x0000 /* flags indicating local memory segment */
#define NR_LOCAL_SEGS 3 /* # local segments per process (fixed) */
#define T 0 /* proc[i].mem_map[T] is for text */
#define D 1 /* proc[i].mem_map[D] is for data */
#define S 2 /* proc[i].mem_map[S] is for stack */
#define D_OBSOLETE 1 /* proc[i].mem_map[D] is for data */
#define PHYS_SEG 0x0400 /* flag indicating entire physical memory */
#define LOCAL_VM_SEG 0x1000 /* same as LOCAL_SEG, but with vm lookup */
#define VM_D (LOCAL_VM_SEG | D)
#define VM_T (LOCAL_VM_SEG | T)
#define MEM_GRANT 3
#define VIR_ADDR 1
#define VM_D (LOCAL_VM_SEG | VIR_ADDR)
#define VM_GRANT (LOCAL_VM_SEG | MEM_GRANT)
/* Labels used to disable code sections for different reasons. */
@ -76,6 +72,9 @@
#define FUTURE_CODE 0 /* new code to be activated + tested later */
#define TEMP_CODE 1 /* active code to be removed later */
/* Number of processes contained in the system image. */
#define NR_BOOT_PROCS (NR_TASKS + LAST_SPECIAL_PROC_NR + 1)
/* Process name length in the PM process table, including '\0'. */
#define PROC_NAME_LEN 16
@ -157,9 +156,6 @@
#define SERVARNAME "cttyline"
#define SERBAUDVARNAME "cttybaud"
/* Bits for the system property flags in boot image processes. */
#define PROC_FULLVM 0x100 /* VM sets and manages full pagetable */
/* Bits for s_flags in the privilege structure. */
#define PREEMPTIBLE 0x002 /* kernel tasks are not preemptible */
#define BILLABLE 0x004 /* some processes are not billable */


@ -31,7 +31,7 @@ struct sprof_sample {
struct sprof_proc {
endpoint_t proc;
char name[8];
char name[PROC_NAME_LEN];
};
#include <minix/types.h>


@ -37,9 +37,8 @@ int sys_abort(int how, ...);
int sys_enable_iop(endpoint_t proc_ep);
int sys_exec(endpoint_t proc_ep, char *ptr, char *aout, vir_bytes
initpc);
int sys_fork(endpoint_t parent, endpoint_t child, endpoint_t *, struct
mem_map *ptr, u32_t vm, vir_bytes *);
int sys_newmap(endpoint_t proc_ep, struct mem_map *ptr);
int sys_fork(endpoint_t parent, endpoint_t child, endpoint_t *,
u32_t vm, vir_bytes *);
int sys_clear(endpoint_t proc_ep);
int sys_exit(void);
int sys_trace(int req, endpoint_t proc_ep, long addr, long *data_p);


@ -1,5 +1,6 @@
#ifndef _TYPE_H
#define _TYPE_H
#include <machine/multiboot.h>
#ifndef _MINIX_SYS_CONFIG_H
#include <minix/sys_config.h>
@ -9,6 +10,9 @@
#include <minix/types.h>
#endif
#include <minix/const.h>
#include <minix/com.h>
#include <stdint.h>
/* Type definitions. */
@ -16,39 +20,12 @@ typedef unsigned int vir_clicks; /* virtual addr/length in clicks */
typedef unsigned long phys_bytes; /* physical addr/length in bytes */
typedef unsigned int phys_clicks; /* physical addr/length in clicks */
typedef int endpoint_t; /* process identifier */
typedef int32_t cp_grant_id_t; /* A grant ID. */
#if (_MINIX_CHIP == _CHIP_INTEL)
typedef long unsigned int vir_bytes; /* virtual addresses/lengths in bytes */
#endif
#if (_MINIX_CHIP == _CHIP_M68000)
typedef unsigned long vir_bytes;/* virtual addresses and lengths in bytes */
#endif
#if (_MINIX_CHIP == _CHIP_SPARC)
typedef unsigned long vir_bytes;/* virtual addresses and lengths in bytes */
#endif
/* Memory map for local text, stack, data segments. */
struct mem_map {
vir_clicks mem_vir; /* virtual address */
phys_clicks mem_phys; /* physical address */
vir_clicks mem_len; /* length */
};
/* Memory map for remote memory areas, e.g., for the RAM disk. */
struct far_mem {
int in_use; /* entry in use, unless zero */
phys_clicks mem_phys; /* physical address */
vir_clicks mem_len; /* length */
};
/* Structure for virtual copying by means of a vector with requests. */
struct vir_addr {
endpoint_t proc_nr_e;
int segment;
endpoint_t proc_nr_e; /* NONE for phys, otherwise process endpoint */
vir_bytes offset;
};
@ -99,27 +76,6 @@ struct sigmsg {
vir_bytes sm_stkptr; /* user stack pointer */
};
/* This is used to obtain system information through SYS_GETINFO. */
struct kinfo {
phys_bytes code_base; /* base of kernel code */
phys_bytes code_size;
phys_bytes data_base; /* base of kernel data */
phys_bytes data_size;
vir_bytes proc_addr; /* virtual address of process table */
phys_bytes _kmem_base; /* kernel memory layout (/dev/kmem) */
phys_bytes _kmem_size;
phys_bytes bootdev_base; /* boot device from boot image (/dev/boot) */
phys_bytes bootdev_size;
phys_bytes ramdev_base; /* boot device from boot image (/dev/boot) */
phys_bytes ramdev_size;
phys_bytes _params_base; /* parameters passed by boot monitor */
phys_bytes _params_size;
int nr_procs; /* number of user processes */
int nr_tasks; /* number of kernel tasks */
char release[6]; /* kernel release number */
char version[6]; /* kernel version number */
};
/* Load data accounted every this no. of seconds. */
#define _LOAD_UNIT_SECS 6 /* Changing this breaks ABI. */
@ -166,12 +122,55 @@ struct mem_range
phys_bytes mr_limit; /* Highest memory address in range */
};
/* List of boot-time processes set in kernel/table.c. */
struct boot_image {
int proc_nr; /* process number to use */
char proc_name[PROC_NAME_LEN]; /* name in process table */
endpoint_t endpoint; /* endpoint number when started */
phys_bytes start_addr; /* Where it's in memory */
phys_bytes len;
};
/* Memory chunks. */
struct memory {
phys_bytes base;
phys_bytes size;
};
/* This is used to obtain system information through SYS_GETINFO. */
#define MAXMEMMAP 40
typedef struct kinfo {
/* Straight multiboot-provided info */
multiboot_info_t mbi;
multiboot_module_t module_list[MULTIBOOT_MAX_MODS];
multiboot_memory_map_t memmap[MAXMEMMAP]; /* free mem list */
phys_bytes mem_high_phys;
int mmap_size;
/* Multiboot-derived */
int mods_with_kernel; /* no. of mods incl kernel */
int kern_mod; /* which one is kernel */
/* Minix stuff, started at bootstrap phase */
int freepde_start; /* lowest pde unused kernel pde */
char param_buf[MULTIBOOT_PARAM_BUF_SIZE];
/* Minix stuff */
struct kmessages *kmess;
int do_serial_debug; /* system serial output */
int serial_debug_baud; /* serial baud rate */
int minix_panicing; /* are we panicing? */
vir_bytes user_sp; /* where does kernel want stack set */
vir_bytes user_end; /* upper proc limit */
vir_bytes vir_kern_start; /* kernel addrspace starts */
vir_bytes bootstrap_start, bootstrap_len;
struct boot_image boot_procs[NR_BOOT_PROCS];
int nr_procs; /* number of user processes */
int nr_tasks; /* number of kernel tasks */
char release[6]; /* kernel release number */
char version[6]; /* kernel version number */
} kinfo_t;
#define STATICINIT(v, n) \
if(!(v)) { \
if(!((v) = alloc_contig(sizeof(*(v)) * (n), 0, NULL))) { \
@ -184,6 +183,8 @@ struct kmessages {
int km_next; /* next index to write */
int km_size; /* current size in buffer */
char km_buf[_KMESS_BUF_SIZE]; /* buffer for messages */
char kmess_buf[80*25]; /* printable copy of message buffer */
int blpos; /* kmess_buf position */
};
#include <minix/config.h>


@ -55,7 +55,6 @@ struct vm_usage_info {
};
struct vm_region_info {
int vri_seg; /* segment of virtual region (T or D) */
vir_bytes vri_addr; /* base address of region */
vir_bytes vri_length; /* length of region */
int vri_prot; /* protection flags (PROT_) */


@ -5,16 +5,18 @@ PROG= kernel
.include "arch/${MACHINE_ARCH}/Makefile.inc"
SRCS+= clock.c cpulocals.c interrupt.c main.c proc.c start.c system.c \
SRCS+= clock.c cpulocals.c interrupt.c main.c proc.c system.c \
table.c utility.c
DPADD+= ${LIBTIMERS} ${LIBSYS} ${LIBEXEC}
LINKERSCRIPT=${.CURDIR}/arch/${ARCH}/kernel.lds
DPADD+= ${LIBTIMERS} ${LIBSYS} ${LIBEXEC} $(LINKERSCRIPT)
LDADD+= -ltimers -lsys -lexec
CFLAGS += -D__kernel__
CFLAGS += -D__kernel__
CPPFLAGS+= -fno-stack-protector -D_NETBSD_SOURCE
LDFLAGS+= -T ${.CURDIR}/arch/${MACHINE_ARCH}/kernel.lds
LDFLAGS+= -T $(LINKERSCRIPT)
LDFLAGS+= -nostdlib -L${DESTDIR}/${LIBDIR}
LDADD+= -lminlib
DPADD+= ${LIBMINLIB}
@ -87,3 +89,5 @@ extracted-mtype.h: extract-mtype.sh ../include/minix/com.h
clean:
rm -f extracted-errno.h extracted-mfield.h extracted-mtype.h


@ -4,10 +4,37 @@
HERE=${.CURDIR}/arch/${MACHINE_ARCH}
.PATH: ${HERE}
# objects we want unpaged from -lminlib, -lminc
MINLIB_OBJS_UNPAGED=_cpufeature.o _cpuid.o get_bp.o
MINC_OBJS_UNPAGED=strcat.o strlen.o memcpy.o strcpy.o strncmp.o memset.o \
memmove.o strcmp.o atoi.o ctype_.o _stdfile.o strtol.o _errno.o errno.o
SYS_OBJS_UNPAGED=kprintf.o vprintf.o assert.o stacktrace.o
# some object files we give a symbol prefix (or namespace) of __k_unpaged_
# that must live in their own unique namespace.
#
.for UNPAGED_OBJ in head.o pre_init.o direct_tty_utils.o io_outb.o \
io_inb.o pg_utils.o klib.o utility.o arch_reset.o \
$(MINLIB_OBJS_UNPAGED) $(MINC_OBJS_UNPAGED) $(SYS_OBJS_UNPAGED)
unpaged_${UNPAGED_OBJ}: ${UNPAGED_OBJ}
objcopy --prefix-symbols=__k_unpaged_ ${UNPAGED_OBJ} unpaged_${UNPAGED_OBJ}
UNPAGED_OBJS += unpaged_${UNPAGED_OBJ}
.endfor
# we have to extract some object files from libminc.a and libminlib.a
$(MINLIB_OBJS_UNPAGED) $(MINC_OBJS_UNPAGED) $(SYS_OBJS_UNPAGED): $(LIBMINLIB) $(LIBMINC) $(LIBSYS)
ar x $(LIBMINLIB) $(MINLIB_OBJS_UNPAGED)
ar x $(LIBMINC) $(MINC_OBJS_UNPAGED)
ar x $(LIBSYS) $(SYS_OBJS_UNPAGED)
SRCS+= mpx.S arch_clock.c arch_do_vmctl.c arch_system.c \
do_iopenable.c do_readbios.c do_sdevio.c exception.c i8259.c io_inb.S \
io_inl.S io_intr.S io_inw.S io_outb.S io_outl.S io_outw.S klib.S klib16.S memory.c multiboot.S \
oxpcie.c pre_init.c protect.c
io_inl.S io_intr.S io_inw.S io_outb.S io_outl.S io_outw.S klib.S klib16.S memory.c \
oxpcie.c protect.c direct_tty_utils.c arch_reset.c \
pg_utils.c
OBJS.kernel+= ${UNPAGED_OBJS}
.if ${USE_ACPI} != "no"
SRCS+= acpi.c
@ -28,7 +55,7 @@ SRCS+= arch_watchdog.c
CPPFLAGS+= -DUSE_WATCHDOG
.endif
apic_asm.d klib.d mpx.d: procoffsets.h
apic_asm.d klib.d mpx.d head.d: procoffsets.h
# It's OK to hardcode the arch as i386 here as this and procoffsets.cf
# are i386-specific.


@ -361,8 +361,6 @@ void ioapic_disable_all(void)
apic_idt_init(TRUE); /* reset */
idt_reload();
intr_init(INTS_ORIG, 0); /* no auto eoi */
}
static void ioapic_disable_irq(unsigned irq)
@ -649,12 +647,12 @@ static int lapic_enable_in_msr(void)
* update it
*/
addr = (msr_lo >> 12) | ((msr_hi & 0xf) << 20);
if (phys2vir(addr) != (lapic_addr >> 12)) {
if (addr != (lapic_addr >> 12)) {
if (msr_hi & 0xf) {
printf("ERROR : APIC address needs more then 32 bits\n");
return 0;
}
lapic_addr = phys2vir(msr_lo & ~((1 << 12) - 1));
lapic_addr = msr_lo & ~((1 << 12) - 1);
}
#endif
@ -848,7 +846,7 @@ static void lapic_set_dummy_handlers(void)
handler += vect * LAPIC_INTR_DUMMY_HANDLER_SIZE;
for(; handler < &lapic_intr_dummy_handles_end;
handler += LAPIC_INTR_DUMMY_HANDLER_SIZE) {
int_gate(vect++, (vir_bytes) handler,
int_gate_idt(vect++, (vir_bytes) handler,
PRESENT | INT_GATE_TYPE |
(INTR_PRIVILEGE << DPL_SHIFT));
}
@ -862,14 +860,16 @@ void apic_idt_init(const int reset)
/* Set up idt tables for smp mode.
*/
int is_bsp = is_boot_apic(apicid());
int is_bsp;
if (reset) {
idt_copy_vectors(gate_table_pic);
idt_copy_vectors_pic();
idt_copy_vectors(gate_table_common);
return;
}
is_bsp = is_boot_apic(apicid());
#ifdef APIC_DEBUG
if (is_bsp)
printf("APIC debugging is enabled\n");
@ -880,7 +880,7 @@ void apic_idt_init(const int reset)
if (ioapic_enabled)
idt_copy_vectors(gate_table_ioapic);
else
idt_copy_vectors(gate_table_pic);
idt_copy_vectors_pic();
idt_copy_vectors(gate_table_common);
@ -899,7 +899,7 @@ void apic_idt_init(const int reset)
if (is_bsp) {
BOOT_VERBOSE(printf("Initiating APIC timer handler\n"));
/* register the timer interrupt handler for this CPU */
int_gate(APIC_TIMER_INT_VECTOR, (vir_bytes) lapic_timer_int_handler,
int_gate_idt(APIC_TIMER_INT_VECTOR, (vir_bytes) lapic_timer_int_handler,
PRESENT | INT_GATE_TYPE | (INTR_PRIVILEGE << DPL_SHIFT));
}
@ -916,7 +916,7 @@ static int acpi_get_ioapics(struct io_apic * ioa, unsigned * nioa, unsigned max)
break;
ioa[n].id = acpi_ioa->id;
ioa[n].addr = phys2vir(acpi_ioa->address);
ioa[n].addr = acpi_ioa->address;
ioa[n].paddr = (phys_bytes) acpi_ioa->address;
ioa[n].gsi_base = acpi_ioa->global_int_base;
ioa[n].pins = ((ioapic_read(ioa[n].addr,
@ -936,13 +936,15 @@ int detect_ioapics(void)
{
int status;
if (machine.acpi_rsdp)
if (machine.acpi_rsdp) {
status = acpi_get_ioapics(io_apic, &nioapics, MAX_NR_IOAPICS);
else
} else {
status = 0;
}
if (!status) {
/* try something different like MPS */
}
return status;
}
@ -1113,7 +1115,7 @@ int apic_single_cpu_init(void)
if (!cpu_feature_apic_on_chip())
return 0;
lapic_addr = phys2vir(LOCAL_APIC_DEF_ADDR);
lapic_addr = LOCAL_APIC_DEF_ADDR;
ioapic_enabled = 0;
if (!lapic_enable(0)) {
@ -1234,8 +1236,6 @@ void ioapic_reset_pic(void)
* master and slave. */
outb(0x22, 0x70);
outb(0x23, 0x00);
intr_init(INTS_ORIG, 0); /* no auto eoi */
}
static void irq_lapic_status(int irq)


@ -8,10 +8,31 @@
*/
#include "kernel/system.h"
#include <assert.h>
#include <minix/type.h>
#include "arch_proto.h"
extern phys_bytes video_mem_vaddr;
extern char *video_mem;
static void setcr3(struct proc *p, u32_t cr3, u32_t *v)
{
/* Set process CR3. */
p->p_seg.p_cr3 = cr3;
assert(p->p_seg.p_cr3);
p->p_seg.p_cr3_v = v;
if(p == get_cpulocal_var(ptproc)) {
write_cr3(p->p_seg.p_cr3);
}
if(p->p_nr == VM_PROC_NR) {
if (arch_enable_paging(p) != OK)
panic("arch_enable_paging failed");
}
RTS_UNSET(p, RTS_VMINHIBIT);
}
/*===========================================================================*
* arch_do_vmctl *
*===========================================================================*/
@ -25,37 +46,8 @@ struct proc *p;
m_ptr->SVMCTL_VALUE = p->p_seg.p_cr3;
return OK;
case VMCTL_SETADDRSPACE:
/* Set process CR3. */
if(m_ptr->SVMCTL_PTROOT) {
p->p_seg.p_cr3 = m_ptr->SVMCTL_PTROOT;
p->p_seg.p_cr3_v = (u32_t *) m_ptr->SVMCTL_PTROOT_V;
p->p_misc_flags |= MF_FULLVM;
if(p == get_cpulocal_var(ptproc)) {
write_cr3(p->p_seg.p_cr3);
}
} else {
p->p_seg.p_cr3 = 0;
p->p_seg.p_cr3_v = NULL;
p->p_misc_flags &= ~MF_FULLVM;
}
RTS_UNSET(p, RTS_VMINHIBIT);
setcr3(p, m_ptr->SVMCTL_PTROOT, (u32_t *) m_ptr->SVMCTL_PTROOT_V);
return OK;
case VMCTL_INCSP:
/* Increase process SP. */
p->p_reg.sp += m_ptr->SVMCTL_VALUE;
return OK;
case VMCTL_I386_KERNELLIMIT:
{
int r;
/* VM wants kernel to increase its segment. */
r = prot_set_kern_seg_limit(m_ptr->SVMCTL_VALUE);
return r;
}
case VMCTL_I386_FREEPDE:
{
i386_freepde(m_ptr->SVMCTL_VALUE);
return OK;
}
case VMCTL_FLUSHTLB:
{
reload_cr3();
@ -66,7 +58,6 @@ struct proc *p;
i386_invlpg(m_ptr->SVMCTL_VALUE);
return OK;
}
}


@ -0,0 +1,154 @@
#include "kernel/kernel.h"
#include <unistd.h>
#include <ctype.h>
#include <string.h>
#include <machine/cmos.h>
#include <machine/bios.h>
#include <machine/cpu.h>
#include <minix/portio.h>
#include <minix/cpufeature.h>
#include <assert.h>
#include <signal.h>
#include <machine/vm.h>
#include <minix/u64.h>
#include "archconst.h"
#include "arch_proto.h"
#include "serial.h"
#include "oxpcie.h"
#include "kernel/proc.h"
#include "kernel/debug.h"
#include "direct_utils.h"
#include <machine/multiboot.h>
#define KBCMDP 4 /* kbd controller port (O) */
#define KBC_PULSE0 0xfe /* pulse output bit 0 */
#define IO_KBD 0x060 /* 8042 Keyboard */
int cpu_has_tsc;
void
reset(void)
{
uint8_t b;
/*
* The keyboard controller has 4 random output pins, one of which is
* connected to the RESET pin on the CPU in many PCs. We tell the
* keyboard controller to pulse this line a couple of times.
*/
outb(IO_KBD + KBCMDP, KBC_PULSE0);
busy_delay_ms(100);
outb(IO_KBD + KBCMDP, KBC_PULSE0);
busy_delay_ms(100);
/*
* Attempt to force a reset via the Reset Control register at
* I/O port 0xcf9. Bit 2 forces a system reset when it
* transitions from 0 to 1. Bit 1 selects the type of reset
* to attempt: 0 selects a "soft" reset, and 1 selects a
* "hard" reset. We try a "hard" reset. The first write sets
* bit 1 to select a "hard" reset and clears bit 2. The
* second write forces a 0 -> 1 transition in bit 2 to trigger
* a reset.
*/
outb(0xcf9, 0x2);
outb(0xcf9, 0x6);
busy_delay_ms(500); /* wait 0.5 sec to see if that did it */
/*
* Attempt to force a reset via the Fast A20 and Init register
* at I/O port 0x92. Bit 1 serves as an alternate A20 gate.
* Bit 0 asserts INIT# when set to 1. We are careful to only
* preserve bit 1 while setting bit 0. We also must clear bit
* 0 before setting it if it isn't already clear.
*/
b = inb(0x92);
if (b != 0xff) {
if ((b & 0x1) != 0)
outb(0x92, b & 0xfe);
outb(0x92, b | 0x1);
busy_delay_ms(500); /* wait 0.5 sec to see if that did it */
}
/* Triple fault */
x86_triplefault();
/* Give up on resetting */
while(1) {
;
}
}
__dead void arch_shutdown(int how)
{
unsigned char unused_ch;
/* Mask all interrupts, including the clock. */
outb( INT_CTLMASK, ~0);
/* Empty buffer */
while(direct_read_char(&unused_ch))
;
if(kinfo.minix_panicing) {
/* Printing is done synchronously over serial. */
if (kinfo.do_serial_debug)
reset();
/* Print accumulated diagnostics buffer and reset. */
direct_cls();
direct_print("Minix panic. System diagnostics buffer:\n\n");
direct_print(kmess.kmess_buf);
direct_print("\nSystem has panicked, press any key to reboot");
while (!direct_read_char(&unused_ch))
;
reset();
}
if (how == RBT_DEFAULT) {
how = RBT_RESET;
}
switch (how) {
case RBT_HALT:
/* Stop */
for (; ; ) halt_cpu();
NOT_REACHABLE;
default:
case RBT_REBOOT:
case RBT_RESET:
/* Reset the system by forcing a processor shutdown.
* First stop the BIOS memory test by setting a soft
* reset flag.
*/
reset();
NOT_REACHABLE;
}
NOT_REACHABLE;
}
#ifdef DEBUG_SERIAL
void ser_putc(char c)
{
int i;
int lsr, thr;
#if CONFIG_OXPCIE
oxpcie_putc(c);
#else
lsr= COM1_LSR;
thr= COM1_THR;
for (i= 0; i<100000; i++)
{
if (inb( lsr) & LSR_THRE)
break;
}
outb( thr, c);
#endif
}
#endif


@ -32,7 +32,7 @@ void trampoline(void);
* 16-bit mode
*/
extern volatile u32_t __ap_id;
extern volatile struct segdesc_s __ap_gdt, __ap_idt;
extern volatile struct desctableptr_s __ap_gdt, __ap_idt;
extern void * __trampoline_end;
extern u32_t busclock[CONFIG_MAX_CPUS];
@ -93,6 +93,8 @@ static phys_bytes copy_trampoline(void)
return tramp_base;
}
extern struct desctableptr_s gdt_desc, idt_desc;
static void smp_start_aps(void)
{
/*
@ -111,8 +113,8 @@ static void smp_start_aps(void)
outb(RTC_IO, 0xA);
/* prepare gdt and idt for the new cpus */
__ap_gdt = gdt[GDT_INDEX];
__ap_idt = gdt[IDT_INDEX];
__ap_gdt = gdt_desc;
__ap_idt = idt_desc;
if (!(trampoline_base = copy_trampoline())) {
printf("Copying trampoline code failed, cannot boot SMP\n");
@ -136,7 +138,8 @@ static void smp_start_aps(void)
}
__ap_id = cpu;
phys_copy(vir2phys(__ap_id), __ap_id_phys, sizeof(__ap_id));
phys_copy(vir2phys((void *) &__ap_id),
__ap_id_phys, sizeof(__ap_id));
mfence();
if (apic_send_init_ipi(cpu, trampoline_base) ||
apic_send_startup_ipi(cpu, trampoline_base)) {
@ -216,7 +219,7 @@ static void ap_finish_booting(void)
/* inform the world of our presence. */
ap_cpu_ready = cpu;
while(!i386_paging_enabled)
while(!bootstrap_pagetable_done)
arch_pause();
/*
@ -232,7 +235,8 @@ static void ap_finish_booting(void)
* we must load some page tables before we turn paging on. As VM is
* always present we use those
*/
segmentation2paging(proc_addr(VM_PROC_NR));
pg_load(); /* load bootstrap pagetable built by BSP */
vm_enable_paging();
printf("CPU %d paging is on\n", cpu);
@ -301,7 +305,7 @@ void smp_init (void)
goto uniproc_fallback;
}
lapic_addr = phys2vir(LOCAL_APIC_DEF_ADDR);
lapic_addr = LOCAL_APIC_DEF_ADDR;
ioapic_enabled = 0;
tss_init_all();
@ -347,7 +351,7 @@ uniproc_fallback:
apic_idt_init(1); /* Reset to PIC idt ! */
idt_reload();
smp_reinit_vars (); /* revert to a single proc system. */
intr_init (INTS_MINIX, 0); /* no auto eoi */
intr_init(0); /* no auto eoi */
printf("WARNING : SMP initialization failed\n");
}


@ -21,7 +21,7 @@
#include "oxpcie.h"
#include "kernel/proc.h"
#include "kernel/debug.h"
#include "mb_utils.h"
#include "direct_utils.h"
#include <machine/multiboot.h>
#include "glo.h"
@ -36,10 +36,6 @@
static int osfxsr_feature; /* FXSAVE/FXRSTOR instructions support (SSEx) */
extern __dead void poweroff_jmp();
extern void poweroff16();
extern void poweroff16_end();
/* set MP and NE flags to handle FPU exceptions in native mode. */
#define CR0_MP_NE 0x0022
/* set CR4.OSFXSR[bit 9] if FXSR is supported. */
@ -57,142 +53,6 @@ static void ser_dump_proc_cpu(void);
static void ser_init(void);
#endif
#define KBCMDP 4 /* kbd controller port (O) */
#define KBC_PULSE0 0xfe /* pulse output bit 0 */
#define IO_KBD 0x060 /* 8042 Keyboard */
void
reset(void)
{
uint8_t b;
/*
* The keyboard controller has 4 random output pins, one of which is
* connected to the RESET pin on the CPU in many PCs. We tell the
* keyboard controller to pulse this line a couple of times.
*/
outb(IO_KBD + KBCMDP, KBC_PULSE0);
busy_delay_ms(100);
outb(IO_KBD + KBCMDP, KBC_PULSE0);
busy_delay_ms(100);
/*
* Attempt to force a reset via the Reset Control register at
* I/O port 0xcf9. Bit 2 forces a system reset when it
* transitions from 0 to 1. Bit 1 selects the type of reset
* to attempt: 0 selects a "soft" reset, and 1 selects a
* "hard" reset. We try a "hard" reset. The first write sets
* bit 1 to select a "hard" reset and clears bit 2. The
* second write forces a 0 -> 1 transition in bit 2 to trigger
* a reset.
*/
outb(0xcf9, 0x2);
outb(0xcf9, 0x6);
busy_delay_ms(500); /* wait 0.5 sec to see if that did it */
/*
* Attempt to force a reset via the Fast A20 and Init register
* at I/O port 0x92. Bit 1 serves as an alternate A20 gate.
* Bit 0 asserts INIT# when set to 1. We are careful to only
* preserve bit 1 while setting bit 0. We also must clear bit
* 0 before setting it if it isn't already clear.
*/
b = inb(0x92);
if (b != 0xff) {
if ((b & 0x1) != 0)
outb(0x92, b & 0xfe);
outb(0x92, b | 0x1);
busy_delay_ms(500); /* wait 0.5 sec to see if that did it */
}
/* Triple fault */
x86_triplefault();
/* Give up on resetting */
while(1) {
;
}
}
static __dead void arch_bios_poweroff(void)
{
u32_t cr0;
/* Disable paging */
cr0 = read_cr0();
cr0 &= ~I386_CR0_PG;
write_cr0(cr0);
/* Copy 16-bit poweroff code to below 1M */
phys_copy(
(u32_t)&poweroff16,
BIOS_POWEROFF_ENTRY,
(u32_t)&poweroff16_end-(u32_t)&poweroff16);
poweroff_jmp();
}
int cpu_has_tsc;
__dead void arch_shutdown(int how)
{
vm_stop();
/* Mask all interrupts, including the clock. */
outb( INT_CTLMASK, ~0);
if(minix_panicing) {
unsigned char unused_ch;
/* We're panicing? Then retrieve and decode currently
* loaded segment selectors.
*/
printseg("cs: ", 1, get_cpulocal_var(proc_ptr), read_cs());
printseg("ds: ", 0, get_cpulocal_var(proc_ptr), read_ds());
if(read_ds() != read_ss()) {
printseg("ss: ", 0, NULL, read_ss());
}
/* Printing is done synchronously over serial. */
if (do_serial_debug)
reset();
/* Print accumulated diagnostics buffer and reset. */
mb_cls();
mb_print("Minix panic. System diagnostics buffer:\n\n");
mb_print(kmess_buf);
mb_print("\nSystem has panicked, press any key to reboot");
while (!mb_read_char(&unused_ch))
;
reset();
}
if (how == RBT_DEFAULT) {
how = RBT_RESET;
}
switch (how) {
case RBT_HALT:
/* Poweroff without boot monitor */
arch_bios_poweroff();
NOT_REACHABLE;
case RBT_PANIC:
/* Allow user to read panic message */
for (; ; ) halt_cpu();
NOT_REACHABLE;
default:
case RBT_REBOOT:
case RBT_RESET:
/* Reset the system by forcing a processor shutdown.
* First stop the BIOS memory test by setting a soft
* reset flag.
*/
reset();
NOT_REACHABLE;
}
NOT_REACHABLE;
}
void fpu_init(void)
{
unsigned short cw, sw;
@ -288,19 +148,38 @@ void save_fpu(struct proc *pr)
*/
static char fpu_state[NR_PROCS][FPU_XFP_SIZE] __aligned(FPUALIGN);
void arch_proc_init(int nr, struct proc *pr)
void arch_proc_reset(struct proc *pr)
{
if(nr < 0) return;
char *v;
char *v = NULL;
assert(nr < NR_PROCS);
assert(pr->p_nr < NR_PROCS);
v = fpu_state[nr];
if(pr->p_nr >= 0) {
v = fpu_state[pr->p_nr];
/* verify alignment */
assert(!((vir_bytes)v % FPUALIGN));
/* initialize state */
memset(v, 0, FPU_XFP_SIZE);
}
/* Clear process state. */
memset(&pr->p_reg, 0, sizeof(pr->p_reg));
if(iskerneln(pr->p_nr))
pr->p_reg.psw = INIT_TASK_PSW;
else
pr->p_reg.psw = INIT_PSW;
/* verify alignment */
assert(!((vir_bytes)v % FPUALIGN));
pr->p_seg.fpu_state = v;
/* Initialize the fundamentals that are (initially) the same for all
* processes - the segment selectors it gets to use.
*/
pr->p_reg.cs = USER_CS_SELECTOR;
pr->p_reg.gs =
pr->p_reg.fs =
pr->p_reg.ss =
pr->p_reg.es =
pr->p_reg.ds = USER_DS_SELECTOR;
}
int restore_fpu(struct proc *pr)
@ -362,18 +241,6 @@ void cpu_identify(void)
void arch_init(void)
{
#ifdef USE_APIC
/*
* this is setting kernel segments to cover most of the phys memory. The
* value is high enough to reach local APIC and IOAPICs before paging is
* turned on.
*/
prot_set_kern_seg_limit(0xfff00000);
reload_ds();
#endif
idt_init();
/* FIXME stupid a.out
* align the stacks in the stack are to the K_STACK_SIZE which is a
* power of 2
@ -405,29 +272,12 @@ void arch_init(void)
BOOT_VERBOSE(printf("APIC not present, using legacy PIC\n"));
}
#endif
/* Reserve some BIOS ranges */
cut_memmap(&kinfo, BIOS_MEM_BEGIN, BIOS_MEM_END);
cut_memmap(&kinfo, BASE_MEM_TOP, UPPER_MEM_END);
}
#ifdef DEBUG_SERIAL
void ser_putc(char c)
{
int i;
int lsr, thr;
#if CONFIG_OXPCIE
oxpcie_putc(c);
#else
lsr= COM1_LSR;
thr= COM1_THR;
for (i= 0; i<100000; i++)
{
if (inb( lsr) & LSR_THRE)
break;
}
outb( thr, c);
#endif
}
/*===========================================================================*
* do_ser_debug *
*===========================================================================*/
@ -484,22 +334,6 @@ static void ser_dump_queues(void)
#endif
}
static void ser_dump_segs(void)
{
struct proc *pp;
for (pp= BEG_PROC_ADDR; pp < END_PROC_ADDR; pp++)
{
if (isemptyp(pp))
continue;
printf("%d: %s ep %d\n", proc_nr(pp), pp->p_name, pp->p_endpoint);
printseg("cs: ", 1, pp, pp->p_reg.cs);
printseg("ds: ", 0, pp, pp->p_reg.ds);
if(pp->p_reg.ss != pp->p_reg.ds) {
printseg("ss: ", 0, pp, pp->p_reg.ss);
}
}
}
#ifdef CONFIG_SMP
static void dump_bkl_usage(void)
{
@ -548,9 +382,6 @@ static void ser_debug(const int c)
case '2':
ser_dump_queues();
break;
case '3':
ser_dump_segs();
break;
#ifdef CONFIG_SMP
case '4':
ser_dump_proc_cpu();
@ -580,6 +411,7 @@ static void ser_debug(const int c)
serial_debug_active = 0;
}
#if DEBUG_SERIAL
void ser_dump_proc()
{
struct proc *pp;
@ -650,25 +482,6 @@ void arch_ack_profile_clock(void)
#endif
/* Saved by mpx386.s into these variables. */
u32_t params_size, params_offset, mon_ds;
int arch_get_params(char *params, int maxsize)
{
phys_copy(seg2phys(mon_ds) + params_offset, vir2phys(params),
MIN(maxsize, params_size));
params[maxsize-1] = '\0';
return OK;
}
int arch_set_params(char *params, int size)
{
if(size > params_size)
return E2BIG;
phys_copy(vir2phys(params), seg2phys(mon_ds) + params_offset, size);
return OK;
}
void arch_do_syscall(struct proc *proc)
{
/* do_ipc assumes that it's running because of the current process */
@ -691,6 +504,12 @@ struct proc * arch_finish_switch_to_user(void)
/* set pointer to the process to run on the stack */
p = get_cpulocal_var(proc_ptr);
*((reg_t *)stk) = (reg_t) p;
/* Make sure IF is set in FLAGS so that interrupts won't be disabled
 * once p's context is restored; it should never be possible for a
 * process to run with interrupts off.
 */
assert(p->p_reg.psw & (1L << 9));
return p;
}
@ -734,14 +553,14 @@ static void ser_init(void)
unsigned divisor;
/* keep BIOS settings if cttybaud is not set */
if (serial_debug_baud <= 0) return;
if (kinfo.serial_debug_baud <= 0) return;
/* set DLAB to make baud accessible */
lcr = LCR_8BIT | LCR_1STOP | LCR_NPAR;
outb(COM1_LCR, lcr | LCR_DLAB);
/* set baud rate */
divisor = UART_BASE_FREQ / serial_debug_baud;
divisor = UART_BASE_FREQ / kinfo.serial_debug_baud;
if (divisor < 1) divisor = 1;
if (divisor > 65535) divisor = 65535;


@ -0,0 +1,140 @@
#include "kernel.h"
#include <minix/minlib.h>
#include <minix/const.h>
#include <minix/cpufeature.h>
#include <minix/types.h>
#include <minix/type.h>
#include <minix/com.h>
#include <sys/param.h>
#include <machine/partition.h>
#include <libexec.h>
#include "string.h"
#include "arch_proto.h"
#include "libexec.h"
#include "direct_utils.h"
#include "serial.h"
#include "glo.h"
#include <machine/multiboot.h>
/* Give non-zero initial values to keep these variables out of BSS */
static int print_line = 1, print_col = 1;
#include <sys/video.h>
extern char *video_mem;
#define VIDOFFSET(line, col) ((line) * MULTIBOOT_CONSOLE_COLS * 2 + (col) * 2)
#define VIDSIZE VIDOFFSET(MULTIBOOT_CONSOLE_LINES-1,MULTIBOOT_CONSOLE_COLS-1)
void direct_put_char(char c, int line, int col)
{
int offset = VIDOFFSET(line, col);
video_mem[offset] = c;
video_mem[offset+1] = 0x07; /* grey-on-black */
}
static char direct_get_char(int line, int col)
{
return video_mem[VIDOFFSET(line, col)];
}
void direct_cls(void)
{
/* Clear screen */
int i,j;
for(i = 0; i < MULTIBOOT_CONSOLE_COLS; i++)
for(j = 0; j < MULTIBOOT_CONSOLE_LINES; j++)
direct_put_char(' ', j, i);
print_line = print_col = 0;
/* Tell video hardware origin is 0. */
outb(C_6845+INDEX, VID_ORG);
outb(C_6845+DATA, 0);
outb(C_6845+INDEX, VID_ORG+1);
outb(C_6845+DATA, 0);
}
static void direct_scroll_up(int lines)
{
int i, j;
for (i = 0; i < MULTIBOOT_CONSOLE_LINES; i++ ) {
for (j = 0; j < MULTIBOOT_CONSOLE_COLS; j++ ) {
char c = 0;
if(i < MULTIBOOT_CONSOLE_LINES-lines)
c = direct_get_char(i + lines, j);
direct_put_char(c, i, j);
}
}
print_line-= lines;
}
void direct_print_char(char c)
{
while (print_line >= MULTIBOOT_CONSOLE_LINES)
direct_scroll_up(1);
#define TABWIDTH 8
if(c == '\t') {
if(print_col >= MULTIBOOT_CONSOLE_COLS - TABWIDTH) {
c = '\n';
} else {
do {
direct_put_char(' ', print_line, print_col++);
} while(print_col % TABWIDTH);
return;
}
}
if (c == '\n') {
while (print_col < MULTIBOOT_CONSOLE_COLS)
direct_put_char(' ', print_line, print_col++);
print_line++;
print_col = 0;
return;
}
direct_put_char(c, print_line, print_col++);
if (print_col >= MULTIBOOT_CONSOLE_COLS) {
print_line++;
print_col = 0;
}
while (print_line >= MULTIBOOT_CONSOLE_LINES)
direct_scroll_up(1);
}
void direct_print(const char *str)
{
while (*str) {
direct_print_char(*str);
str++;
}
}
/* Standard and AT keyboard. (PS/2 MCA implies AT throughout.) */
#define KEYBD 0x60 /* I/O port for keyboard data */
#define KB_STATUS 0x64 /* I/O port for status on AT */
#define KB_OUT_FULL 0x01 /* status bit set when keypress char pending */
#define KB_AUX_BYTE 0x20 /* Auxiliary Device Output Buffer Full */
int direct_read_char(unsigned char *ch)
{
unsigned long b, sb;
sb = inb(KB_STATUS);
if (!(sb & KB_OUT_FULL)) {
return 0;
}
b = inb(KEYBD);
if (!(sb & KB_AUX_BYTE))
return 1;
return 0;
}


@ -18,8 +18,6 @@ int do_readbios(struct proc * caller, message * m_ptr)
struct vir_addr src, dst;
vir_bytes len = m_ptr->RDB_SIZE, limit;
src.segment = PHYS_SEG;
dst.segment = D;
src.offset = m_ptr->RDB_ADDR;
dst.offset = (vir_bytes) m_ptr->RDB_BUF;
src.proc_nr_e = NONE;


@ -29,7 +29,7 @@ int do_sdevio(struct proc * caller, message *m_ptr)
endpoint_t proc_nr_e = m_ptr->DIO_VEC_ENDPT;
vir_bytes count = m_ptr->DIO_VEC_SIZE;
long port = m_ptr->DIO_PORT;
phys_bytes phys_buf;
phys_bytes vir_buf;
int i, req_type, req_dir, size, nr_io_range;
struct priv *privp;
struct io_range *iorp;
@ -79,11 +79,7 @@ int do_sdevio(struct proc * caller, message *m_ptr)
if(!isokendpt(newep, &proc_nr))
return(EINVAL);
destproc = proc_addr(proc_nr);
if ((phys_buf = umap_local(destproc, D,
(vir_bytes) newoffset, count)) == 0) {
printf("do_sdevio: umap_local failed\n");
return(EFAULT);
}
vir_buf = newoffset;
} else {
if(proc_nr != _ENDPOINT_P(caller->p_endpoint))
{
@ -92,9 +88,7 @@ int do_sdevio(struct proc * caller, message *m_ptr)
return EPERM;
}
/* Get and check physical address. */
if ((phys_buf = umap_local(proc_addr(proc_nr), D,
(vir_bytes) m_ptr->DIO_VEC_ADDR, count)) == 0)
return(EFAULT);
vir_buf = (phys_bytes) m_ptr->DIO_VEC_ADDR;
destproc = proc_addr(proc_nr);
}
/* current process must be target for phys_* to be OK */
@ -139,16 +133,16 @@ int do_sdevio(struct proc * caller, message *m_ptr)
/* Perform device I/O for bytes and words. Longs are not supported. */
if (req_dir == _DIO_INPUT) {
switch (req_type) {
case _DIO_BYTE: phys_insb(port, phys_buf, count); break;
case _DIO_WORD: phys_insw(port, phys_buf, count); break;
case _DIO_BYTE: phys_insb(port, vir_buf, count); break;
case _DIO_WORD: phys_insw(port, vir_buf, count); break;
default:
retval = EINVAL;
goto return_error;
}
} else if (req_dir == _DIO_OUTPUT) {
switch (req_type) {
case _DIO_BYTE: phys_outsb(port, phys_buf, count); break;
case _DIO_WORD: phys_outsw(port, phys_buf, count); break;
case _DIO_BYTE: phys_outsb(port, vir_buf, count); break;
case _DIO_WORD: phys_outsw(port, vir_buf, count); break;
default:
retval = EINVAL;
goto return_error;


@ -98,27 +98,18 @@ static void pagefault( struct proc *pr,
inkernel_disaster(pr, frame, NULL, is_nested);
}
/* System processes that don't have their own page table can't
* have page faults. VM does have its own page table but also
* can't have page faults (because VM has to handle them).
*/
if((pr->p_endpoint <= INIT_PROC_NR &&
!(pr->p_misc_flags & MF_FULLVM)) || pr->p_endpoint == VM_PROC_NR) {
/* VM can't handle page faults. */
if(pr->p_endpoint == VM_PROC_NR) {
/* Page fault we can't / don't want to
* handle.
*/
printf("pagefault for process %d ('%s') on CPU %d, "
printf("pagefault for VM on CPU %d, "
"pc = 0x%x, addr = 0x%x, flags = 0x%x, is_nested %d\n",
pr->p_endpoint, pr->p_name, cpuid, pr->p_reg.pc,
pagefaultcr2, frame->errcode, is_nested);
if(!is_nested) {
printf("process vir addr of pagefault is 0x%lx\n",
pagefaultcr2 -
(pr->p_memmap[D].mem_phys << CLICK_SHIFT));
}
cpuid, pr->p_reg.pc, pagefaultcr2, frame->errcode,
is_nested);
proc_stacktrace(pr);
printf("pc of pagefault: 0x%lx\n", frame->eip);
cause_sig(proc_nr(pr), SIGSEGV);
panic("pagefault in VM");
return;
}
@ -172,13 +163,9 @@ static void inkernel_disaster(struct proc *saved_proc,
proc_stacktrace_execute(proc_addr(SYSTEM), k_ebp, frame->eip);
}
printseg("ker cs: ", 1, NULL, frame->cs);
printseg("ker ds: ", 0, NULL, DS_SELECTOR);
if (saved_proc) {
printf("scheduled was: process %d (%s), ", saved_proc->p_endpoint, saved_proc->p_name);
printf("pc = %u:0x%x\n", (unsigned) saved_proc->p_reg.cs,
(unsigned) saved_proc->p_reg.pc);
printf("pc = 0x%x\n", (unsigned) saved_proc->p_reg.pc);
proc_stacktrace(saved_proc);
panic("Unhandled kernel exception");

kernel/arch/i386/head.S (new file, 98 lines)

@ -0,0 +1,98 @@
#include "kernel/kernel.h" /* configures the kernel */
/* sections */
#include <machine/vm.h>
#include "../../kernel.h"
#include <minix/config.h>
#include <minix/const.h>
#include <minix/com.h>
#include <machine/asm.h>
#include <machine/interrupt.h>
#include "archconst.h"
#include "kernel/const.h"
#include "kernel/proc.h"
#include "sconst.h"
#include <machine/multiboot.h>
#include "arch_proto.h" /* K_STACK_SIZE */
#ifdef CONFIG_SMP
#include "kernel/smp.h"
#endif
/* Selected 386 tss offsets. */
#define TSS3_S_SP0 4
IMPORT(copr_not_available_handler)
IMPORT(params_size)
IMPORT(params_offset)
IMPORT(mon_ds)
IMPORT(switch_to_user)
IMPORT(multiboot_init)
.text
/*===========================================================================*/
/* MINIX */
/*===========================================================================*/
.global MINIX
MINIX:
/* this is the entry point for the MINIX kernel */
jmp multiboot_init
/* Multiboot header here */
.balign 8
#define MULTIBOOT_FLAGS (MULTIBOOT_MEMORY_INFO | MULTIBOOT_PAGE_ALIGN)
multiboot_magic:
.long MULTIBOOT_HEADER_MAGIC
multiboot_flags:
.long MULTIBOOT_FLAGS
multiboot_checksum:
.long -(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_FLAGS)
.long 0
.long 0
.long 0
.long 0
.long 0
/* Video mode */
multiboot_mode_type:
.long MULTIBOOT_VIDEO_MODE_EGA
multiboot_width:
.long MULTIBOOT_CONSOLE_COLS
multiboot_height:
.long MULTIBOOT_CONSOLE_LINES
multiboot_depth:
.long 0
multiboot_init:
mov $load_stack_start, %esp /* make usable stack */
mov $0, %ebp
push $0 /* set flags to known good state */
popf /* esp, clear nested task and int enable */
push $0
push %ebx /* multiboot information struct */
push %eax /* multiboot magic number */
call _C_LABEL(pre_init)
/* Kernel is mapped high now and ready to go, with
* the boot info pointer returned in %eax. Set the
* highly mapped stack, initialize it, push the boot
* info pointer and jump to the highly mapped kernel.
*/
mov $k_initial_stktop, %esp
push $0 /* Terminate stack */
push %eax
call _C_LABEL(kmain)
/* not reached */
hang:
jmp hang
.data
load_stack:
.space 4096
load_stack_start:


@ -27,20 +27,11 @@
/*===========================================================================*
* intr_init *
*===========================================================================*/
int intr_init(const int mine, const int auto_eoi)
int intr_init(const int auto_eoi)
{
/* Initialize the 8259s, finishing with all interrupts disabled. This is
* only done in protected mode, in real mode we don't touch the 8259s, but
* use the BIOS locations instead. The flag "mine" is set if the 8259s are
* to be programmed for MINIX, or to be reset to what the BIOS expects.
*/
/* The AT and newer PS/2 have two interrupt controllers, one master,
* one slaved at IRQ 2. (We don't have to deal with the PC that
* has just one controller, because it must run in real mode.)
*/
/* Initialize the 8259s, finishing with all interrupts disabled. */
outb( INT_CTL, ICW1_AT);
outb( INT_CTLMASK, mine == INTS_MINIX ? IRQ0_VECTOR : BIOS_IRQ0_VEC);
outb( INT_CTLMASK, IRQ0_VECTOR);
/* ICW2 for master */
outb( INT_CTLMASK, (1 << CASCADE_IRQ));
/* ICW3 tells slaves */
@ -50,7 +41,7 @@ int intr_init(const int mine, const int auto_eoi)
outb( INT_CTLMASK, ICW4_AT_MASTER);
outb( INT_CTLMASK, ~(1 << CASCADE_IRQ)); /* IRQ 0-7 mask */
outb( INT2_CTL, ICW1_AT);
outb( INT2_CTLMASK, mine == INTS_MINIX ? IRQ8_VECTOR : BIOS_IRQ8_VEC);
outb( INT2_CTLMASK, IRQ8_VECTOR);
/* ICW2 for slave */
outb( INT2_CTLMASK, CASCADE_IRQ); /* ICW3 is slave nr */
if (auto_eoi)
@ -59,16 +50,6 @@ int intr_init(const int mine, const int auto_eoi)
outb( INT2_CTLMASK, ICW4_AT_SLAVE);
outb( INT2_CTLMASK, ~0); /* IRQ 8-15 mask */
/* Copy the BIOS vectors from the BIOS to the Minix location, so we
* can still make BIOS calls without reprogramming the i8259s.
*/
#if IRQ0_VECTOR != BIOS_IRQ0_VEC
phys_copy(BIOS_VECTOR(0) * 4L, VECTOR(0) * 4L, 8 * 4L);
#endif
#if IRQ8_VECTOR != BIOS_IRQ8_VEC
phys_copy(BIOS_VECTOR(8) * 4L, VECTOR(8) * 4L, 8 * 4L);
#endif
return OK;
}


@ -53,12 +53,6 @@ void ipc_entry(void);
void kernel_call_entry(void);
void level0_call(void);
/* memory.c */
void segmentation2paging(struct proc * current);
void i386_freepde(int pde);
void getcr3val(void);
/* exception.c */
struct exception_frame {
reg_t vector; /* which interrupt vector was triggered */
@ -83,6 +77,7 @@ unsigned long read_cr4(void);
void write_cr4(unsigned long value);
void write_cr3(unsigned long value);
unsigned long read_cpu_flags(void);
phys_bytes vir2phys(void *);
void phys_insb(u16_t port, phys_bytes buf, size_t count);
void phys_insw(u16_t port, phys_bytes buf, size_t count);
void phys_outsb(u16_t port, phys_bytes buf, size_t count);
@ -105,6 +100,17 @@ int __frstor_end(void *);
int __frstor_failure(void *);
unsigned short fnstsw(void);
void fnstcw(unsigned short* cw);
void x86_lgdt(void *);
void x86_lldt(u32_t);
void x86_ltr(u32_t);
void x86_lidt(void *);
void x86_load_kerncs(void);
void x86_load_ds(u32_t);
void x86_load_ss(u32_t);
void x86_load_es(u32_t);
void x86_load_fs(u32_t);
void x86_load_gs(u32_t);
void switch_k_stack(void * esp, void (* continuation)(void));
@ -147,19 +153,25 @@ struct tss_s {
u16_t trap;
u16_t iobase;
/* u8_t iomap[0]; */
};
} __attribute__((packed));
void prot_init(void);
void idt_init(void);
void init_dataseg(struct segdesc_s *segdp, phys_bytes base, vir_bytes
size, int privilege);
void enable_iop(struct proc *pp);
int prot_set_kern_seg_limit(vir_bytes limit);
void printseg(char *banner, int iscs, struct proc *pr, u32_t selector);
u32_t read_cs(void);
u32_t read_ds(void);
u32_t read_ss(void);
void add_memmap(kinfo_t *cbi, u64_t addr, u64_t len);
void vm_enable_paging(void);
void cut_memmap(kinfo_t *cbi, phys_bytes start, phys_bytes end);
phys_bytes pg_roundup(phys_bytes b);
void pg_info(reg_t *, u32_t **);
void pg_clear(void);
void pg_identity(void);
phys_bytes pg_load(void);
void pg_map(phys_bytes phys, vir_bytes vaddr, vir_bytes vaddr_end, kinfo_t *cbi);
int pg_mapkernel(void);
void pg_mapproc(struct proc *p, struct boot_image *ip, kinfo_t *cbi);
/* prototype of an interrupt vector table entry */
struct gate_table_s {
void(*gate) (void);
@ -167,13 +179,11 @@ struct gate_table_s {
unsigned char privilege;
};
extern struct gate_table_s gate_table_pic[];
/* copies an array of vectors to the IDT. The last vector must be zero filled */
void idt_copy_vectors(struct gate_table_s * first);
void idt_copy_vectors_pic(void);
void idt_reload(void);
EXTERN void * k_boot_stktop;
EXTERN void * k_stacks_start;
extern void * k_stacks;
@ -196,9 +206,9 @@ reg_t read_ebp(void);
/*
* sets up TSS for a cpu and assigns kernel stack and cpu id
*/
void tss_init(unsigned cpu, void * kernel_stack);
int tss_init(unsigned cpu, void * kernel_stack);
void int_gate(unsigned vec_nr, vir_bytes offset, unsigned dpl_type);
void int_gate_idt(unsigned vec_nr, vir_bytes offset, unsigned dpl_type);
void __copy_msg_from_user_end(void);
void __copy_msg_to_user_end(void);
@ -210,6 +220,7 @@ int platform_tbl_ptr(phys_bytes start, phys_bytes end, unsigned
cmp_f)(void *)));
/* breakpoints.c */
int breakpoint_set(phys_bytes linaddr, int bp, const int flags);
#define BREAKPOINT_COUNT 4
#define BREAKPOINT_FLAG_RW_MASK (3 << 0)
#define BREAKPOINT_FLAG_RW_EXEC (0 << 0)


@ -23,6 +23,6 @@ struct nmi_frame {
int i386_watchdog_start(void);
#define nmi_in_kernel(f) ((f)->cs == CS_SELECTOR)
#define nmi_in_kernel(f) ((f)->cs == KERN_CS_SELECTOR)
#endif /* __I386_WATCHDOG_H__ */


@ -8,44 +8,25 @@
/* Constants for protected mode. */
/* Table sizes. */
#define GDT_SIZE (FIRST_LDT_INDEX + NR_TASKS + NR_PROCS)
/* spec. and LDT's */
#define IDT_SIZE 256 /* the table is set to its maximal size */
/* Fixed global descriptors. 1 to 7 are prescribed by the BIOS. */
#define GDT_INDEX 1 /* GDT descriptor */
#define IDT_INDEX 2 /* IDT descriptor */
#define DS_INDEX 3 /* kernel DS */
#define ES_INDEX 4 /* kernel ES (386: flag 4 Gb at startup) */
#define SS_INDEX 5 /* kernel SS (386: monitor SS at startup) */
#define CS_INDEX 6 /* kernel CS */
#define MON_CS_INDEX 7 /* temp for BIOS (386: monitor CS at startup) */
#define TSS_INDEX_FIRST 8 /* first kernel TSS */
#define TSS_INDEX_BOOT TSS_INDEX_FIRST
#define TSS_INDEX(cpu) (TSS_INDEX_FIRST + (cpu)) /* per cpu kernel tss */
#define FIRST_LDT_INDEX TSS_INDEX(CONFIG_MAX_CPUS) /* rest of descriptors are LDT's */
/* GDT layout (SYSENTER/SYSEXIT compliant) */
#define KERN_CS_INDEX 1
#define KERN_DS_INDEX 2
#define USER_CS_INDEX 3
#define USER_DS_INDEX 4
#define LDT_INDEX 5
#define TSS_INDEX_FIRST 6
#define TSS_INDEX(cpu) (TSS_INDEX_FIRST + (cpu)) /* per cpu kernel tss */
#define GDT_SIZE (TSS_INDEX(CONFIG_MAX_CPUS)) /* LDT descriptor */
/* Descriptor structure offsets. */
#define DESC_BASE 2 /* to base_low */
#define DESC_BASE_MIDDLE 4 /* to base_middle */
#define DESC_ACCESS 5 /* to access byte */
#define DESC_SIZE 8 /* sizeof (struct segdesc_s) */
/*
* WARNING no () around the macros, be careful. This is because of ACK assembler
* and will be fixed after switching to GAS
*/
#define GDT_SELECTOR GDT_INDEX * DESC_SIZE
#define IDT_SELECTOR IDT_INDEX * DESC_SIZE
#define DS_SELECTOR DS_INDEX * DESC_SIZE
#define ES_SELECTOR ES_INDEX * DESC_SIZE
/* flat DS is less privileged ES */
#define FLAT_DS_SELECTOR ES_SELECTOR
#define SS_SELECTOR SS_INDEX * DESC_SIZE
#define CS_SELECTOR CS_INDEX * DESC_SIZE
#define MON_CS_SELECTOR MON_CS_INDEX * DESC_SIZE
#define TSS_SELECTOR(cpu) (TSS_INDEX(cpu) * DESC_SIZE)
#define TSS_SELECTOR_BOOT (TSS_INDEX_BOOT * DESC_SIZE)
#define SEG_SELECTOR(i) ((i)*8)
#define KERN_CS_SELECTOR SEG_SELECTOR(KERN_CS_INDEX)
#define KERN_DS_SELECTOR SEG_SELECTOR(KERN_DS_INDEX)
#define USER_CS_SELECTOR (SEG_SELECTOR(USER_CS_INDEX) | USER_PRIVILEGE)
#define USER_DS_SELECTOR (SEG_SELECTOR(USER_DS_INDEX) | USER_PRIVILEGE)
#define LDT_SELECTOR SEG_SELECTOR(LDT_INDEX)
#define TSS_SELECTOR(cpu) SEG_SELECTOR(TSS_INDEX(cpu))
/* Privileges. */
#define INTR_PRIVILEGE 0 /* kernel and interrupt handlers */
@ -140,9 +121,6 @@
#define IF_MASK 0x00000200
#define IOPL_MASK 0x003000
#define vir2phys(vir) ((phys_bytes)((kinfo.data_base + (vir_bytes) (vir))))
#define phys2vir(ph) ((vir_bytes)((vir_bytes) (ph) - kinfo.data_base))
#define INTEL_CPUID_GEN_EBX 0x756e6547 /* ASCII value of "Genu" */
#define INTEL_CPUID_GEN_EDX 0x49656e69 /* ASCII value of "ineI" */
#define INTEL_CPUID_GEN_ECX 0x6c65746e /* ASCII value of "ntel" */
@ -168,4 +146,6 @@
*/
#define X86_STACK_TOP_RESERVED (2 * sizeof(reg_t))
#define PG_ALLOCATEME ((phys_bytes)-1)
#endif /* _I386_ACONST_H */


@ -0,0 +1,11 @@
#ifndef MB_UTILS_H
#define MB_UTILS_H
#include "kernel/kernel.h"
void direct_cls(void);
void direct_print(const char*);
void direct_print_char(char);
int direct_read_char(unsigned char*);
#endif


@ -1,44 +1,30 @@
OUTPUT_ARCH("i386")
ENTRY(MINIX)
ENTRY(__k_unpaged_MINIX)
_kern_phys_base = 0x00400000; /* phys 4MB aligned for convenient remapping */
_kern_vir_base = 0xF0400000; /* map kernel high for max. user vir space */
_kern_offset = (_kern_vir_base - _kern_phys_base);
__k_unpaged__kern_offset = _kern_offset;
__k_unpaged__kern_vir_base = _kern_vir_base;
__k_unpaged__kern_phys_base = _kern_phys_base;
SECTIONS
{
. = 0x200000 + SIZEOF_HEADERS;
.text . : AT (ADDR(.text) - 0x0000) {
*(.text)
*(.text.*)
}
_etext = .;
etext = .;
. = ALIGN(4096);
. = _kern_phys_base;
__k_unpaged__kern_unpaged_start = .;
.data . : AT (ADDR(.data) - 0x0000) {
_rodata = .;
/* kernel data starts with this magic number */
SHORT(0x526f);
*(.rodata)
*(.rodata.*)
_erodata = .;
*(.data)
*(.data.*)
. = ALIGN(4096);
}
_edata = .;
.unpaged_text : { unpaged_*.o(.text) }
.unpaged_data ALIGN(4096) : { unpaged_*.o(.data .rodata*) }
.unpaged_bss ALIGN(4096) : { unpaged_*.o(.bss COMMON) }
__k_unpaged__kern_unpaged_end = .;
.bss . : AT (ADDR(.bss) - 0x0000) {
*(.bss)
*(.bss.*)
*(COMMON)
}
_end = .;
end = .;
. += _kern_offset;
/DISCARD/ :
{
*(.eh_frame)
*(.comment)
*(.comment.*)
*(.note)
*(.note.*)
*(.ident)
.text : AT(ADDR(.text) - _kern_offset) { *(.text*) }
.data ALIGN(4096) : AT(ADDR(.data) - _kern_offset) { *(.data .rodata* ) }
.bss ALIGN(4096) : AT(ADDR(.bss) - _kern_offset) { *(.bss* COMMON)
__k_unpaged__kern_size = . - _kern_vir_base;
_kern_size = __k_unpaged__kern_size;
}
}


@ -80,16 +80,12 @@ ENTRY(phys_insw)
mov %esp, %ebp
cld
push %edi
push %es
mov $FLAT_DS_SELECTOR, %ecx
mov %cx, %es
mov 8(%ebp), %edx /* port to read from */
mov 12(%ebp), %edi /* destination addr */
mov 16(%ebp), %ecx /* byte count */
shr $1, %ecx /* word count */
rep insw /* input many words */
pop %es
pop %edi
pop %ebp
ret
@ -108,15 +104,11 @@ ENTRY(phys_insb)
mov %esp, %ebp
cld
push %edi
push %es
mov $FLAT_DS_SELECTOR, %ecx
mov %cx, %es
mov 8(%ebp), %edx /* port to read from */
mov 12(%ebp), %edi /* destination addr */
mov 16(%ebp), %ecx /* byte count */
rep insb /* input many bytes */
pop %es
pop %edi
pop %ebp
ret
@ -135,16 +127,12 @@ ENTRY(phys_outsw)
mov %esp, %ebp
cld
push %esi
push %ds
mov $FLAT_DS_SELECTOR, %ecx
mov %cx, %ds
mov 8(%ebp), %edx /* port to write to */
mov 12(%ebp), %esi /* source addr */
mov 16(%ebp), %ecx /* byte count */
shr $1, %ecx /* word count */
rep outsw /* output many words */
pop %ds
pop %esi
pop %ebp
ret
@ -163,15 +151,11 @@ ENTRY(phys_outsb)
mov %esp, %ebp
cld
push %esi
push %ds
mov $FLAT_DS_SELECTOR, %ecx
mov %cx, %ds
mov 8(%ebp), %edx /* port to write to */
mov 12(%ebp), %esi /* source addr */
mov 16(%ebp), %ecx /* byte count */
rep outsb /* output many bytes */
pop %ds
pop %esi
pop %ebp
ret
@ -185,20 +169,18 @@ ENTRY(phys_outsb)
* phys_bytes bytecount);
* Copy a block of data from anywhere to anywhere in physical memory.
*/
PC_ARGS = 4+4+4+4 /* 4 + 4 + 4 */
/* es edi esi eip src dst len */
ENTRY(phys_copy)
push %ebp
mov %esp, %ebp
cld
push %esi
push %edi
push %es
mov $FLAT_DS_SELECTOR, %eax
mov %ax, %es
mov PC_ARGS(%esp), %esi
mov PC_ARGS+4(%esp), %edi
mov PC_ARGS+4+4(%esp), %eax
mov 8(%ebp), %esi
mov 12(%ebp), %edi
mov 16(%ebp), %eax
cmp $10, %eax /* avoid align overhead for small counts */
jb pc_small
@ -207,43 +189,40 @@ ENTRY(phys_copy)
and $3, %ecx /* count for alignment */
sub %ecx, %eax
rep movsb %es:(%esi), %es:(%edi)
rep movsb (%esi), (%edi)
mov %eax, %ecx
shr $2, %ecx /* count of dwords */
rep movsl %es:(%esi), %es:(%edi)
rep movsl (%esi), (%edi)
and $3, %eax
pc_small:
xchg %eax, %ecx /* remainder */
rep movsb %es:(%esi), %es:(%edi)
rep movsb (%esi), (%edi)
mov $0, %eax /* 0 means: no fault */
LABEL(phys_copy_fault) /* kernel can send us here */
pop %es
pop %edi
pop %esi
pop %ebp
ret
LABEL(phys_copy_fault_in_kernel) /* kernel can send us here */
pop %es
pop %edi
pop %esi
pop %ebp
mov %cr2, %eax
ret
/*===========================================================================*/
/* copy_msg_from_user */
/*===========================================================================*/
/*
* int copy_msg_from_user(struct proc * p, message * user_mbuf, message * dst);
* int copy_msg_from_user(message * user_mbuf, message * dst);
*
* Copies a message of 36 bytes from user process space to a kernel buffer. This
* function assumes that the process address space is installed (cr3 loaded) and
* the local descriptor table of this process is loaded too.
*
* The %gs segment register is used to access the userspace memory. We load the
* process' data segment in this register.
* function assumes that the process address space is installed (cr3 loaded).
*
* This function, from the caller's point of view, either succeeds or returns an
* error which gives the caller a chance to respond accordingly. In fact it
@@ -255,39 +234,31 @@ LABEL(phys_copy_fault_in_kernel) /* kernel can send us here */
* userspace as if wrong values or request were passed to the kernel
*/
ENTRY(copy_msg_from_user)
push %gs
mov 8(%esp), %eax
movw DSREG(%eax), %gs
/* load the source pointer */
mov 12(%esp), %ecx
mov 4(%esp), %ecx
/* load the destination pointer */
mov 16(%esp), %edx
mov 8(%esp), %edx
mov %gs:0*4(%ecx), %eax
mov %eax, 0*4(%edx)
mov %gs:1*4(%ecx), %eax
/* mov 0*4(%ecx), %eax
mov %eax, 0*4(%edx) */
mov 1*4(%ecx), %eax
mov %eax, 1*4(%edx)
mov %gs:2*4(%ecx), %eax
mov 2*4(%ecx), %eax
mov %eax, 2*4(%edx)
mov %gs:3*4(%ecx), %eax
mov 3*4(%ecx), %eax
mov %eax, 3*4(%edx)
mov %gs:4*4(%ecx), %eax
mov 4*4(%ecx), %eax
mov %eax, 4*4(%edx)
mov %gs:5*4(%ecx), %eax
mov 5*4(%ecx), %eax
mov %eax, 5*4(%edx)
mov %gs:6*4(%ecx), %eax
mov 6*4(%ecx), %eax
mov %eax, 6*4(%edx)
mov %gs:7*4(%ecx), %eax
mov 7*4(%ecx), %eax
mov %eax, 7*4(%edx)
mov %gs:8*4(%ecx), %eax
mov 8*4(%ecx), %eax
mov %eax, 8*4(%edx)
LABEL(__copy_msg_from_user_end)
pop %gs
movl $0, %eax
ret
@@ -295,48 +266,38 @@ LABEL(__copy_msg_from_user_end)
/* copy_msg_to_user */
/*===========================================================================*/
/*
* void copy_msg_to_user(struct proc * p, message * src, message * user_mbuf);
* void copy_msg_to_user(message * src, message * user_mbuf);
*
* Copies a message of 36 bytes to user process space from a kernel buffer. This
* function assumes that the process address space is installed (cr3 loaded) and
* the local descriptor table of this process is loaded too.
* Copies a message of 36 bytes to user process space from a kernel buffer.
*
* All the other copy_msg_from_user() comments apply here as well!
*/
ENTRY(copy_msg_to_user)
push %gs
mov 8(%esp), %eax
movw DSREG(%eax), %gs
/* load the source pointer */
mov 12(%esp), %ecx
mov 4(%esp), %ecx
/* load the destination pointer */
mov 16(%esp), %edx
mov 8(%esp), %edx
mov 0*4(%ecx), %eax
mov %eax, %gs:0*4(%edx)
mov %eax, 0*4(%edx)
mov 1*4(%ecx), %eax
mov %eax, %gs:1*4(%edx)
mov %eax, 1*4(%edx)
mov 2*4(%ecx), %eax
mov %eax, %gs:2*4(%edx)
mov %eax, 2*4(%edx)
mov 3*4(%ecx), %eax
mov %eax, %gs:3*4(%edx)
mov %eax, 3*4(%edx)
mov 4*4(%ecx), %eax
mov %eax, %gs:4*4(%edx)
mov %eax, 4*4(%edx)
mov 5*4(%ecx), %eax
mov %eax, %gs:5*4(%edx)
mov %eax, 5*4(%edx)
mov 6*4(%ecx), %eax
mov %eax, %gs:6*4(%edx)
mov %eax, 6*4(%edx)
mov 7*4(%ecx), %eax
mov %eax, %gs:7*4(%edx)
mov %eax, 7*4(%edx)
mov 8*4(%ecx), %eax
mov %eax, %gs:8*4(%edx)
mov %eax, 8*4(%edx)
LABEL(__copy_msg_to_user_end)
pop %gs
movl $0, %eax
ret
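Both message copies are the same fixed-size move: 36 bytes transferred as nine 32-bit words, unrolled so each access is a single faultable instruction. A hedged C sketch (illustrative names, not the kernel's):

```c
#include <stdint.h>

#define MSG_WORDS 9	/* 36-byte message, moved as 32-bit words */

/* Sketch only: in the kernel each word access may page-fault, and the
 * fault handler resumes at __user_copy_msg_pointer_failure, which makes
 * the call return -1 instead. This sketch always succeeds. */
static int copy_msg_words(uint32_t *dst, const uint32_t *src)
{
	for (int i = 0; i < MSG_WORDS; i++)
		dst[i] = src[i];
	return 0;	/* 0 == success, as in the assembly's movl $0, %eax */
}
```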
@@ -348,8 +309,6 @@ LABEL(__copy_msg_to_user_end)
* here to continue, clean up and report the error
*/
ENTRY(__user_copy_msg_pointer_failure)
pop %gs
movl $-1, %eax
ret
@@ -366,12 +325,9 @@ ENTRY(phys_memset)
mov %esp, %ebp
push %esi
push %ebx
push %ds
mov 8(%ebp), %esi
mov 16(%ebp), %eax
mov $FLAT_DS_SELECTOR, %ebx
mov %bx, %ds
mov 12(%ebp), %ebx
shr $2, %eax
fill_start:
@@ -395,37 +351,18 @@ remain_fill:
fill_done:
LABEL(memset_fault) /* kernel can send us here */
mov $0, %eax /* 0 means: no fault */
pop %ds
pop %ebx
pop %esi
pop %ebp
ret
LABEL(memset_fault_in_kernel) /* kernel can send us here */
pop %ds
pop %ebx
pop %esi
pop %ebp
mov %cr2, %eax
ret
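phys_memset fills memory a 32-bit word at a time from a pattern its callers build by replicating the fill byte (c | c<<8 | c<<16 | c<<24, as vm_memset does below). A minimal C sketch of that fill, with illustrative names; the kernel's remain_fill loop handles trailing bytes, while this sketch fills whole dwords only:

```c
#include <stdint.h>
#include <stddef.h>

/* Replicate a byte into a 32-bit pattern and store it dword by dword. */
static void fill32(uint32_t *dst, uint8_t c, size_t ndwords)
{
	uint32_t p = c | ((uint32_t)c << 8) | ((uint32_t)c << 16) |
		((uint32_t)c << 24);
	while (ndwords--)
		*dst++ = p;
}
```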
/*===========================================================================*/
/* mem_rdw */
/*===========================================================================*/
/*
* PUBLIC u16_t mem_rdw(U16_t segment, u16_t *offset);
* Load and return word at far pointer segment:offset.
*/
ENTRY(mem_rdw)
mov %ds, %cx
mov 4(%esp), %ds
mov 4+4(%esp), %eax /* offset */
movzwl (%eax), %eax /* word to return */
mov %cx, %ds
ret
/*===========================================================================*/
/* x86_triplefault */
/*===========================================================================*/
@@ -565,12 +502,13 @@ ARG_EAX_ACTION(fnstcw, fnstcw (%eax));
/* invlpg */
ARG_EAX_ACTION(i386_invlpg, invlpg (%eax));
/*===========================================================================*/
/* getcr3val */
/*===========================================================================*/
/* PUBLIC unsigned long getcr3val(void); */
ENTRY(getcr3val)
mov %cr3, %eax
ENTRY(x86_load_kerncs)
push %ebp
mov %esp, %ebp
mov 8(%ebp), %eax
jmp $KERN_CS_SELECTOR, $newcs
newcs:
pop %ebp
ret
/*
@@ -609,27 +547,6 @@ ENTRY(ia32_msr_write)
pop %ebp
ret
/*===========================================================================*/
/* idt_reload */
/*===========================================================================*/
/* PUBLIC void idt_reload (void); */
/* reload idt when returning to monitor. */
ENTRY(idt_reload)
lidt _C_LABEL(gdt)+IDT_SELECTOR /* reload interrupt descriptor table */
ret
/*
* void reload_segment_regs(void)
*/
#define RELOAD_SEG_REG(reg) \
mov reg, %ax ;\
mov %ax, reg ;
ENTRY(reload_ds)
RELOAD_SEG_REG(%ds)
ret
/*===========================================================================*/
/* __switch_address_space */
/*===========================================================================*/
@@ -642,8 +559,6 @@ ENTRY(reload_ds)
ENTRY(__switch_address_space)
/* read the process pointer */
mov 4(%esp), %edx
/* enable process' segment descriptors */
lldt P_LDT_SEL(%edx)
/* get the new cr3 value */
movl P_CR3(%edx), %eax
/* test if the new cr3 != NULL */
@@ -664,52 +579,6 @@ ENTRY(__switch_address_space)
0:
ret
/*===========================================================================*/
/* poweroff */
/*===========================================================================*/
/* PUBLIC void poweroff(); */
/* Jump to 16-bit poweroff code */
ENTRY(poweroff_jmp)
cli
/* Make real mode descriptor */
mov $(_C_LABEL(gdt) + SS_SELECTOR), %edi
mov $0x100, %eax
movw %ax, 2(%edi)
shr $16, %eax
movb %al, 4(%edi)
and $0xff00, %ax
andw $0xff, 6(%edi)
or %ax, 6(%edi)
mov $0xffff, %eax
movw %ax, (%edi)
shr $16, %eax
and $0xf, %ax
andb $0xf0, 6(%edi)
or %ax, 6(%edi)
/* Flush TLB */
xor %eax, %eax
mov %eax, %cr3
xor %esp, %esp /* clear esp for real mode*/
/* Reset IDTR */
lidt idt_ptr
mov $SS_SELECTOR, %ax
mov %ax, %ds
mov %ax, %es
mov %ax, %fs
mov %ax, %gs
mov %ax, %ss
/* Save real mode cr0 in eax */
mov %cr0, %eax
andl $~I386_CR0_PE, %eax
/* Jump to 16-bit code that is copied to below 1MB */
ljmp $MON_CS_SELECTOR, $0
/* acknowledge just the master PIC */
ENTRY(eoi_8259_master)
movb $END_OF_INT, %al
@@ -909,3 +778,6 @@ ENTRY(switch_k_stack)
idt_ptr:
.short 0x3ff
.long 0x0
ldtsel:
.long LDT_SELECTOR


@@ -1,12 +0,0 @@
#ifndef MB_UTILS_H
#define MB_UTILS_H
#include "kernel/kernel.h"
void mb_cls(void);
void mb_print(char*);
void mb_print_char(char);
int mb_read_char(unsigned char*);
#endif


@@ -27,24 +27,27 @@
#endif
#endif
int i386_paging_enabled = 0;
static int psok = 0;
#define MAX_FREEPDES 2
static int nfreepdes = 0, freepdes[MAX_FREEPDES];
phys_bytes video_mem_vaddr = 0;
#define HASPT(procptr) ((procptr)->p_seg.p_cr3 != 0)
static int nfreepdes = 0;
#define MAXFREEPDES 2
static int freepdes[MAXFREEPDES];
static u32_t phys_get32(phys_bytes v);
static void vm_enable_paging(void);
void segmentation2paging(struct proc * current)
void mem_clear_mapcache(void)
{
/* switch to the current process page tables before turning paging on */
switch_address_space(current);
vm_enable_paging();
int i;
for(i = 0; i < nfreepdes; i++) {
struct proc *ptproc = get_cpulocal_var(ptproc);
int pde = freepdes[i];
u32_t *ptv;
assert(ptproc);
ptv = ptproc->p_seg.p_cr3_v;
assert(ptv);
ptv[pde] = 0;
}
}
/* This function sets up a mapping from within the kernel's address
@@ -65,7 +68,7 @@ void segmentation2paging(struct proc * current)
*
* The logical number supplied by the caller is translated into an actual
* pde number to be used, and a pointer to it (linear address) is returned
* for actual use by phys_copy or phys_memset.
* for actual use by phys_copy or memset.
*/
static phys_bytes createpde(
const struct proc *pr, /* Requested process, NULL for physical. */
@@ -83,10 +86,10 @@ static phys_bytes createpde(
pde = freepdes[free_pde_idx];
assert(pde >= 0 && pde < 1024);
if(pr && ((pr == get_cpulocal_var(ptproc)) || !HASPT(pr))) {
if(pr && ((pr == get_cpulocal_var(ptproc)) || iskernelp(pr))) {
/* Process memory is requested, and
* it's a process that is already in current page table, or
* a process that is in every page table.
* the kernel, which is always there.
* Therefore linaddr is valid directly, with the requested
* size.
*/
@@ -138,9 +141,6 @@ static int lin_lin_copy(struct proc *srcproc, vir_bytes srclinaddr,
u32_t addr;
proc_nr_t procslot;
assert(vm_running);
assert(nfreepdes >= MAX_FREEPDES);
assert(get_cpulocal_var(ptproc));
assert(get_cpulocal_var(proc_ptr));
assert(read_cr3() == get_cpulocal_var(ptproc)->p_seg.p_cr3);
@@ -219,13 +219,8 @@ static u32_t phys_get32(phys_bytes addr)
const u32_t v;
int r;
if(!vm_running) {
phys_copy(addr, vir2phys(&v), sizeof(v));
return v;
}
if((r=lin_lin_copy(NULL, addr,
proc_addr(SYSTEM), vir2phys(&v), sizeof(v))) != OK) {
proc_addr(SYSTEM), (phys_bytes) &v, sizeof(v))) != OK) {
panic("lin_lin_copy for phys_get32 failed: %d", r);
}
@@ -266,87 +261,6 @@ static char *cr4_str(u32_t e)
}
#endif
void vm_stop(void)
{
write_cr0(read_cr0() & ~I386_CR0_PG);
}
static void vm_enable_paging(void)
{
u32_t cr0, cr4;
int pgeok;
psok = _cpufeature(_CPUF_I386_PSE);
pgeok = _cpufeature(_CPUF_I386_PGE);
cr0= read_cr0();
cr4= read_cr4();
/* First clear PG and PGE flag, as PGE must be enabled after PG. */
write_cr0(cr0 & ~I386_CR0_PG);
write_cr4(cr4 & ~(I386_CR4_PGE | I386_CR4_PSE));
cr0= read_cr0();
cr4= read_cr4();
/* Our first page table contains 4MB entries. */
if(psok)
cr4 |= I386_CR4_PSE;
write_cr4(cr4);
/* First enable paging, then enable global page flag. */
cr0 |= I386_CR0_PG;
write_cr0(cr0 );
cr0 |= I386_CR0_WP;
write_cr0(cr0);
/* May we enable these features? */
if(pgeok)
cr4 |= I386_CR4_PGE;
write_cr4(cr4);
}
/*===========================================================================*
* umap_local *
*===========================================================================*/
phys_bytes umap_local(rp, seg, vir_addr, bytes)
register struct proc *rp; /* pointer to proc table entry for process */
int seg; /* T, D, or S segment */
vir_bytes vir_addr; /* virtual address in bytes within the seg */
vir_bytes bytes; /* # of bytes to be copied */
{
/* Calculate the physical memory address for a given virtual address. */
vir_clicks vc; /* the virtual address in clicks */
phys_bytes pa; /* intermediate variables as phys_bytes */
phys_bytes seg_base;
if(seg != T && seg != D && seg != S)
panic("umap_local: wrong seg: %d", seg);
if (bytes <= 0) return( (phys_bytes) 0);
if (vir_addr + bytes <= vir_addr) return 0; /* overflow */
vc = (vir_addr + bytes - 1) >> CLICK_SHIFT; /* last click of data */
if (seg != T)
seg = (vc < rp->p_memmap[D].mem_vir + rp->p_memmap[D].mem_len ? D : S);
else if (rp->p_memmap[T].mem_len == 0) /* common I&D? */
seg = D; /* ptrace needs this */
if ((vir_addr>>CLICK_SHIFT) >= rp->p_memmap[seg].mem_vir +
rp->p_memmap[seg].mem_len) return( (phys_bytes) 0 );
if (vc >= rp->p_memmap[seg].mem_vir +
rp->p_memmap[seg].mem_len) return( (phys_bytes) 0 );
seg_base = (phys_bytes) rp->p_memmap[seg].mem_phys;
seg_base = seg_base << CLICK_SHIFT; /* segment origin in bytes */
pa = (phys_bytes) vir_addr;
pa -= rp->p_memmap[seg].mem_vir << CLICK_SHIFT;
return(seg_base + pa);
}
/*===========================================================================*
* umap_virtual *
*===========================================================================*/
@@ -356,22 +270,15 @@ int seg; /* T, D, or S segment */
vir_bytes vir_addr; /* virtual address in bytes within the seg */
vir_bytes bytes; /* # of bytes to be copied */
{
vir_bytes linear;
phys_bytes phys = 0;
if(!(linear = umap_local(rp, seg, vir_addr, bytes))) {
printf("SYSTEM:umap_virtual: umap_local failed\n");
phys = 0;
} else {
if(vm_lookup(rp, linear, &phys, NULL) != OK) {
printf("SYSTEM:umap_virtual: vm_lookup of %s: seg 0x%x: 0x%lx failed\n", rp->p_name, seg, vir_addr);
phys = 0;
} else {
if(phys == 0)
panic("vm_lookup returned phys: %d", phys);
}
}
if(vm_lookup(rp, vir_addr, &phys, NULL) != OK) {
printf("SYSTEM:umap_virtual: vm_lookup of %s: seg 0x%x: 0x%lx failed\n", rp->p_name, seg, vir_addr);
phys = 0;
} else {
if(phys == 0)
panic("vm_lookup returned phys: %d", phys);
}
if(phys == 0) {
printf("SYSTEM:umap_virtual: lookup failed\n");
@@ -381,9 +288,9 @@ vir_bytes bytes; /* # of bytes to be copied */
/* Now make sure addresses are contiguous in physical memory
* so that the umap makes sense.
*/
if(bytes > 0 && vm_lookup_range(rp, linear, NULL, bytes) != bytes) {
if(bytes > 0 && vm_lookup_range(rp, vir_addr, NULL, bytes) != bytes) {
printf("umap_virtual: %s: %lu at 0x%lx (vir 0x%lx) not contiguous\n",
rp->p_name, bytes, linear, vir_addr);
rp->p_name, bytes, vir_addr, vir_addr);
return 0;
}
@@ -409,11 +316,7 @@ int vm_lookup(const struct proc *proc, const vir_bytes virtual,
assert(proc);
assert(physical);
assert(!isemptyp(proc));
if(!HASPT(proc)) {
*physical = virtual;
return OK;
}
assert(HASPT(proc));
/* Retrieve page directory entry. */
root = (u32_t *) proc->p_seg.p_cr3;
@@ -472,9 +375,7 @@ size_t vm_lookup_range(const struct proc *proc, vir_bytes vir_addr,
assert(proc);
assert(bytes > 0);
if (!HASPT(proc))
return bytes;
assert(HASPT(proc));
/* Look up the first page. */
if (vm_lookup(proc, vir_addr, &phys, NULL) != OK)
@@ -548,9 +449,6 @@ int vm_check_range(struct proc *caller, struct proc *target,
*/
int r;
if (!vm_running)
return EFAULT;
if ((caller->p_misc_flags & MF_KCALL_RESUME) &&
(r = caller->p_vmrequest.vmresult) != OK)
return r;
@@ -570,7 +468,7 @@ void delivermsg(struct proc *rp)
assert(rp->p_misc_flags & MF_DELIVERMSG);
assert(rp->p_delivermsg.m_source != NONE);
if (copy_msg_to_user(rp, &rp->p_delivermsg,
if (copy_msg_to_user(&rp->p_delivermsg,
(message *) rp->p_delivermsg_vir)) {
printf("WARNING wrong user pointer 0x%08lx from "
"process %s / %d\n",
@@ -671,24 +569,12 @@ int vm_memset(endpoint_t who, phys_bytes ph, const u8_t c, phys_bytes bytes)
/* NONE for physical, otherwise virtual */
if(who != NONE) {
int n;
vir_bytes lin;
assert(vm_running);
if(!isokendpt(who, &n)) return ESRCH;
whoptr = proc_addr(n);
if(!(lin = umap_local(whoptr, D, ph, bytes))) return EFAULT;
ph = lin;
}
p = c | (c << 8) | (c << 16) | (c << 24);
if(!vm_running) {
if(who != NONE) panic("can't vm_memset without vm running");
phys_memset(ph, p, bytes);
return OK;
}
assert(nfreepdes >= MAX_FREEPDES);
assert(get_cpulocal_var(ptproc)->p_seg.p_cr3_v);
assert(!catch_pagefaults);
@@ -736,9 +622,7 @@ int vmcheck; /* if nonzero, can return VMSUSPEND */
{
/* Copy bytes from virtual address src_addr to virtual address dst_addr. */
struct vir_addr *vir_addr[2]; /* virtual source and destination address */
phys_bytes phys_addr[2]; /* absolute source and destination */
int seg_index;
int i;
int i, r;
struct proc *procs[2];
assert((vmcheck && caller) || (!vmcheck && !caller));
@@ -751,111 +635,57 @@ int vmcheck; /* if nonzero, can return VMSUSPEND */
vir_addr[_DST_] = dst_addr;
for (i=_SRC_; i<=_DST_; i++) {
int proc_nr, type;
endpoint_t proc_e = vir_addr[i]->proc_nr_e;
int proc_nr;
struct proc *p;
type = vir_addr[i]->segment & SEGMENT_TYPE;
if((type != PHYS_SEG) && isokendpt(vir_addr[i]->proc_nr_e, &proc_nr))
p = proc_addr(proc_nr);
else
if(proc_e == NONE) {
p = NULL;
} else {
if(!isokendpt(proc_e, &proc_nr)) {
printf("virtual_copy: no reasonable endpoint\n");
return ESRCH;
}
p = proc_addr(proc_nr);
}
procs[i] = p;
/* Get physical address. */
switch(type) {
case LOCAL_SEG:
case LOCAL_VM_SEG:
if(!p) {
return EDEADSRCDST;
}
seg_index = vir_addr[i]->segment & SEGMENT_INDEX;
if(type == LOCAL_SEG)
phys_addr[i] = umap_local(p, seg_index, vir_addr[i]->offset,
bytes);
else
phys_addr[i] = umap_virtual(p, seg_index,
vir_addr[i]->offset, bytes);
if(phys_addr[i] == 0) {
printf("virtual_copy: map 0x%x failed for %s seg %d, "
"offset %lx, len %lu, i %d\n",
type, p->p_name, seg_index, vir_addr[i]->offset,
bytes, i);
}
break;
case PHYS_SEG:
phys_addr[i] = vir_addr[i]->offset;
break;
default:
printf("virtual_copy: strange type 0x%x\n", type);
return EINVAL;
}
/* Check if mapping succeeded. */
if (phys_addr[i] <= 0 && vir_addr[i]->segment != PHYS_SEG) {
printf("virtual_copy EFAULT\n");
return EFAULT;
}
}
if(vm_running) {
int r;
if(caller && (caller->p_misc_flags & MF_KCALL_RESUME)) {
assert(caller->p_vmrequest.vmresult != VMSUSPEND);
if(caller->p_vmrequest.vmresult != OK) {
return caller->p_vmrequest.vmresult;
}
}
if((r=lin_lin_copy(procs[_SRC_], phys_addr[_SRC_],
procs[_DST_], phys_addr[_DST_], bytes)) != OK) {
struct proc *target = NULL;
phys_bytes lin;
if(r != EFAULT_SRC && r != EFAULT_DST)
panic("lin_lin_copy failed: %d", r);
if(!vmcheck || !caller) {
return r;
}
if(r == EFAULT_SRC) {
lin = phys_addr[_SRC_];
target = procs[_SRC_];
} else if(r == EFAULT_DST) {
lin = phys_addr[_DST_];
target = procs[_DST_];
} else {
panic("r strange: %d", r);
}
assert(caller);
assert(target);
vm_suspend(caller, target, lin, bytes, VMSTYPE_KERNELCALL);
return VMSUSPEND;
}
return OK;
if(caller && (caller->p_misc_flags & MF_KCALL_RESUME)) {
assert(caller->p_vmrequest.vmresult != VMSUSPEND);
if(caller->p_vmrequest.vmresult != OK) {
return caller->p_vmrequest.vmresult;
}
}
assert(!vm_running);
if((r=lin_lin_copy(procs[_SRC_], vir_addr[_SRC_]->offset,
procs[_DST_], vir_addr[_DST_]->offset, bytes)) != OK) {
struct proc *target = NULL;
phys_bytes lin;
if(r != EFAULT_SRC && r != EFAULT_DST)
panic("lin_lin_copy failed: %d", r);
if(!vmcheck || !caller) {
return r;
}
/* can't copy to/from process with PT without VM */
#define NOPT(p) (!(p) || !HASPT(p))
if(!NOPT(procs[_SRC_])) {
printf("ignoring page table src: %s / %d at 0x%x\n",
procs[_SRC_]->p_name, procs[_SRC_]->p_endpoint, procs[_SRC_]->p_seg.p_cr3);
}
if(!NOPT(procs[_DST_])) {
printf("ignoring page table dst: %s / %d at 0x%x\n",
procs[_DST_]->p_name, procs[_DST_]->p_endpoint,
procs[_DST_]->p_seg.p_cr3);
if(r == EFAULT_SRC) {
lin = vir_addr[_SRC_]->offset;
target = procs[_SRC_];
} else if(r == EFAULT_DST) {
lin = vir_addr[_DST_]->offset;
target = procs[_DST_];
} else {
panic("r strange: %d", r);
}
assert(caller);
assert(target);
vm_suspend(caller, target, lin, bytes, VMSTYPE_KERNELCALL);
return VMSUSPEND;
}
/* Now copy bytes between physical addresses. */
if(phys_copy(phys_addr[_SRC_], phys_addr[_DST_], (phys_bytes) bytes))
return EFAULT;
return OK;
}
@@ -868,11 +698,12 @@ int data_copy(const endpoint_t from_proc, const vir_bytes from_addr,
{
struct vir_addr src, dst;
src.segment = dst.segment = D;
src.offset = from_addr;
dst.offset = to_addr;
src.proc_nr_e = from_proc;
dst.proc_nr_e = to_proc;
assert(src.proc_nr_e != NONE);
assert(dst.proc_nr_e != NONE);
return virtual_copy(&src, &dst, bytes);
}
@@ -887,37 +718,48 @@ int data_copy_vmcheck(struct proc * caller,
{
struct vir_addr src, dst;
src.segment = dst.segment = D;
src.offset = from_addr;
dst.offset = to_addr;
src.proc_nr_e = from_proc;
dst.proc_nr_e = to_proc;
assert(src.proc_nr_e != NONE);
assert(dst.proc_nr_e != NONE);
return virtual_copy_vmcheck(caller, &src, &dst, bytes);
}
/*===========================================================================*
* arch_pre_exec *
*===========================================================================*/
void arch_pre_exec(struct proc *pr, const u32_t ip, const u32_t sp)
void memory_init(void)
{
/* set program counter and stack pointer. */
pr->p_reg.pc = ip;
pr->p_reg.sp = sp;
assert(nfreepdes == 0);
freepdes[nfreepdes++] = kinfo.freepde_start++;
freepdes[nfreepdes++] = kinfo.freepde_start++;
assert(kinfo.freepde_start < I386_VM_DIR_ENTRIES);
assert(nfreepdes == 2);
assert(nfreepdes <= MAXFREEPDES);
}
/* VM reports page directory slot we're allowed to use freely. */
void i386_freepde(const int pde)
/*===========================================================================*
* arch_proc_init *
*===========================================================================*/
void arch_proc_init(struct proc *pr, const u32_t ip, const u32_t sp, char *name)
{
if(nfreepdes >= MAX_FREEPDES)
return;
freepdes[nfreepdes++] = pde;
arch_proc_reset(pr);
strcpy(pr->p_name, name);
/* set custom state we know */
pr->p_reg.pc = ip;
pr->p_reg.sp = sp;
}
static int oxpcie_mapping_index = -1,
lapic_mapping_index = -1,
ioapic_first_index = -1,
ioapic_last_index = -1;
ioapic_last_index = -1,
video_mem_mapping_index = -1;
extern char *video_mem;
int arch_phys_map(const int index,
phys_bytes *addr,
@@ -929,6 +771,8 @@ int arch_phys_map(const int index,
static char *ser_var = NULL;
if(first) {
video_mem_mapping_index = freeidx++;
#ifdef USE_APIC
if(lapic_addr)
lapic_mapping_index = freeidx++;
@@ -950,20 +794,28 @@
}
}
#endif
first = 0;
}
#ifdef USE_APIC
/* map the local APIC if enabled */
if (index == lapic_mapping_index) {
if (index == video_mem_mapping_index) {
/* map video memory in so we can print panic messages */
*addr = MULTIBOOT_VIDEO_BUFFER;
*len = I386_PAGE_SIZE;
*flags = 0;
return OK;
}
else if (index == lapic_mapping_index) {
/* map the local APIC if enabled */
if (!lapic_addr)
return EINVAL;
*addr = vir2phys(lapic_addr);
*addr = lapic_addr;
*len = 4 << 10 /* 4kB */;
*flags = VMMF_UNCACHED;
return OK;
}
else if (ioapic_enabled && index <= nioapics) {
else if (ioapic_enabled && index <= ioapic_last_index) {
*addr = io_apic[index - 1].paddr;
*len = 4 << 10 /* 4kB */;
*flags = VMMF_UNCACHED;
@@ -993,7 +845,8 @@ int arch_phys_map_reply(const int index, const vir_bytes addr)
}
else if (ioapic_enabled && index >= ioapic_first_index &&
index <= ioapic_last_index) {
io_apic[index - ioapic_first_index].vaddr = addr;
int i = index - ioapic_first_index;
io_apic[i].vaddr = addr;
return OK;
}
#endif
@@ -1004,56 +857,22 @@ int arch_phys_map_reply(const int index, const vir_bytes addr)
return OK;
}
#endif
if (index == video_mem_mapping_index) {
video_mem_vaddr = addr;
return OK;
}
return EINVAL;
}
int arch_enable_paging(struct proc * caller, const message * m_ptr)
int arch_enable_paging(struct proc * caller)
{
struct vm_ep_data ep_data;
int r;
assert(caller->p_seg.p_cr3);
/* switch_address_space() checks what is in cr3, and does nothing if it's
* the same as the cr3 of its argument, newptproc. If MINIX was
* previously booted, this could very well be the case.
*
* The first time switch_address_space() is called, we want to
* force it to do something (load cr3 and set newptproc), so we
* zero cr3, and force paging off to make that a safe thing to do.
*
* After that, segmentation2paging() enables paging with the page table
* of caller loaded.
*/
/* load caller's page table */
switch_address_space(caller);
vm_stop();
write_cr3(0);
/* switch from segmentation only to paging */
segmentation2paging(caller);
vm_running = 1;
/*
* copy the extra data associated with the call from userspace
*/
if((r=data_copy(caller->p_endpoint, (vir_bytes)m_ptr->SVMCTL_VALUE,
KERNEL, (vir_bytes) &ep_data, sizeof(ep_data))) != OK) {
printf("vmctl_enable_paging: data_copy failed! (%d)\n", r);
return r;
}
/*
* when turning paging on i386 we also change the segment limits to make
* the special mappings requested by the kernel reachable
*/
if ((r = prot_set_kern_seg_limit(ep_data.data_seg_limit)) != OK)
return r;
/*
* install the new map provided by the call
*/
if (newmap(caller, caller, ep_data.mem_map) != OK)
panic("arch_enable_paging: newmap failed");
video_mem = (char *) video_mem_vaddr;
#ifdef USE_APIC
/* start using the virtual addresses */
@@ -1074,8 +893,6 @@ int arch_enable_paging(struct proc * caller, const message * m_ptr)
#if CONFIG_SMP
barrier();
i386_paging_enabled = 1;
wait_for_APs_to_finish_booting();
#endif
#endif
@@ -1120,7 +937,7 @@ int platform_tbl_ptr(phys_bytes start,
phys_bytes addr;
for (addr = start; addr < end; addr += increment) {
phys_copy (addr, vir2phys(buff), size);
phys_copy (addr, (phys_bytes) buff, size);
if (cmp_f(buff)) {
if (phys_addr)
*phys_addr = addr;


@@ -50,116 +50,10 @@
IMPORT(copr_not_available_handler)
IMPORT(params_size)
IMPORT(params_offset)
IMPORT(mon_ds)
IMPORT(switch_to_user)
IMPORT(multiboot_init)
.text
/*===========================================================================*/
/* MINIX */
/*===========================================================================*/
.global MINIX
MINIX:
/* this is the entry point for the MINIX kernel */
jmp _C_LABEL(multiboot_init)
/* Multiboot header here*/
.balign 8
multiboot_magic:
.long MULTIBOOT_HEADER_MAGIC
multiboot_flags:
.long MULTIBOOT_FLAGS
multiboot_checksum:
.long -(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_FLAGS)
.long 0
.long 0
.long 0
.long 0
.long 0
/* Video mode */
multiboot_mode_type:
.long MULTIBOOT_VIDEO_MODE_EGA
multiboot_width:
.long MULTIBOOT_CONSOLE_COLS
multiboot_height:
.long MULTIBOOT_CONSOLE_LINES
multiboot_depth:
.long 0
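The checksum field in the header above encodes the Multiboot rule that magic + flags + checksum must equal zero modulo 2^32, which is why it is stored as -(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_FLAGS). A small C check of that invariant (the magic value is the Multiboot 1 spec's; the flags value in the test is just an example):

```c
#include <stdint.h>

#define MB_HEADER_MAGIC 0x1BADB002u	/* Multiboot 1 header magic */

/* A compliant header satisfies magic + flags + checksum == 0 (mod 2^32). */
static int mb_header_ok(uint32_t magic, uint32_t flags, uint32_t checksum)
{
	return (uint32_t)(magic + flags + checksum) == 0u;
}
```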
.globl kernel_init
kernel_init: /* after pre-init*/
push %ebp
mov %esp, %ebp
push %esi
push %edi
/* Copy the monitor global descriptor table to the address space of the kernel */
/* switch over to it. Prot_init() can then update it with immediate effect. */
sgdt _C_LABEL(gdt)+GDT_SELECTOR /* get the monitor gdtr */
movl _C_LABEL(gdt)+GDT_SELECTOR+2, %esi /* absolute address of GDT */
mov $_C_LABEL(gdt), %ebx /* address of kernel GDT */
mov $8*8, %ecx /* copying eight descriptors */
copygdt:
movb %es:(%esi), %al
movb %al, (%ebx)
inc %esi
inc %ebx
loop copygdt
movl _C_LABEL(gdt)+DS_SELECTOR+2, %eax /* base of kernel data */
and $0x00FFFFFF, %eax /* only 24 bits */
add $_C_LABEL(gdt), %eax /* eax = vir2phys(gdt) */
movl %eax, _C_LABEL(gdt)+GDT_SELECTOR+2 /* set base of GDT */
lgdt _C_LABEL(gdt)+GDT_SELECTOR /* switch over to kernel GDT */
/* Locate boot parameters, set up kernel segment registers and stack. */
mov 8(%ebp), %ebx /* boot parameters offset */
mov 12(%ebp), %edx /* boot parameters length */
mov 16(%ebp), %eax /* address of a.out headers */
mov %ds, %ax /* kernel data */
mov %ax, %es
mov %ax, %fs
mov %ax, %gs
mov %ax, %ss
mov $_C_LABEL(k_boot_stktop) - 4, %esp /* set sp to point to the top of kernel stack */
/* Save boot parameters into these global variables for i386 code */
movl %edx, _C_LABEL(params_size)
movl %ebx, _C_LABEL(params_offset)
movl $SS_SELECTOR, _C_LABEL(mon_ds)
/* Call C startup code to set up a proper environment to run main(). */
push %edx
push %ebx
push $SS_SELECTOR
push $DS_SELECTOR
push $CS_SELECTOR
call _C_LABEL(cstart) /* cstart(cs, ds, mds, parmoff, parmlen) */
add $5*4, %esp
/* Reload gdtr, idtr and the segment registers to global descriptor table set */
/* up by prot_init(). */
lgdt _C_LABEL(gdt)+GDT_SELECTOR
lidt _C_LABEL(gdt)+IDT_SELECTOR
ljmp $CS_SELECTOR, $csinit
csinit:
movw $DS_SELECTOR, %ax
mov %ax, %ds
mov %ax, %es
mov %ax, %fs
mov %ax, %gs
mov %ax, %ss
movw $TSS_SELECTOR_BOOT, %ax /* no other TSS is used */
ltr %ax
push $0 /* set flags to known good state */
popf /* esp, clear nested task and int enable */
jmp _C_LABEL(main) /* main() */
/*===========================================================================*/
/* interrupt handlers */
/* interrupt handlers for 386 32-bit protected mode */
@@ -419,22 +313,26 @@ ENTRY(restore_user_context)
mov 4(%esp), %ebp /* will assume P_STACKBASE == 0 */
/* reconstruct the stack for iret */
movl SSREG(%ebp), %eax
push %eax
push $USER_DS_SELECTOR /* ss */
movl SPREG(%ebp), %eax
push %eax
movl PSWREG(%ebp), %eax
push %eax
movl CSREG(%ebp), %eax
push %eax
push $USER_CS_SELECTOR /* cs */
movl PCREG(%ebp), %eax
push %eax
/* Restore segments as the user should see them. */
movw $USER_DS_SELECTOR, %si
movw %si, %ds
movw %si, %es
movw %si, %fs
movw %si, %gs
/* Same for general-purpose registers. */
RESTORE_GP_REGS(%ebp)
RESTORE_SEGS(%ebp)
movl %ss:BPREG(%ebp), %ebp
movl BPREG(%ebp), %ebp
iret /* continue process */
@@ -582,7 +480,7 @@ ENTRY(startup_ap_32)
* we are in protected mode now, %cs is correct and we need to set the
* data descriptors before we can touch anything
*/
movw $DS_SELECTOR, %ax
movw $KERN_DS_SELECTOR, %ax
mov %ax, %ds
mov %ax, %ss
mov %ax, %es
@@ -613,6 +511,10 @@ ENTRY(startup_ap_32)
.data
.short 0x526F /* this must be the first data entry (magic #) */
.bss
k_initial_stack:
.space K_STACK_SIZE
LABEL(__k_unpaged_k_initial_stktop)
/*
* the kernel stack
*/


@@ -1,73 +0,0 @@
#include "kernel/kernel.h" /* configures the kernel */
#include <minix/config.h>
#include <minix/const.h>
#include <minix/com.h>
#include <machine/asm.h>
#include <machine/interrupt.h>
#include "archconst.h"
#include "kernel/const.h"
#include "kernel/proc.h"
#include "sconst.h"
#include <machine/multiboot.h>
#define GDT_SET_ENTRY(selector, base, limit) \
mov %ebp, %edi; \
add $(_C_LABEL(gdt) + selector), %edi; \
mov base, %eax; \
movw %ax, 2(%edi); \
shr $16, %eax; \
movb %al, 4(%edi); \
and $0xff00, %ax; \
andw $0xff, 6(%edi); \
or %ax, 6(%edi); \
mov limit, %eax; \
movw %ax, (%edi); \
shr $16, %eax; \
and $0xf, %ax; \
andb $0xf0, 6(%edi); \
or %ax, 6(%edi); \
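GDT_SET_ENTRY scatters a 32-bit base and 20-bit limit across the non-contiguous fields of an 8-byte i386 segment descriptor. An equivalent byte-level C sketch (offsets mirror the macro: limit 0-15 at bytes 0-1, base 0-23 at bytes 2-4, limit 16-19 in the low nibble of byte 6, base 24-31 in byte 7; illustrative, not a kernel API):

```c
#include <stdint.h>

/* Pack base/limit into an i386 descriptor, preserving the attribute
 * bits in the high nibble of byte 6, as the assembly macro does. */
static void gdt_set_entry(uint8_t desc[8], uint32_t base, uint32_t limit)
{
	desc[2] = base & 0xff;		/* base 0-7 */
	desc[3] = (base >> 8) & 0xff;	/* base 8-15 */
	desc[4] = (base >> 16) & 0xff;	/* base 16-23 */
	desc[7] = (base >> 24) & 0xff;	/* base 24-31 */
	desc[0] = limit & 0xff;		/* limit 0-7 */
	desc[1] = (limit >> 8) & 0xff;	/* limit 8-15 */
	desc[6] = (desc[6] & 0xf0) | ((limit >> 16) & 0x0f); /* limit 16-19 */
}
```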
IMPORT(pre_init)
.extern kernel_init
ENTRY(multiboot_init)
mov $(GDT_SIZE*DESC_SIZE), %eax
mov $(_C_LABEL(gdt) + GDT_SELECTOR), %edi
mov %ax, (%edi)
mov $_C_LABEL(gdt), %eax
mov %eax, 2(%edi)
lgdt (%edi)
ljmp $(CS_SELECTOR), $reload_cs
reload_cs:
mov $DS_SELECTOR, %eax
mov %eax, %ds
mov %eax, %ss
mov %eax, %es
mov %eax, %fs
mov %eax, %gs
mov $(multiboot_stack + MULTIBOOT_STACK_SIZE), %esp
push %ebx
call _C_LABEL(pre_init)
add $4, %esp
/* return to old boot code of kernel */
push %eax
push $MULTIBOOT_PARAM_BUF_SIZE
push $_C_LABEL(multiboot_param_buf)
push $0
mov $ES_SELECTOR, %eax
mov %eax, %es
jmp kernel_init
.data
LABEL(multiboot_param_buf)
.space MULTIBOOT_PARAM_BUF_SIZE
multiboot_stack:
.space MULTIBOOT_STACK_SIZE + 4

kernel/arch/i386/pg_utils.c (new file, 260 lines)

@@ -0,0 +1,260 @@
#include <minix/cpufeature.h>
#include <minix/type.h>
#include <libexec.h>
#include <assert.h>
#include "kernel.h"
#include "arch_proto.h"
#include <string.h>
#include <libexec.h>
#include <minix/type.h>
/* These are set/computed in kernel.lds. */
extern char _kern_vir_base, _kern_phys_base, _kern_size;
/* Retrieve the absolute values to something we can use. */
static phys_bytes kern_vir_start = (phys_bytes) &_kern_vir_base;
static phys_bytes kern_phys_start = (phys_bytes) &_kern_phys_base;
static phys_bytes kern_kernlen = (phys_bytes) &_kern_size;
/* page directory we can use to map things */
static u32_t pagedir[1024] __aligned(4096);
void cut_memmap(kinfo_t *cbi, phys_bytes start, phys_bytes end)
{
int m;
phys_bytes o;
if((o=start % I386_PAGE_SIZE))
start -= o;
if((o=end % I386_PAGE_SIZE))
end += I386_PAGE_SIZE - o;
for(m = 0; m < cbi->mmap_size; m++) {
phys_bytes substart = start, subend = end;
phys_bytes memaddr = cbi->memmap[m].addr,
memend = cbi->memmap[m].addr + cbi->memmap[m].len;
/* adjust cut range to be a subset of the free memory */
if(substart < memaddr) substart = memaddr;
if(subend > memend) subend = memend;
if(substart >= subend) continue;
/* if there is any overlap, forget this one and add
* 1-2 subranges back
*/
cbi->memmap[m].addr = cbi->memmap[m].len = 0;
if(substart > memaddr)
add_memmap(cbi, memaddr, substart-memaddr);
if(subend < memend)
add_memmap(cbi, subend, memend-subend);
}
}
void add_memmap(kinfo_t *cbi, u64_t addr, u64_t len)
{
int m;
phys_bytes highmark;
#define LIMIT 0xFFFFF000
/* Truncate available memory at 4GB as the rest of MINIX
* currently can't deal with anything bigger.
*/
if(addr > LIMIT) return;
if(addr + len > LIMIT) {
len -= (addr + len - LIMIT);
}
assert(cbi->mmap_size < MAXMEMMAP);
if(len == 0) return;
addr = roundup(addr, I386_PAGE_SIZE);
len = rounddown(len, I386_PAGE_SIZE);
for(m = 0; m < MAXMEMMAP; m++) {
if(cbi->memmap[m].len) continue;
cbi->memmap[m].addr = addr;
cbi->memmap[m].len = len;
cbi->memmap[m].type = MULTIBOOT_MEMORY_AVAILABLE;
if(m >= cbi->mmap_size)
cbi->mmap_size = m+1;
/* track the highest usable physical address seen so far */
highmark = addr + len;
if(highmark > cbi->mem_high_phys)
cbi->mem_high_phys = highmark;
return;
}
panic("no available memmap slot");
}
u32_t *alloc_pagetable(phys_bytes *ph)
{
u32_t *ret;
#define PG_PAGETABLES 3
static u32_t pagetables[PG_PAGETABLES][1024] __aligned(4096);
static int pt_inuse = 0;
if(pt_inuse >= PG_PAGETABLES) panic("no more pagetables");
assert(sizeof(pagetables[pt_inuse]) == I386_PAGE_SIZE);
ret = pagetables[pt_inuse++];
*ph = vir2phys(ret);
return ret;
}
#define PAGE_KB (I386_PAGE_SIZE / 1024)
phys_bytes pg_alloc_page(kinfo_t *cbi)
{
int m;
multiboot_memory_map_t *mmap;
for(m = cbi->mmap_size-1; m >= 0; m--) {
mmap = &cbi->memmap[m];
if(!mmap->len) continue;
assert(mmap->len > 0);
assert(!(mmap->len % I386_PAGE_SIZE));
assert(!(mmap->addr % I386_PAGE_SIZE));
mmap->len -= I386_PAGE_SIZE;
return mmap->addr + mmap->len;
}
panic("can't find free memory");
}
void pg_identity(void)
{
int i;
phys_bytes phys;
/* Set up an identity mapping page directory */
for(i = 0; i < I386_VM_DIR_ENTRIES; i++) {
phys = i * I386_BIG_PAGE_SIZE;
pagedir[i] = phys | I386_VM_PRESENT | I386_VM_BIGPAGE |
I386_VM_USER | I386_VM_WRITE;
}
}
int pg_mapkernel(void)
{
int pde;
u32_t mapped = 0, kern_phys = kern_phys_start;
assert(!(kern_vir_start % I386_BIG_PAGE_SIZE));
assert(!(kern_phys % I386_BIG_PAGE_SIZE));
pde = kern_vir_start / I386_BIG_PAGE_SIZE; /* start pde */
while(mapped < kern_kernlen) {
pagedir[pde] = kern_phys | I386_VM_PRESENT |
I386_VM_BIGPAGE | I386_VM_WRITE;
mapped += I386_BIG_PAGE_SIZE;
kern_phys += I386_BIG_PAGE_SIZE;
pde++;
}
return pde; /* free pde */
}
void vm_enable_paging(void)
{
u32_t cr0, cr4;
int pgeok;
pgeok = _cpufeature(_CPUF_I386_PGE);
cr0= read_cr0();
cr4= read_cr4();
/* The boot loader should have put us in protected mode. */
assert(cr0 & I386_CR0_PE);
/* First clear PG and PGE flag, as PGE must be enabled after PG. */
write_cr0(cr0 & ~I386_CR0_PG);
write_cr4(cr4 & ~(I386_CR4_PGE | I386_CR4_PSE));
cr0= read_cr0();
cr4= read_cr4();
/* Our page table contains 4MB entries. */
cr4 |= I386_CR4_PSE;
write_cr4(cr4);
/* First enable paging, then enable global page flag. */
cr0 |= I386_CR0_PG;
write_cr0(cr0);
cr0 |= I386_CR0_WP;
write_cr0(cr0);
/* May we enable these features? */
if(pgeok)
cr4 |= I386_CR4_PGE;
write_cr4(cr4);
}
phys_bytes pg_load()
{
phys_bytes phpagedir = vir2phys(pagedir);
write_cr3(phpagedir);
return phpagedir;
}
void pg_clear(void)
{
memset(pagedir, 0, sizeof(pagedir));
}
phys_bytes pg_rounddown(phys_bytes b)
{
phys_bytes o;
if(!(o = b % I386_PAGE_SIZE))
return b;
return b - o;
}
void pg_map(phys_bytes phys, vir_bytes vaddr, vir_bytes vaddr_end,
kinfo_t *cbi)
{
static int mapped_pde = -1;
static u32_t *pt = NULL;
int pde, pte;
if(phys == PG_ALLOCATEME) {
assert(!(vaddr % I386_PAGE_SIZE));
} else {
assert((vaddr % I386_PAGE_SIZE) == (phys % I386_PAGE_SIZE));
vaddr = pg_rounddown(vaddr);
phys = pg_rounddown(phys);
}
assert(vaddr < kern_vir_start);
while(vaddr < vaddr_end) {
phys_bytes source = phys;
assert(!(vaddr % I386_PAGE_SIZE));
if(phys == PG_ALLOCATEME) {
source = pg_alloc_page(cbi);
} else {
assert(!(phys % I386_PAGE_SIZE));
}
assert(!(source % I386_PAGE_SIZE));
pde = I386_VM_PDE(vaddr);
pte = I386_VM_PTE(vaddr);
if(mapped_pde < pde) {
phys_bytes ph;
pt = alloc_pagetable(&ph);
pagedir[pde] = (ph & I386_VM_ADDR_MASK)
| I386_VM_PRESENT | I386_VM_USER | I386_VM_WRITE;
mapped_pde = pde;
}
assert(pt);
pt[pte] = (source & I386_VM_ADDR_MASK) |
I386_VM_PRESENT | I386_VM_USER | I386_VM_WRITE;
vaddr += I386_PAGE_SIZE;
if(phys != PG_ALLOCATEME)
phys += I386_PAGE_SIZE;
}
}
void pg_info(reg_t *pagedir_ph, u32_t **pagedir_v)
{
*pagedir_ph = vir2phys(pagedir);
*pagedir_v = pagedir;
}


@@ -1,258 +1,60 @@
#include "kernel/kernel.h"
#define UNPAGED 1 /* for proper kmain() prototype */
#include "kernel.h"
#include <assert.h>
#include <stdlib.h>
#include <minix/minlib.h>
#include <minix/const.h>
/*
* == IMPORTANT ==
* Routines in this file cannot use any variable in the kernel BSS,
* since no BSS is allocated before the image is extracted.
* So pay attention to any external calls (including library calls).
*/
#include <minix/types.h>
#include <minix/type.h>
#include <minix/com.h>
#include <sys/param.h>
#include <sys/reboot.h>
#include <machine/partition.h>
#include "string.h"
#include "arch_proto.h"
#include "libexec.h"
#include "mb_utils.h"
#include "direct_utils.h"
#include "serial.h"
#include "glo.h"
#include <machine/multiboot.h>
#if USE_SYSDEBUG
#define MULTIBOOT_VERBOSE 1
#endif
/* FIXME: Share this define with kernel linker script */
#define MULTIBOOT_KERNEL_ADDR 0x00200000UL
/* to-be-built kinfo struct, diagnostics buffer */
kinfo_t kinfo;
struct kmessages kmess;
/* Granularity used in image file and copying */
#define GRAN 512
#define SECT_CEIL(x) ((((x) - 1) / GRAN + 1) * GRAN)
/* pg_utils.c uses this; in this phase, there is a 1:1 mapping. */
phys_bytes vir2phys(void *addr) { return (phys_bytes) addr; }
/* mb_utils.c uses this; we can reach it directly */
char *video_mem = (char *) MULTIBOOT_VIDEO_BUFFER;
/* String length used for mb_itoa */
#define ITOA_BUFFER_SIZE 20
#define mb_load_phymem(buf, phy, len) \
phys_copy((phy), (u32_t)(buf), (len))
#define mb_save_phymem(buf, phy, len) \
phys_copy((u32_t)(buf), (phy), (len))
#define mb_clear_memrange(start, end) \
phys_memset((start), 0, (end)-(start))
static void mb_itoa(u32_t val, char * out)
static int mb_set_param(char *bigbuf, char *name, char *value, kinfo_t *cbi)
{
char ret[ITOA_BUFFER_SIZE];
int i = ITOA_BUFFER_SIZE - 2;
/* Although there's a library version of itoa(int n),
* we can't use it since that implementation relies on the BSS segment.
*/
ret[ITOA_BUFFER_SIZE - 2] = '0';
if (val) {
for (; i >= 0; i--) {
char c;
if (val == 0) break;
c = val % 10;
val = val / 10;
c += '0';
ret[i] = c;
}
}
else
i--;
ret[ITOA_BUFFER_SIZE - 1] = 0;
strcpy(out, ret + i + 1);
}
static void mb_itox(u32_t val, char *out)
{
char ret[9];
int i = 7;
/* Convert a number to hex string */
ret[7] = '0';
if (val) {
for (; i >= 0; i--) {
char c;
if (val == 0) break;
c = val & 0xF;
val = val >> 4;
if (c > 9)
c += 'A' - 10;
else
c += '0';
ret[i] = c;
}
}
else
i--;
ret[8] = 0;
strcpy(out, ret + i + 1);
}
static void mb_put_char(char c, int line, int col)
{
/* Write a char to the VGA display buffer. */
if (line < MULTIBOOT_CONSOLE_LINES && col < MULTIBOOT_CONSOLE_COLS)
mb_save_phymem(
&c,
MULTIBOOT_VIDEO_BUFFER
+ line * MULTIBOOT_CONSOLE_COLS * 2
+ col * 2,
1);
}
static char mb_get_char(int line, int col)
{
char c;
/* Read a char from the display buffer. */
if (line < MULTIBOOT_CONSOLE_LINES && col < MULTIBOOT_CONSOLE_COLS)
mb_load_phymem(
&c,
MULTIBOOT_VIDEO_BUFFER
+ line * MULTIBOOT_CONSOLE_COLS * 2
+ col * 2,
1);
return c;
}
/* Give non-zero values to avoid them in BSS */
static int print_line = 1, print_col = 1;
#include <sys/video.h>
void mb_cls(void)
{
int i, j;
/* Clear screen */
for (i = 0; i < MULTIBOOT_CONSOLE_LINES; i++ )
for (j = 0; j < MULTIBOOT_CONSOLE_COLS; j++ )
mb_put_char(0, i, j);
print_line = print_col = 0;
/* Tell video hardware origin is 0. */
outb(C_6845+INDEX, VID_ORG);
outb(C_6845+DATA, 0);
outb(C_6845+INDEX, VID_ORG+1);
outb(C_6845+DATA, 0);
}
static void mb_scroll_up(int lines)
{
int i, j;
for (i = 0; i < MULTIBOOT_CONSOLE_LINES; i++ ) {
for (j = 0; j < MULTIBOOT_CONSOLE_COLS; j++ ) {
char c = 0;
if(i < MULTIBOOT_CONSOLE_LINES-lines)
c = mb_get_char(i + lines, j);
mb_put_char(c, i, j);
}
}
print_line-= lines;
}
void mb_print_char(char c)
{
while (print_line >= MULTIBOOT_CONSOLE_LINES)
mb_scroll_up(1);
if (c == '\n') {
while (print_col < MULTIBOOT_CONSOLE_COLS)
mb_put_char(' ', print_line, print_col++);
print_line++;
print_col = 0;
return;
}
mb_put_char(c, print_line, print_col++);
if (print_col >= MULTIBOOT_CONSOLE_COLS) {
print_line++;
print_col = 0;
}
while (print_line >= MULTIBOOT_CONSOLE_LINES)
mb_scroll_up(1);
}
void mb_print(char *str)
{
while (*str) {
mb_print_char(*str);
str++;
}
}
/* Standard and AT keyboard. (PS/2 MCA implies AT throughout.) */
#define KEYBD 0x60 /* I/O port for keyboard data */
#define KB_STATUS 0x64 /* I/O port for status on AT */
#define KB_OUT_FULL 0x01 /* status bit set when keypress char pending */
#define KB_AUX_BYTE 0x20 /* Auxiliary Device Output Buffer Full */
int mb_read_char(unsigned char *ch)
{
unsigned long b, sb;
#ifdef DEBUG_SERIAL
u8_t c, lsr;
if (do_serial_debug) {
lsr= inb(COM1_LSR);
if (!(lsr & LSR_DR))
return 0;
c = inb(COM1_RBR);
return 1;
}
#endif /* DEBUG_SERIAL */
sb = inb(KB_STATUS);
if (!(sb & KB_OUT_FULL)) {
return 0;
}
b = inb(KEYBD);
if (!(sb & KB_AUX_BYTE))
return 1;
return 0;
}
static void mb_print_hex(u32_t value)
{
int i;
char c;
char out[9] = "00000000";
/* Print a hex value */
for (i = 7; i >= 0; i--) {
c = value % 0x10;
value /= 0x10;
if (c < 10)
c += '0';
else
c += 'A'-10;
out[i] = c;
}
mb_print(out);
}
static int mb_set_param(char *name, char *value)
{
char *p = multiboot_param_buf;
char *p = bigbuf;
char *bufend = bigbuf + MULTIBOOT_PARAM_BUF_SIZE;
char *q;
int namelen = strlen(name);
int valuelen = strlen(value);
/* Some variables we recognize */
if(!strcmp(name, SERVARNAME)) { cbi->do_serial_debug = 1; return 0; }
if(!strcmp(name, SERBAUDVARNAME)) { cbi->serial_debug_baud = atoi(value); return 0; }
/* Delete the item if it already exists */
while (*p) {
if (strncmp(p, name, namelen) == 0 && p[namelen] == '=') {
q = p;
while (*q) q++;
for (q++;
q < multiboot_param_buf + MULTIBOOT_PARAM_BUF_SIZE;
q++, p++)
for (q++; q < bufend; q++, p++)
*p = *q;
break;
}
@@ -261,16 +63,12 @@ static int mb_set_param(char *name, char *value)
p++;
}
for (p = multiboot_param_buf;
p < multiboot_param_buf + MULTIBOOT_PARAM_BUF_SIZE
&& (*p || *(p + 1));
p++)
for (p = bigbuf; p < bufend && (*p || *(p + 1)); p++)
;
if (p > multiboot_param_buf) p++;
if (p > bigbuf) p++;
/* Make sure there's enough space for the new parameter */
if (p + namelen + valuelen + 3
> multiboot_param_buf + MULTIBOOT_PARAM_BUF_SIZE)
if (p + namelen + valuelen + 3 > bufend)
return -1;
strcpy(p, name);
@@ -281,202 +79,172 @@ static int mb_set_param(char *name, char *value)
return 0;
}
static void get_parameters(multiboot_info_t *mbi)
int overlaps(multiboot_module_t *mod, int n, int cmp_mod)
{
char mem_value[40], temp[ITOA_BUFFER_SIZE];
int i;
int dev;
int ctrlr;
int disk, prim, sub;
int var_i,value_i;
multiboot_module_t *cmp = &mod[cmp_mod];
int m;
#define INRANGE(mod, v) ((v) >= mod->mod_start && (v) <= mod->mod_end)
#define OVERLAP(mod1, mod2) (INRANGE(mod1, mod2->mod_start) || \
INRANGE(mod1, mod2->mod_end))
for(m = 0; m < n; m++) {
multiboot_module_t *thismod = &mod[m];
if(m == cmp_mod) continue;
if(OVERLAP(thismod, cmp))
return 1;
}
return 0;
}
void print_memmap(kinfo_t *cbi)
{
int m;
assert(cbi->mmap_size < MAXMEMMAP);
for(m = 0; m < cbi->mmap_size; m++) {
printf("%08lx-%08lx ",cbi->memmap[m].addr, cbi->memmap[m].addr + cbi->memmap[m].len);
}
printf("\nsize %08lx\n", cbi->mmap_size);
}
void get_parameters(u32_t ebx, kinfo_t *cbi)
{
multiboot_memory_map_t *mmap;
multiboot_info_t *mbi = &cbi->mbi;
int var_i,value_i, m, k;
char *p;
const static int dev_cNd0[] = { 0x0300, 0x0800, 0x0A00, 0x0C00, 0x1000 };
static char mb_cmd_buff[GRAN] = "add some value to avoid me in BSS";
static char var[GRAN] = "add some value to avoid me in BSS";
static char value[GRAN] = "add some value to avoid me in BSS";
for (i = 0; i < MULTIBOOT_PARAM_BUF_SIZE; i++)
multiboot_param_buf[i] = 0;
if (mbi->flags & MULTIBOOT_INFO_BOOTDEV) {
disk = ((mbi->boot_device&0xff000000) >> 24)-0x80;
prim = (mbi->boot_device & 0xff0000) >> 16;
if (prim == 0xff)
prim = 0;
sub = (mbi->boot_device & 0xff00) >> 8;
if (sub == 0xff)
sub = 0;
ctrlr = 0;
dev = dev_cNd0[ctrlr];
extern char _kern_phys_base, _kern_vir_base, _kern_size,
_kern_unpaged_start, _kern_unpaged_end;
phys_bytes kernbase = (phys_bytes) &_kern_phys_base,
kernsize = (phys_bytes) &_kern_size;
#define BUF 1024
static char cmdline[BUF];
/* Determine the value of rootdev */
dev += 0x80
+ (disk * NR_PARTITIONS + prim) * NR_PARTITIONS + sub;
/* get our own copy of the multiboot info struct and module list */
memcpy((void *) mbi, (void *) ebx, sizeof(*mbi));
mb_itoa(dev, temp);
mb_set_param("rootdev", temp);
mb_set_param("ramimagedev", temp);
}
mb_set_param("hz", "60");
if (mbi->flags & MULTIBOOT_INFO_MEMORY)
{
strcpy(mem_value, "800:");
mb_itox(
mbi->mem_lower * 1024 > MULTIBOOT_LOWER_MEM_MAX ?
MULTIBOOT_LOWER_MEM_MAX : mbi->mem_lower * 1024,
temp);
strcat(mem_value, temp);
strcat(mem_value, ",100000:");
mb_itox(mbi->mem_upper * 1024, temp);
strcat(mem_value, temp);
mb_set_param("memory", mem_value);
}
/* Set various bits of info for the higher-level kernel. */
cbi->mem_high_phys = 0;
cbi->user_sp = (vir_bytes) &_kern_vir_base;
cbi->vir_kern_start = (vir_bytes) &_kern_vir_base;
cbi->bootstrap_start = (vir_bytes) &_kern_unpaged_start;
cbi->bootstrap_len = (vir_bytes) &_kern_unpaged_end -
cbi->bootstrap_start;
cbi->kmess = &kmess;
/* set some configurable defaults */
cbi->do_serial_debug = 0;
cbi->serial_debug_baud = 115200;
/* parse boot command line */
if (mbi->flags&MULTIBOOT_INFO_CMDLINE) {
static char var[BUF];
static char value[BUF];
/* Override values with cmdline argument */
p = mb_cmd_buff;
mb_load_phymem(mb_cmd_buff, mbi->cmdline, GRAN);
memcpy(cmdline, (void *) mbi->cmdline, BUF);
p = cmdline;
while (*p) {
var_i = 0;
value_i = 0;
while (*p == ' ') p++;
if (!*p) break;
while (*p && *p != '=' && *p != ' ' && var_i < GRAN - 1)
while (*p && *p != '=' && *p != ' ' && var_i < BUF - 1)
var[var_i++] = *p++ ;
var[var_i] = 0;
if (*p++ != '=') continue; /* skip if not name=value */
while (*p && *p != ' ' && value_i < GRAN - 1)
while (*p && *p != ' ' && value_i < BUF - 1)
value[value_i++] = *p++ ;
value[value_i] = 0;
mb_set_param(var, value);
mb_set_param(cbi->param_buf, var, value, cbi);
}
}
}
static void mb_extract_image(multiboot_info_t mbi)
{
phys_bytes start_paddr = 0x5000000;
multiboot_module_t *mb_module_info;
multiboot_module_t *module;
u32_t mods_count = mbi.mods_count;
int r, i;
vir_bytes text_vaddr, text_filebytes, text_membytes;
vir_bytes data_vaddr, data_filebytes, data_membytes;
phys_bytes text_paddr, data_paddr;
vir_bytes stack_bytes;
vir_bytes pc;
off_t text_offset, data_offset;
/* round user stack down to leave a gap to catch kernel
* stack overflow; and to distinguish kernel and user addresses
* at a glance (0xf.. vs 0xe..)
*/
cbi->user_sp &= 0xF0000000;
cbi->user_end = cbi->user_sp;
/* Save memory map for kernel tasks */
r = read_header_elf((char *) MULTIBOOT_KERNEL_ADDR,
4096, /* everything is there */
&text_vaddr, &text_paddr,
&text_filebytes, &text_membytes,
&data_vaddr, &data_paddr,
&data_filebytes, &data_membytes,
&pc, &text_offset, &data_offset);
for (i = 0; i < NR_TASKS; ++i) {
image[i].memmap.text_vaddr = trunc_page(text_vaddr);
image[i].memmap.text_paddr = trunc_page(text_paddr);
image[i].memmap.text_bytes = text_membytes;
image[i].memmap.data_vaddr = trunc_page(data_vaddr);
image[i].memmap.data_paddr = trunc_page(data_paddr);
image[i].memmap.data_bytes = data_membytes;
image[i].memmap.stack_bytes = 0;
image[i].memmap.entry = pc;
assert(!(cbi->bootstrap_start % I386_PAGE_SIZE));
cbi->bootstrap_len = rounddown(cbi->bootstrap_len, I386_PAGE_SIZE);
assert(mbi->flags & MULTIBOOT_INFO_MODS);
assert(mbi->mods_count < MULTIBOOT_MAX_MODS);
assert(mbi->mods_count > 0);
memcpy(&cbi->module_list, (void *) mbi->mods_addr,
mbi->mods_count * sizeof(multiboot_module_t));
memset(cbi->memmap, 0, sizeof(cbi->memmap));
/* mem_map has a variable layout */
if(mbi->flags & MULTIBOOT_INFO_MEM_MAP) {
cbi->mmap_size = 0;
for (mmap = (multiboot_memory_map_t *) mbi->mmap_addr;
(unsigned long) mmap < mbi->mmap_addr + mbi->mmap_length;
mmap = (multiboot_memory_map_t *)
((unsigned long) mmap + mmap->size + sizeof(mmap->size))) {
if(mmap->type != MULTIBOOT_MEMORY_AVAILABLE) continue;
add_memmap(cbi, mmap->addr, mmap->len);
}
} else {
assert(mbi->flags & MULTIBOOT_INFO_MEMORY);
add_memmap(cbi, 0, mbi->mem_lower_unused*1024);
add_memmap(cbi, 0x100000, mbi->mem_upper_unused*1024);
}
#ifdef MULTIBOOT_VERBOSE
mb_print("\nKernel: ");
mb_print_hex(trunc_page(text_paddr));
mb_print("-");
mb_print_hex(trunc_page(data_paddr) + data_membytes);
mb_print(" Entry: ");
mb_print_hex(pc);
#endif
mb_module_info = ((multiboot_module_t *)mbi.mods_addr);
module = &mb_module_info[0];
/* Load boot image services into memory and save memory map */
for (i = 0; module < &mb_module_info[mods_count]; ++module, ++i) {
r = read_header_elf((char *) module->mod_start,
module->mod_end - module->mod_start + 1,
&text_vaddr, &text_paddr,
&text_filebytes, &text_membytes,
&data_vaddr, &data_paddr,
&data_filebytes, &data_membytes,
&pc, &text_offset, &data_offset);
if (r) {
mb_print("fatal: ELF parse failure\n");
/* Spin here */
while (1)
;
}
stack_bytes = image[NR_TASKS+i].stack_kbytes * 1024;
text_paddr = start_paddr + (text_vaddr & PAGE_MASK);
/* Load text segment */
phys_copy(module->mod_start+text_offset, text_paddr,
text_filebytes);
mb_clear_memrange(text_paddr+text_filebytes,
round_page(text_paddr) + text_membytes);
data_paddr = round_page((text_paddr + text_membytes));
data_paddr += data_vaddr & PAGE_MASK;
/* start of next module */
start_paddr = round_page(data_paddr + data_membytes + stack_bytes);
/* Load data and stack segments */
phys_copy(module->mod_start+data_offset, data_paddr, data_filebytes);
mb_clear_memrange(data_paddr+data_filebytes, start_paddr);
/* Save memmap for non-kernel tasks, so subscript past kernel
tasks. */
image[NR_TASKS+i].memmap.text_vaddr = trunc_page(text_vaddr);
image[NR_TASKS+i].memmap.text_paddr = trunc_page(text_paddr);
image[NR_TASKS+i].memmap.text_bytes = text_membytes;
image[NR_TASKS+i].memmap.data_vaddr = trunc_page(data_vaddr);
image[NR_TASKS+i].memmap.data_paddr = trunc_page(data_paddr);
image[NR_TASKS+i].memmap.data_bytes = data_membytes;
image[NR_TASKS+i].memmap.stack_bytes = stack_bytes;
image[NR_TASKS+i].memmap.entry = pc;
#ifdef MULTIBOOT_VERBOSE
mb_print("\n");
mb_print_hex(i);
mb_print(": ");
mb_print_hex(trunc_page(text_paddr));
mb_print("-");
mb_print_hex(trunc_page(data_paddr) + data_membytes + stack_bytes);
mb_print(" Entry: ");
mb_print_hex(pc);
mb_print(" Stack: ");
mb_print_hex(stack_bytes);
mb_print(" ");
mb_print((char *)module->cmdline);
/* Sanity check: neither the kernel nor any of the modules may
* overlap with each other. Pretend the kernel is an extra module
* for a second.
*/
k = mbi->mods_count;
assert(k < MULTIBOOT_MAX_MODS);
cbi->module_list[k].mod_start = kernbase;
cbi->module_list[k].mod_end = kernbase + kernsize;
cbi->mods_with_kernel = mbi->mods_count+1;
cbi->kern_mod = k;
for(m = 0; m < cbi->mods_with_kernel; m++) {
#if 0
printf("checking overlap of module %08lx-%08lx\n",
cbi->module_list[m].mod_start, cbi->module_list[m].mod_end);
#endif
if(overlaps(cbi->module_list, cbi->mods_with_kernel, m))
panic("overlapping boot modules/kernel");
/* We cut out the bits of memory that we know are
* occupied by the kernel and boot modules.
*/
cut_memmap(cbi,
cbi->module_list[m].mod_start,
cbi->module_list[m].mod_end);
}
return;
}
phys_bytes pre_init(u32_t ebx)
kinfo_t *pre_init(u32_t magic, u32_t ebx)
{
multiboot_info_t mbi;
/* Get our own copy of the boot params pointed to by ebx.
* Here we find out whether we should do serial output.
*/
get_parameters(ebx, &kinfo);
/* Say hello. */
printf("MINIX loading\n");
/* Do pre-initialization for multiboot, returning the physical
* address of the multiboot module info
*/
mb_cls();
mb_print("\nMINIX booting... ");
mb_load_phymem(&mbi, ebx, sizeof(mbi));
get_parameters(&mbi);
mb_print("\nLoading image... ");
mb_extract_image(mbi);
return mbi.mods_addr;
assert(magic == MULTIBOOT_BOOTLOADER_MAGIC);
/* Make and load a pagetable that will map the kernel
* to where it should be; but first a 1:1 mapping so
* this code stays where it should be.
*/
pg_clear();
pg_identity();
kinfo.freepde_start = pg_mapkernel();
pg_load();
vm_enable_paging();
/* Done, return boot info so it can be passed to kmain(). */
return &kinfo;
}
int send_sig(endpoint_t proc_nr, int sig_nr) { return 0; }
void minix_shutdown(timer_t *t) { arch_shutdown(RBT_PANIC); }
void busy_delay_ms(int x) { }


@@ -3,10 +3,6 @@ include "kernel.h"
include "proc.h"
struct proc
member GSREG p_reg.gs
member FSREG p_reg.fs
member ESREG p_reg.es
member DSREG p_reg.ds
member DIREG p_reg.di
member SIREG p_reg.si
member BPREG p_reg.fp
@@ -20,9 +16,4 @@ member PCREG p_reg.pc
member CSREG p_reg.cs
member PSWREG p_reg.psw
member SPREG p_reg.sp
member SSREG p_reg.ss
member P_LDT_SEL p_seg.p_ldt_sel
member P_CR3 p_seg.p_cr3
member P_CR3_V p_seg.p_cr3_v
member P_LDT p_seg.p_ldt


@@ -3,19 +3,23 @@
* for local descriptors in the process table.
*/
#include <string.h>
#include <assert.h>
#include <machine/multiboot.h>
#include "kernel/kernel.h"
#include "kernel/proc.h"
#include "archconst.h"
#include "arch_proto.h"
#include <libexec.h>
#define INT_GATE_TYPE (INT_286_GATE | DESC_386_BIT)
#define TSS_TYPE (AVL_286_TSS | DESC_386_BIT)
struct desctableptr_s {
char limit[sizeof(u16_t)];
char base[sizeof(u32_t)]; /* really u24_t + pad for 286 */
};
/* This is OK initially, when the 1:1 mapping is still there. */
char *video_mem = (char *) MULTIBOOT_VIDEO_BUFFER;
struct gatedesc_s {
u16_t offset_low;
@@ -23,27 +27,20 @@ struct gatedesc_s {
u8_t pad; /* |000|XXXXX| ig & trpg, |XXXXXXXX| task g */
u8_t p_dpl_type; /* |P|DL|0|TYPE| */
u16_t offset_high;
};
} __attribute__((packed));
/* Storage for gdt, idt and tss. */
static struct segdesc_s gdt[GDT_SIZE] __aligned(DESC_SIZE);
struct gatedesc_s idt[IDT_SIZE] __aligned(DESC_SIZE);
struct tss_s tss[CONFIG_MAX_CPUS];
/* used in klib.s and mpx.s */
struct segdesc_s gdt[GDT_SIZE] __aligned(DESC_SIZE) =
{ {0},
{0,0,0,0}, /* GDT descriptor */
{0,0,0,0}, /* IDT descriptor */
{0xffff,0,0,0x93,0xcf,0}, /* kernel DS */
{0xffff,0,0,0x93,0xcf,0}, /* kernel ES (386: flag 4 Gb at startup) */
{0xffff,0,0,0x93,0xcf,0}, /* kernel SS (386: monitor SS at startup) */
{0xffff,0,0,0x9b,0xcf,0}, /* kernel CS */
{0xffff,0,0,0x9b,0xcf,0}, /* temp for BIOS (386: monitor CS at startup) */
};
/* zero-init so none present */
static struct gatedesc_s idt[IDT_SIZE] __aligned(DESC_SIZE);
struct tss_s tss[CONFIG_MAX_CPUS]; /* zero init */
static void sdesc(struct segdesc_s *segdp, phys_bytes base, vir_bytes
size);
phys_bytes vir2phys(void *vir)
{
extern char _kern_vir_base, _kern_phys_base; /* in kernel.lds */
u32_t offset = (vir_bytes) &_kern_vir_base -
(vir_bytes) &_kern_phys_base;
return (phys_bytes)vir - offset;
}
/*===========================================================================*
* enable_iop *
@@ -58,51 +55,60 @@ void enable_iop(struct proc *pp)
pp->p_reg.psw |= 0x3000;
}
/*===========================================================================*
* seg2phys *
*===========================================================================*/
phys_bytes seg2phys(const u16_t seg)
{
/* Return the base address of a segment, with seg being a
* register, or a 286/386 segment selector.
*/
phys_bytes base;
struct segdesc_s *segdp;
segdp = &gdt[seg >> 3];
base = ((u32_t) segdp->base_low << 0)
| ((u32_t) segdp->base_middle << 16)
| ((u32_t) segdp->base_high << 24);
return base;
/*===========================================================================*
* sdesc *
*===========================================================================*/
void sdesc(struct segdesc_s *segdp, phys_bytes base, vir_bytes size)
{
/* Fill in the size fields (base, limit and granularity) of a descriptor. */
segdp->base_low = base;
segdp->base_middle = base >> BASE_MIDDLE_SHIFT;
segdp->base_high = base >> BASE_HIGH_SHIFT;
--size; /* convert to a limit, 0 size means 4G */
if (size > BYTE_GRAN_MAX) {
segdp->limit_low = size >> PAGE_GRAN_SHIFT;
segdp->granularity = GRANULAR | (size >>
(PAGE_GRAN_SHIFT + GRANULARITY_SHIFT));
} else {
segdp->limit_low = size;
segdp->granularity = size >> GRANULARITY_SHIFT;
}
segdp->granularity |= DEFAULT; /* means BIG for data seg */
}
/*===========================================================================*
* init_dataseg *
*===========================================================================*/
void init_dataseg(register struct segdesc_s *segdp,
void init_param_dataseg(register struct segdesc_s *segdp,
phys_bytes base, vir_bytes size, const int privilege)
{
/* Build descriptor for a data segment. */
sdesc(segdp, base, size);
segdp->access = (privilege << DPL_SHIFT) | (PRESENT | SEGMENT |
WRITEABLE);
WRITEABLE | ACCESSED);
/* EXECUTABLE = 0, EXPAND_DOWN = 0, ACCESSED = 0 */
}
void init_dataseg(int index, const int privilege)
{
init_param_dataseg(&gdt[index], 0, 0xFFFFFFFF, privilege);
}
/*===========================================================================*
* init_codeseg *
*===========================================================================*/
static void init_codeseg(register struct segdesc_s *segdp, phys_bytes base,
vir_bytes size, int privilege)
static void init_codeseg(int index, int privilege)
{
/* Build descriptor for a code segment. */
sdesc(segdp, base, size);
segdp->access = (privilege << DPL_SHIFT)
sdesc(&gdt[index], 0, 0xFFFFFFFF);
gdt[index].access = (privilege << DPL_SHIFT)
| (PRESENT | SEGMENT | EXECUTABLE | READABLE);
/* CONFORMING = 0, ACCESSED = 0 */
}
struct gate_table_s gate_table_pic[] = {
static struct gate_table_s gate_table_pic[] = {
{ hwint00, VECTOR( 0), INTR_PRIVILEGE },
{ hwint01, VECTOR( 1), INTR_PRIVILEGE },
{ hwint02, VECTOR( 2), INTR_PRIVILEGE },
@@ -122,17 +128,47 @@ struct gate_table_s gate_table_pic[] = {
{ NULL, 0, 0}
};
void tss_init(unsigned cpu, void * kernel_stack)
static struct gate_table_s gate_table_exceptions[] = {
{ divide_error, DIVIDE_VECTOR, INTR_PRIVILEGE },
{ single_step_exception, DEBUG_VECTOR, INTR_PRIVILEGE },
{ nmi, NMI_VECTOR, INTR_PRIVILEGE },
{ breakpoint_exception, BREAKPOINT_VECTOR, USER_PRIVILEGE },
{ overflow, OVERFLOW_VECTOR, USER_PRIVILEGE },
{ bounds_check, BOUNDS_VECTOR, INTR_PRIVILEGE },
{ inval_opcode, INVAL_OP_VECTOR, INTR_PRIVILEGE },
{ copr_not_available, COPROC_NOT_VECTOR, INTR_PRIVILEGE },
{ double_fault, DOUBLE_FAULT_VECTOR, INTR_PRIVILEGE },
{ copr_seg_overrun, COPROC_SEG_VECTOR, INTR_PRIVILEGE },
{ inval_tss, INVAL_TSS_VECTOR, INTR_PRIVILEGE },
{ segment_not_present, SEG_NOT_VECTOR, INTR_PRIVILEGE },
{ stack_exception, STACK_FAULT_VECTOR, INTR_PRIVILEGE },
{ general_protection, PROTECTION_VECTOR, INTR_PRIVILEGE },
{ page_fault, PAGE_FAULT_VECTOR, INTR_PRIVILEGE },
{ copr_error, COPROC_ERR_VECTOR, INTR_PRIVILEGE },
{ alignment_check, ALIGNMENT_CHECK_VECTOR, INTR_PRIVILEGE },
{ machine_check, MACHINE_CHECK_VECTOR, INTR_PRIVILEGE },
{ simd_exception, SIMD_EXCEPTION_VECTOR, INTR_PRIVILEGE },
{ ipc_entry, IPC_VECTOR, USER_PRIVILEGE },
{ kernel_call_entry, KERN_CALL_VECTOR, USER_PRIVILEGE },
{ NULL, 0, 0}
};
int tss_init(unsigned cpu, void * kernel_stack)
{
struct tss_s * t = &tss[cpu];
t->ss0 = DS_SELECTOR;
init_dataseg(&gdt[TSS_INDEX(cpu)], vir2phys(t),
sizeof(struct tss_s), INTR_PRIVILEGE);
gdt[TSS_INDEX(cpu)].access = PRESENT |
(INTR_PRIVILEGE << DPL_SHIFT) | TSS_TYPE;
int index = TSS_INDEX(cpu);
struct segdesc_s *tssgdt;
/* Complete building of main TSS. */
tssgdt = &gdt[index];
init_param_dataseg(tssgdt, (phys_bytes) t,
sizeof(struct tss_s), INTR_PRIVILEGE);
tssgdt->access = PRESENT | (INTR_PRIVILEGE << DPL_SHIFT) | TSS_TYPE;
/* Build TSS. */
memset(t, 0, sizeof(*t));
t->ds = t->es = t->fs = t->gs = t->ss0 = KERN_DS_SELECTOR;
t->cs = KERN_CS_SELECTOR;
t->iobase = sizeof(struct tss_s); /* empty i/o permissions map */
/*
@@ -145,344 +181,203 @@ void tss_init(unsigned cpu, void * kernel_stack)
* this stack in use when we trap to the kernel
*/
*((reg_t *)(t->sp0 + 1 * sizeof(reg_t))) = cpu;
return SEG_SELECTOR(index);
}
/*===========================================================================*
* prot_init *
*===========================================================================*/
void prot_init(void)
phys_bytes init_segdesc(int gdt_index, void *base, int size)
{
/* Set up tables for protected mode.
* All GDT slots are allocated at compile time.
*/
struct desctableptr_s *dtp;
unsigned ldt_index;
register struct proc *rp;
struct desctableptr_s *dtp = (struct desctableptr_s *) &gdt[gdt_index];
dtp->limit = size - 1;
dtp->base = (phys_bytes) base;
/* Click-round kernel. */
if(kinfo.data_base % CLICK_SIZE)
panic("kinfo.data_base not aligned");
kinfo.data_size = (phys_bytes) (CLICK_CEIL(kinfo.data_size));
return (phys_bytes) dtp;
}
/* Build gdt and idt pointers in GDT where the BIOS expects them. */
dtp= (struct desctableptr_s *) &gdt[GDT_INDEX];
* (u16_t *) dtp->limit = (sizeof gdt) - 1;
* (u32_t *) dtp->base = vir2phys(gdt);
void int_gate(struct gatedesc_s *tab,
unsigned vec_nr, vir_bytes offset, unsigned dpl_type)
{
/* Build descriptor for an interrupt gate. */
register struct gatedesc_s *idp;
dtp= (struct desctableptr_s *) &gdt[IDT_INDEX];
* (u16_t *) dtp->limit = (sizeof idt) - 1;
* (u32_t *) dtp->base = vir2phys(idt);
idp = &tab[vec_nr];
idp->offset_low = offset;
idp->selector = KERN_CS_SELECTOR;
idp->p_dpl_type = dpl_type;
idp->offset_high = offset >> OFFSET_HIGH_SHIFT;
}
/* Build segment descriptors for tasks and interrupt handlers. */
init_codeseg(&gdt[CS_INDEX],
kinfo.code_base, kinfo.code_size, INTR_PRIVILEGE);
init_dataseg(&gdt[DS_INDEX],
kinfo.data_base, kinfo.data_size, INTR_PRIVILEGE);
init_dataseg(&gdt[ES_INDEX], 0L, 0, INTR_PRIVILEGE);
/* Build local descriptors in GDT for LDT's in process table.
* The LDT's are allocated at compile time in the process table, and
* initialized whenever a process' map is initialized or changed.
*/
for (rp = BEG_PROC_ADDR, ldt_index = FIRST_LDT_INDEX;
rp < END_PROC_ADDR; ++rp, ldt_index++) {
init_dataseg(&gdt[ldt_index], vir2phys(rp->p_seg.p_ldt),
sizeof(rp->p_seg.p_ldt), INTR_PRIVILEGE);
gdt[ldt_index].access = PRESENT | LDT;
rp->p_seg.p_ldt_sel = ldt_index * DESC_SIZE;
}
/* Build boot TSS */
tss_init(0, &k_boot_stktop);
void int_gate_idt(unsigned vec_nr, vir_bytes offset, unsigned dpl_type)
{
int_gate(idt, vec_nr, offset, dpl_type);
}
void idt_copy_vectors(struct gate_table_s * first)
{
struct gate_table_s *gtp;
for (gtp = first; gtp->gate; gtp++) {
int_gate(gtp->vec_nr, (vir_bytes) gtp->gate,
int_gate(idt, gtp->vec_nr, (vir_bytes) gtp->gate,
PRESENT | INT_GATE_TYPE |
(gtp->privilege << DPL_SHIFT));
}
}
/* Build descriptors for interrupt gates in IDT. */
void idt_init(void)
void idt_copy_vectors_pic(void)
{
struct gate_table_s gate_table[] = {
{ divide_error, DIVIDE_VECTOR, INTR_PRIVILEGE },
{ single_step_exception, DEBUG_VECTOR, INTR_PRIVILEGE },
{ nmi, NMI_VECTOR, INTR_PRIVILEGE },
{ breakpoint_exception, BREAKPOINT_VECTOR, USER_PRIVILEGE },
{ overflow, OVERFLOW_VECTOR, USER_PRIVILEGE },
{ bounds_check, BOUNDS_VECTOR, INTR_PRIVILEGE },
{ inval_opcode, INVAL_OP_VECTOR, INTR_PRIVILEGE },
{ copr_not_available, COPROC_NOT_VECTOR, INTR_PRIVILEGE },
{ double_fault, DOUBLE_FAULT_VECTOR, INTR_PRIVILEGE },
{ copr_seg_overrun, COPROC_SEG_VECTOR, INTR_PRIVILEGE },
{ inval_tss, INVAL_TSS_VECTOR, INTR_PRIVILEGE },
{ segment_not_present, SEG_NOT_VECTOR, INTR_PRIVILEGE },
{ stack_exception, STACK_FAULT_VECTOR, INTR_PRIVILEGE },
{ general_protection, PROTECTION_VECTOR, INTR_PRIVILEGE },
{ page_fault, PAGE_FAULT_VECTOR, INTR_PRIVILEGE },
{ copr_error, COPROC_ERR_VECTOR, INTR_PRIVILEGE },
{ alignment_check, ALIGNMENT_CHECK_VECTOR, INTR_PRIVILEGE },
{ machine_check, MACHINE_CHECK_VECTOR, INTR_PRIVILEGE },
{ simd_exception, SIMD_EXCEPTION_VECTOR, INTR_PRIVILEGE },
{ ipc_entry, IPC_VECTOR, USER_PRIVILEGE },
{ kernel_call_entry, KERN_CALL_VECTOR, USER_PRIVILEGE },
{ NULL, 0, 0}
};
idt_copy_vectors(gate_table);
idt_copy_vectors(gate_table_pic);
}
/*===========================================================================*
* sdesc *
*===========================================================================*/
static void sdesc(segdp, base, size)
register struct segdesc_s *segdp;
phys_bytes base;
vir_bytes size;
void idt_init(void)
{
-/* Fill in the size fields (base, limit and granularity) of a descriptor. */
-segdp->base_low = base;
-segdp->base_middle = base >> BASE_MIDDLE_SHIFT;
-segdp->base_high = base >> BASE_HIGH_SHIFT;
---size; /* convert to a limit, 0 size means 4G */
-if (size > BYTE_GRAN_MAX) {
-segdp->limit_low = size >> PAGE_GRAN_SHIFT;
-segdp->granularity = GRANULAR | (size >>
-(PAGE_GRAN_SHIFT + GRANULARITY_SHIFT));
-} else {
-segdp->limit_low = size;
-segdp->granularity = size >> GRANULARITY_SHIFT;
-}
-segdp->granularity |= DEFAULT; /* means BIG for data seg */
+idt_copy_vectors_pic();
+idt_copy_vectors(gate_table_exceptions);
}
/*===========================================================================*
* int_gate *
*===========================================================================*/
void int_gate(unsigned vec_nr, vir_bytes offset, unsigned dpl_type)
{
/* Build descriptor for an interrupt gate. */
register struct gatedesc_s *idp;
struct desctableptr_s gdt_desc, idt_desc;
idp = &idt[vec_nr];
idp->offset_low = offset;
idp->selector = CS_SELECTOR;
idp->p_dpl_type = dpl_type;
idp->offset_high = offset >> OFFSET_HIGH_SHIFT;
+void idt_reload(void)
+{
+x86_lidt(&idt_desc);
+}
/*===========================================================================*
* alloc_segments *
*===========================================================================*/
-void alloc_segments(register struct proc *rp)
+multiboot_module_t *bootmod(int pnr)
{
-/* This is called at system initialization from main() and by do_newmap().
- * The code has a separate function because of all hardware-dependencies.
- */
-phys_bytes code_bytes;
-phys_bytes data_bytes;
-phys_bytes text_vaddr, data_vaddr;
-phys_bytes text_segbase, data_segbase;
-int privilege;
int i;
-data_bytes = (phys_bytes) (rp->p_memmap[S].mem_vir +
-rp->p_memmap[S].mem_len) << CLICK_SHIFT;
-if (rp->p_memmap[T].mem_len == 0)
-code_bytes = data_bytes; /* common I&D, poor protect */
-else
-code_bytes = (phys_bytes) rp->p_memmap[T].mem_len << CLICK_SHIFT;
-privilege = USER_PRIVILEGE;
+assert(pnr >= 0);
-text_vaddr = rp->p_memmap[T].mem_vir << CLICK_SHIFT;
-data_vaddr = rp->p_memmap[D].mem_vir << CLICK_SHIFT;
-text_segbase = (rp->p_memmap[T].mem_phys -
-rp->p_memmap[T].mem_vir) << CLICK_SHIFT;
-data_segbase = (rp->p_memmap[D].mem_phys -
-rp->p_memmap[D].mem_vir) << CLICK_SHIFT;
-init_codeseg(&rp->p_seg.p_ldt[CS_LDT_INDEX],
-text_segbase,
-text_vaddr + code_bytes, privilege);
-init_dataseg(&rp->p_seg.p_ldt[DS_LDT_INDEX],
-data_segbase,
-data_vaddr + data_bytes, privilege);
-rp->p_reg.cs = (CS_LDT_INDEX * DESC_SIZE) | TI | privilege;
-rp->p_reg.gs =
-rp->p_reg.fs =
-rp->p_reg.ss =
-rp->p_reg.es =
-rp->p_reg.ds = (DS_LDT_INDEX*DESC_SIZE) | TI | privilege;
}
#if 0
/*===========================================================================*
* check_segments *
*===========================================================================*/
static void check_segments(char *File, int line)
{
int checked = 0;
int fail = 0;
struct proc *rp;
for (rp = BEG_PROC_ADDR; rp < END_PROC_ADDR; ++rp) {
int privilege;
int cs, ds;
if (isemptyp(rp))
continue;
privilege = USER_PRIVILEGE;
cs = (CS_LDT_INDEX*DESC_SIZE) | TI | privilege;
ds = (DS_LDT_INDEX*DESC_SIZE) | TI | privilege;
#define CHECK(s1, s2) if(s1 != s2) { \
printf("%s:%d: " #s1 " != " #s2 " for ep %d\n", \
File, line, rp->p_endpoint); fail++; } checked++;
CHECK(rp->p_reg.cs, cs);
CHECK(rp->p_reg.gs, ds);
CHECK(rp->p_reg.fs, ds);
CHECK(rp->p_reg.ss, ds);
if(rp->p_endpoint != SYSTEM) {
CHECK(rp->p_reg.es, ds);
}
CHECK(rp->p_reg.ds, ds);
}
if(fail) {
printf("%d/%d checks failed\n", fail, checked);
panic("wrong: %d", fail);
}
}
#endif
/*===========================================================================*
* printseg *
*===========================================================================*/
void printseg(char *banner, const int iscs, struct proc *pr,
const u32_t selector)
{
#if USE_SYSDEBUG
u32_t base, limit, index, dpl;
struct segdesc_s *desc;
if(banner) { printf("%s", banner); }
index = selector >> 3;
printf("RPL %d, ind %d of ",
(selector & RPL_MASK), index);
if(selector & TI) {
printf("LDT");
if(index >= LDT_SIZE) {
printf("invalid index in ldt\n");
return;
+/* Search for desired process in boot process
+ * list. The first NR_TASKS ones do not correspond
+ * to a module, however, so we don't search those.
+ */
+for(i = NR_TASKS; i < NR_BOOT_PROCS; i++) {
+int p;
+p = i - NR_TASKS;
+if(image[i].proc_nr == pnr) {
+assert(p < MULTIBOOT_MAX_MODS);
+assert(p < kinfo.mbi.mods_count);
+return &kinfo.module_list[p];
+}
if(!pr) {
printf("local selector but unknown process\n");
return;
}
desc = &pr->p_seg.p_ldt[index];
} else {
printf("GDT");
if(index >= GDT_SIZE) {
printf("invalid index in gdt\n");
return;
}
desc = &gdt[index];
}
limit = desc->limit_low |
(((u32_t) desc->granularity & LIMIT_HIGH) << GRANULARITY_SHIFT);
if(desc->granularity & GRANULAR) {
limit = (limit << PAGE_GRAN_SHIFT) + 0xfff;
}
base = desc->base_low |
((u32_t) desc->base_middle << BASE_MIDDLE_SHIFT) |
((u32_t) desc->base_high << BASE_HIGH_SHIFT);
printf(" -> base 0x%08lx size 0x%08lx ", base, limit+1);
if(iscs) {
if(!(desc->granularity & BIG))
printf("16bit ");
} else {
if(!(desc->granularity & BIG))
printf("not big ");
}
if(desc->granularity & 0x20) { /* reserved */
panic("granularity reserved field set");
}
if(!(desc->access & PRESENT))
printf("notpresent ");
if(!(desc->access & SEGMENT))
printf("system ");
if(desc->access & EXECUTABLE) {
printf(" exec ");
if(desc->access & CONFORMING) printf("conforming ");
if(!(desc->access & READABLE)) printf("non-readable ");
} else {
printf("nonexec ");
if(desc->access & EXPAND_DOWN) printf("expand-down ");
if(!(desc->access & WRITEABLE)) printf("non-writable ");
}
if(!(desc->access & ACCESSED)) {
printf("nonacc ");
}
dpl = ((u32_t) desc->access & DPL) >> DPL_SHIFT;
printf("DPL %d\n", dpl);
return;
#endif /* USE_SYSDEBUG */
+panic("boot module %d not found", pnr);
}
/*===========================================================================*
-* prot_set_kern_seg_limit *
+* prot_init *
*===========================================================================*/
-int prot_set_kern_seg_limit(const vir_bytes limit)
+void prot_init()
{
-struct proc *rp;
-int orig_click;
-int incr_clicks;
+int sel_tss;
+extern char k_boot_stktop;
-if(limit <= kinfo.data_base) {
-printf("prot_set_kern_seg_limit: limit bogus\n");
-return EINVAL;
-}
+memset(gdt, 0, sizeof(gdt));
+memset(idt, 0, sizeof(idt));
-/* Do actual increase. */
-orig_click = kinfo.data_size / CLICK_SIZE;
-kinfo.data_size = limit - kinfo.data_base;
-incr_clicks = kinfo.data_size / CLICK_SIZE - orig_click;
+/* Build GDT, IDT, IDT descriptors. */
+gdt_desc.base = (u32_t) gdt;
+gdt_desc.limit = sizeof(gdt)-1;
+idt_desc.base = (u32_t) idt;
+idt_desc.limit = sizeof(idt)-1;
+sel_tss = tss_init(0, &k_boot_stktop);
-prot_init();
+/* Build GDT */
+init_param_dataseg(&gdt[LDT_INDEX],
+(phys_bytes) 0, 0, INTR_PRIVILEGE); /* unusable LDT */
+gdt[LDT_INDEX].access = PRESENT | LDT;
+init_codeseg(KERN_CS_INDEX, INTR_PRIVILEGE);
+init_dataseg(KERN_DS_INDEX, INTR_PRIVILEGE);
+init_codeseg(USER_CS_INDEX, USER_PRIVILEGE);
+init_dataseg(USER_DS_INDEX, USER_PRIVILEGE);
-/* Increase kernel processes too. */
-for (rp = BEG_PROC_ADDR; rp < END_PROC_ADDR; ++rp) {
-if (isemptyp(rp) || !iskernelp(rp))
-continue;
-rp->p_memmap[S].mem_len += incr_clicks;
-alloc_segments(rp);
-rp->p_memmap[S].mem_len -= incr_clicks;
-}
+x86_lgdt(&gdt_desc); /* Load gdt */
+idt_init();
+idt_reload();
+x86_lldt(LDT_SELECTOR); /* Load bogus ldt */
+x86_ltr(sel_tss); /* Load global TSS */
-return OK;
+/* Currently the multiboot segments are loaded; which is fine, but
+ * let's replace them with the ones from our own GDT so we test
+ * right away whether they work as expected.
+ */
+x86_load_kerncs();
+x86_load_ds(KERN_DS_SELECTOR);
+x86_load_es(KERN_DS_SELECTOR);
+x86_load_fs(KERN_DS_SELECTOR);
+x86_load_gs(KERN_DS_SELECTOR);
+x86_load_ss(KERN_DS_SELECTOR);
+/* Set up a new post-relocate bootstrap pagetable so that
+ * we can map in VM, and we no longer rely on pre-relocated
+ * data.
+ */
+pg_clear();
+pg_identity(); /* Still need 1:1 for lapic and video mem and such. */
+pg_mapkernel();
+pg_load();
+bootstrap_pagetable_done = 1; /* secondary CPU's can use it too */
}
void arch_post_init(void)
{
/* Let memory mapping code know what's going on at bootstrap time */
struct proc *vm;
vm = proc_addr(VM_PROC_NR);
get_cpulocal_var(ptproc) = vm;
pg_info(&vm->p_seg.p_cr3, &vm->p_seg.p_cr3_v);
}
int libexec_pg_alloc(struct exec_info *execi, off_t vaddr, size_t len)
{
pg_map(PG_ALLOCATEME, vaddr, vaddr+len, &kinfo);
pg_load();
memset((char *) vaddr, 0, len);
return OK;
}
void arch_boot_proc(struct boot_image *ip, struct proc *rp)
{
multiboot_module_t *mod;
if(rp->p_nr < 0) return;
mod = bootmod(rp->p_nr);
/* Important special case: we put VM in the bootstrap pagetable
* so it can run.
*/
if(rp->p_nr == VM_PROC_NR) {
struct exec_info execi;
memset(&execi, 0, sizeof(execi));
/* exec parameters */
execi.stack_high = kinfo.user_sp;
execi.stack_size = 16 * 1024; /* not too crazy as it must be preallocated */
execi.proc_e = ip->endpoint;
execi.hdr = (char *) mod->mod_start; /* phys mem direct */
execi.hdr_len = mod->mod_end - mod->mod_start;
strcpy(execi.progname, ip->proc_name);
execi.frame_len = 0;
/* callbacks for use in the kernel */
execi.copymem = libexec_copy_memcpy;
execi.clearmem = libexec_clear_memset;
execi.allocmem_prealloc = libexec_pg_alloc;
execi.allocmem_ondemand = libexec_pg_alloc;
execi.clearproc = NULL;
/* parse VM ELF binary and alloc/map it into bootstrap pagetable */
libexec_load_elf(&execi);
/* Initialize the server stack pointer. Take it down three words
* to give startup code something to use as "argc", "argv" and "envp".
*/
arch_proc_init(rp, execi.pc, kinfo.user_sp - 3*4, ip->proc_name);
/* Free VM blob that was just copied into existence. */
cut_memmap(&kinfo, mod->mod_start, mod->mod_end);
}
}


@ -17,7 +17,7 @@
* zeroed
*/
#define TEST_INT_IN_KERNEL(displ, label) \
-cmpl $CS_SELECTOR, displ(%esp) ;\
+cmpl $KERN_CS_SELECTOR, displ(%esp) ;\
je label ;
/*
@ -36,28 +36,12 @@
movl tmp, PSWREG(pptr) ;\
movl (12 + displ)(%esp), tmp ;\
movl tmp, SPREG(pptr) ;\
movl tmp, STREG(pptr) ;\
movl (16 + displ)(%esp), tmp ;\
movl tmp, SSREG(pptr) ;
-#define SAVE_SEGS(pptr) \
-mov %ds, %ss:DSREG(pptr) ;\
-mov %es, %ss:ESREG(pptr) ;\
-mov %fs, %ss:FSREG(pptr) ;\
-mov %gs, %ss:GSREG(pptr) ;
-#define RESTORE_SEGS(pptr) \
-movw %ss:DSREG(pptr), %ds ;\
-movw %ss:ESREG(pptr), %es ;\
-movw %ss:FSREG(pptr), %fs ;\
-movw %ss:GSREG(pptr), %gs ;
movl tmp, STREG(pptr)
/*
-* restore kernel segments, %ss is kernel data segment, %cs is already set and
-* %fs, %gs are not used
-*/
+* restore kernel segments. %cs is already set and %fs, %gs are not used */
#define RESTORE_KERNEL_SEGS \
-mov %ss, %si ;\
+mov $KERN_DS_SELECTOR, %si ;\
mov %si, %ds ;\
mov %si, %es ;\
movw $0, %si ;\
@ -65,20 +49,20 @@
mov %si, %fs ;
#define SAVE_GP_REGS(pptr) \
-mov %eax, %ss:AXREG(pptr) ;\
-mov %ecx, %ss:CXREG(pptr) ;\
-mov %edx, %ss:DXREG(pptr) ;\
-mov %ebx, %ss:BXREG(pptr) ;\
-mov %esi, %ss:SIREG(pptr) ;\
-mov %edi, %ss:DIREG(pptr) ;
+mov %eax, AXREG(pptr) ;\
+mov %ecx, CXREG(pptr) ;\
+mov %edx, DXREG(pptr) ;\
+mov %ebx, BXREG(pptr) ;\
+mov %esi, SIREG(pptr) ;\
+mov %edi, DIREG(pptr) ;
#define RESTORE_GP_REGS(pptr) \
-movl %ss:AXREG(pptr), %eax ;\
-movl %ss:CXREG(pptr), %ecx ;\
-movl %ss:DXREG(pptr), %edx ;\
-movl %ss:BXREG(pptr), %ebx ;\
-movl %ss:SIREG(pptr), %esi ;\
-movl %ss:DIREG(pptr), %edi ;
+movl AXREG(pptr), %eax ;\
+movl CXREG(pptr), %ecx ;\
+movl DXREG(pptr), %edx ;\
+movl BXREG(pptr), %ebx ;\
+movl SIREG(pptr), %esi ;\
+movl DIREG(pptr), %edi ;
/*
* save the context of the interrupted process to the structure in the process
@ -97,12 +81,9 @@
;\
movl (CURR_PROC_PTR + 4 + displ)(%esp), %ebp ;\
\
-/* save the segment registers */ \
-SAVE_SEGS(%ebp) ;\
\
SAVE_GP_REGS(%ebp) ;\
pop %esi /* get the orig %ebp and save it */ ;\
-mov %esi, %ss:BPREG(%ebp) ;\
+mov %esi, BPREG(%ebp) ;\
\
RESTORE_KERNEL_SEGS ;\
SAVE_TRAP_CTX(displ, %ebp, %esi) ;


@ -20,7 +20,7 @@ ENTRY(trampoline)
orb $1, %al
mov %eax, %cr0
-ljmpl $CS_SELECTOR, $_C_LABEL(startup_ap_32)
+ljmpl $KERN_CS_SELECTOR, $_C_LABEL(startup_ap_32)
.balign 4
LABEL(__ap_id)


@ -141,7 +141,7 @@ int timer_int_handler(void)
}
#ifdef DEBUG_SERIAL
-if (do_serial_debug)
+if (kinfo.do_serial_debug)
do_ser_debug();
#endif


@ -43,12 +43,6 @@
#define USE_MEMSET 1 /* write char to a given memory area */
#define USE_RUNCTL 1 /* control stop flags of a process */
-/* Length of program names stored in the process table. This is only used
- * for the debugging dumps that can be generated with the IS server. The PM
- * server keeps its own copy of the program name.
- */
-#define P_NAME_LEN 8
/* This section contains defines for valuable system resources that are used
* by device drivers. The number of elements of the vectors is determined by
* the maximum needed by any given driver. The number of interrupt hooks may


@ -27,10 +27,6 @@
#define unset_sys_bit(map,bit) \
( MAP_CHUNK((map).chunk,bit) &= ~(1 << CHUNK_OFFSET(bit) ))
-/* args to intr_init() */
-#define INTS_ORIG 0 /* restore interrupts */
-#define INTS_MINIX 1 /* initialize interrupts for minix */
/* for kputc() */
#define END_OF_KMESS 0


@ -175,7 +175,6 @@ miscflagstr(const u32_t flags)
str[0] = '\0';
FLAG(MF_REPLY_PEND);
-FLAG(MF_FULLVM);
FLAG(MF_DELIVERMSG);
FLAG(MF_KCALL_RESUME);


@ -22,7 +22,6 @@
EXTERN struct kinfo kinfo; /* kernel information for users */
EXTERN struct machine machine; /* machine information for users */
EXTERN struct kmessages kmess; /* diagnostic messages in kernel */
-EXTERN char kmess_buf[80*25]; /* printable copy of message buffer */
EXTERN struct k_randomness krandom; /* gather kernel random information */
EXTERN struct loadinfo kloadinfo; /* status of load average */
@ -40,14 +39,8 @@ EXTERN int irq_use; /* map of all in-use irq's */
EXTERN u32_t system_hz; /* HZ value */
/* Miscellaneous. */
EXTERN int do_serial_debug;
EXTERN int serial_debug_baud;
EXTERN time_t boottime;
EXTERN char params_buffer[512]; /* boot monitor parameters */
EXTERN int minix_panicing;
EXTERN int verboseboot; /* verbose boot, init'ed in cstart */
#define MAGICTEST 0xC0FFEE23
EXTERN u32_t magictest; /* global magic number */
#if DEBUG_TRACE
EXTERN int verboseflags;
@ -66,14 +59,14 @@ EXTERN u64_t cpu_hz[CONFIG_MAX_CPUS];
#ifdef CONFIG_SMP
EXTERN int config_no_smp; /* optionally turn off SMP */
#endif
+EXTERN int bootstrap_pagetable_done;
/* VM */
EXTERN int vm_running;
EXTERN int catch_pagefaults;
/* Variables that are initialized elsewhere are just extern here. */
-extern struct boot_image image[]; /* system image processes */
-extern struct segdesc_s gdt[]; /* global descriptor table */
+extern struct boot_image image[NR_BOOT_PROCS]; /* system image processes */
EXTERN volatile int serial_debug_active;
@ -85,4 +78,7 @@ EXTERN u64_t bkl_ticks[CONFIG_MAX_CPUS];
EXTERN unsigned bkl_tries[CONFIG_MAX_CPUS];
EXTERN unsigned bkl_succ[CONFIG_MAX_CPUS];
+/* Feature flags */
+EXTERN int minix_feature_flags;
#endif /* GLO_H */


@ -10,12 +10,16 @@
*/
#include "kernel.h"
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <libexec.h>
#include <a.out.h>
#include <minix/com.h>
#include <minix/endpoint.h>
#include <machine/vmparam.h>
#include <minix/u64.h>
#include <minix/type.h>
#include "proc.h"
#include "debug.h"
#include "clock.h"
@ -102,35 +106,40 @@ void bsp_finish_booting(void)
machine.processors_count = 1;
machine.bsp_id = 0;
#endif
switch_to_user();
NOT_REACHABLE;
}
/*===========================================================================*
-* main *
+* kmain *
*===========================================================================*/
-int main(void)
+void kmain(kinfo_t *local_cbi)
{
/* Start the ball rolling. */
struct boot_image *ip; /* boot image pointer */
register struct proc *rp; /* process pointer */
register int i, j;
-size_t argsz; /* size of arguments passed to crtso on stack */
+/* save a global copy of the boot parameters */
+memcpy(&kinfo, local_cbi, sizeof(kinfo));
+memcpy(&kmess, kinfo.kmess, sizeof(kmess));
+/* We can talk now */
+printf("MINIX booting\n");
+assert(sizeof(kinfo.boot_procs) == sizeof(image));
+memcpy(kinfo.boot_procs, image, sizeof(kinfo.boot_procs));
+cstart();
BKL_LOCK();
-/* Global value to test segment sanity. */
-magictest = MAGICTEST;
DEBUGEXTRA(("main()\n"));
proc_init();
-/* Set up proc table entries for processes in boot image. The stacks
- * of the servers have been added to the data segment by the monitor, so
- * the stack pointer is set to the end of the data segment.
- */
+/* Set up proc table entries for processes in boot image. */
for (i=0; i < NR_BOOT_PROCS; ++i) {
int schedulable_proc;
proc_nr_t proc_nr;
@ -142,7 +151,13 @@ int main(void)
rp = proc_addr(ip->proc_nr); /* get process pointer */
ip->endpoint = rp->p_endpoint; /* ipc endpoint */
make_zero64(rp->p_cpu_time_left);
strncpy(rp->p_name, ip->proc_name, P_NAME_LEN); /* set process name */
+if(i >= NR_TASKS) {
+/* Remember this so it can be passed to VM */
+multiboot_module_t *mb_mod = &kinfo.module_list[i - NR_TASKS];
+ip->start_addr = mb_mod->mod_start;
+ip->len = mb_mod->mod_end - mb_mod->mod_start;
+}
reset_proc_accounting(rp);
@ -154,13 +169,23 @@ int main(void)
* process has set their privileges.
*/
proc_nr = proc_nr(rp);
-schedulable_proc = (iskerneln(proc_nr) || isrootsysn(proc_nr));
+schedulable_proc = (iskerneln(proc_nr) || isrootsysn(proc_nr) ||
+proc_nr == VM_PROC_NR);
if(schedulable_proc) {
/* Assign privilege structure. Force a static privilege id. */
(void) get_priv(rp, static_priv_id(proc_nr));
/* Privileges for kernel tasks. */
-if(iskerneln(proc_nr)) {
+if(proc_nr == VM_PROC_NR) {
+priv(rp)->s_flags = VM_F;
+priv(rp)->s_trap_mask = SRV_T;
+ipc_to_m = SRV_M;
+kcalls = SRV_KC;
+priv(rp)->s_sig_mgr = SELF;
+rp->p_priority = SRV_Q;
+rp->p_quantum_size_ms = SRV_QT;
+}
+else if(iskerneln(proc_nr)) {
/* Privilege flags. */
priv(rp)->s_flags = (proc_nr == IDLE ? IDL_F : TSK_F);
/* Allowed traps. */
@ -203,64 +228,34 @@ int main(void)
/* Don't let the process run for now. */
RTS_SET(rp, RTS_NO_PRIV | RTS_NO_QUANTUM);
}
-rp->p_memmap[T].mem_vir = ABS2CLICK(ip->memmap.text_vaddr);
-rp->p_memmap[T].mem_phys = ABS2CLICK(ip->memmap.text_paddr);
-rp->p_memmap[T].mem_len = ABS2CLICK(ip->memmap.text_bytes);
-rp->p_memmap[D].mem_vir = ABS2CLICK(ip->memmap.data_vaddr);
-rp->p_memmap[D].mem_phys = ABS2CLICK(ip->memmap.data_paddr);
-rp->p_memmap[D].mem_len = ABS2CLICK(ip->memmap.data_bytes);
-rp->p_memmap[S].mem_phys = ABS2CLICK(ip->memmap.data_paddr +
-ip->memmap.data_bytes +
-ip->memmap.stack_bytes);
-rp->p_memmap[S].mem_vir = ABS2CLICK(ip->memmap.data_vaddr +
-ip->memmap.data_bytes +
-ip->memmap.stack_bytes);
-rp->p_memmap[S].mem_len = 0;
-/* Set initial register values. The processor status word for tasks
- * is different from that of other processes because tasks can
- * access I/O; this is not allowed to less-privileged processes
- */
-rp->p_reg.pc = ip->memmap.entry;
-rp->p_reg.psw = (iskerneln(proc_nr)) ? INIT_TASK_PSW : INIT_PSW;
-/* Initialize the server stack pointer. Take it down three words
- * to give crtso.s something to use as "argc", "argv" and "envp".
- */
-if (isusern(proc_nr)) { /* user-space process? */
-rp->p_reg.sp = (rp->p_memmap[S].mem_vir +
-rp->p_memmap[S].mem_len) << CLICK_SHIFT;
-argsz = 3 * sizeof(reg_t);
-rp->p_reg.sp -= argsz;
-phys_memset(rp->p_reg.sp -
-(rp->p_memmap[S].mem_vir << CLICK_SHIFT) +
-(rp->p_memmap[S].mem_phys << CLICK_SHIFT),
-0, argsz);
-}
+/* Arch-specific state initialization. */
+arch_boot_proc(ip, rp);
/* scheduling functions depend on proc_ptr pointing somewhere. */
if(!get_cpulocal_var(proc_ptr))
get_cpulocal_var(proc_ptr) = rp;
-/* If this process has its own page table, VM will set the
- * PT up and manage it. VM will signal the kernel when it has
- * done this; until then, don't let it run.
- */
-if(ip->flags & PROC_FULLVM)
+/* Process isn't scheduled until VM has set up a pagetable for it. */
+if(rp->p_nr != VM_PROC_NR && rp->p_nr >= 0)
rp->p_rts_flags |= RTS_VMINHIBIT;
+rp->p_rts_flags |= RTS_PROC_STOP;
rp->p_rts_flags &= ~RTS_SLOT_FREE;
-alloc_segments(rp);
DEBUGEXTRA(("done\n"));
}
+/* update boot procs info for VM */
+memcpy(kinfo.boot_procs, image, sizeof(kinfo.boot_procs));
#define IPCNAME(n) { \
assert((n) >= 0 && (n) <= IPCNO_HIGHEST); \
assert(!ipc_call_names[n]); \
ipc_call_names[n] = #n; \
}
+arch_post_init();
IPCNAME(SEND);
IPCNAME(RECEIVE);
IPCNAME(SENDREC);
@ -268,16 +263,17 @@ int main(void)
IPCNAME(SENDNB);
IPCNAME(SENDA);
/* Architecture-dependent initialization. */
DEBUGEXTRA(("arch_init()... "));
arch_init();
DEBUGEXTRA(("done\n"));
/* System and processes initialization */
memory_init();
DEBUGEXTRA(("system_init()... "));
system_init();
DEBUGEXTRA(("done\n"));
+/* The bootstrap phase is over, so we can add the physical
+ * memory used for it to the free list.
+ */
+add_memmap(&kinfo, kinfo.bootstrap_start, kinfo.bootstrap_len);
#ifdef CONFIG_SMP
if (config_no_apic) {
BOOT_VERBOSE(printf("APIC disabled, disables SMP, using legacy PIC\n"));
@ -303,7 +299,6 @@ int main(void)
#endif
NOT_REACHABLE;
-return 1;
}
/*===========================================================================*
@ -360,7 +355,127 @@ void minix_shutdown(timer_t *tp)
#endif
hw_intr_disable_all();
stop_local_timer();
-intr_init(INTS_ORIG, 0);
arch_shutdown(tp ? tmr_arg(tp)->ta_int : RBT_PANIC);
}
/*===========================================================================*
* cstart *
*===========================================================================*/
void cstart()
{
/* Perform system initializations prior to calling main(). Most settings are
* determined with help of the environment strings passed by MINIX' loader.
*/
register char *value; /* value in key=value pair */
int h;
/* low-level initialization */
prot_init();
/* determine verbosity */
if ((value = env_get(VERBOSEBOOTVARNAME)))
verboseboot = atoi(value);
/* Get clock tick frequency. */
value = env_get("hz");
if(value)
system_hz = atoi(value);
if(!value || system_hz < 2 || system_hz > 50000) /* sanity check */
system_hz = DEFAULT_HZ;
DEBUGEXTRA(("cstart\n"));
/* Record miscellaneous information for user-space servers. */
kinfo.nr_procs = NR_PROCS;
kinfo.nr_tasks = NR_TASKS;
strncpy(kinfo.release, OS_RELEASE, sizeof(kinfo.release));
kinfo.release[sizeof(kinfo.release)-1] = '\0';
strncpy(kinfo.version, OS_VERSION, sizeof(kinfo.version));
kinfo.version[sizeof(kinfo.version)-1] = '\0';
/* Load average data initialization. */
kloadinfo.proc_last_slot = 0;
for(h = 0; h < _LOAD_HISTORY; h++)
kloadinfo.proc_load_history[h] = 0;
#ifdef USE_APIC
value = env_get("no_apic");
if(value)
config_no_apic = atoi(value);
else
config_no_apic = 1;
value = env_get("apic_timer_x");
if(value)
config_apic_timer_x = atoi(value);
else
config_apic_timer_x = 1;
#endif
#ifdef USE_WATCHDOG
value = env_get("watchdog");
if (value)
watchdog_enabled = atoi(value);
#endif
#ifdef CONFIG_SMP
if (config_no_apic)
config_no_smp = 1;
value = env_get("no_smp");
if(value)
config_no_smp = atoi(value);
else
config_no_smp = 0;
#endif
DEBUGEXTRA(("intr_init(0)\n"));
intr_init(0);
arch_init();
}
/*===========================================================================*
* get_value *
*===========================================================================*/
char *get_value(
const char *params, /* boot monitor parameters */
const char *name /* key to look up */
)
{
/* Get environment value - kernel version of getenv to avoid setting up the
* usual environment array.
*/
register const char *namep;
register char *envp;
for (envp = (char *) params; *envp != 0;) {
for (namep = name; *namep != 0 && *namep == *envp; namep++, envp++)
;
if (*namep == '\0' && *envp == '=') return(envp + 1);
while (*envp++ != 0)
;
}
return(NULL);
}
/*===========================================================================*
* env_get *
*===========================================================================*/
char *env_get(const char *name)
{
return get_value(kinfo.param_buf, name);
}
void cpu_print_freq(unsigned cpu)
{
u64_t freq;
freq = cpu_get_freq(cpu);
printf("CPU %d freq %lu MHz\n", cpu, div64u(freq, 1000000));
}
int is_fpu(void)
{
return get_cpulocal_var(fpu_presence);
}


@ -6,11 +6,4 @@
/* Enable copy-on-write optimization for safecopy. */
#define PERF_USE_COW_SAFECOPY 0
-/* Use a private page table for critical system processes. */
-#ifdef CONFIG_SMP
-#define PERF_SYS_CORE_FULLVM 1
-#else
-#define PERF_SYS_CORE_FULLVM 0
-#endif
#endif /* PERF_H */


@ -72,10 +72,7 @@ static void set_idle_name(char * name, int n)
{
int i, c;
int p_z = 0;
-/*
- * P_NAME_LEN limits us to 3 characters for the idle task number. 999
- * should be enough though.
- */
if (n > 999)
n = 999;
@ -138,7 +135,7 @@ void proc_init(void)
rp->p_quantum_size_ms = 0; /* no quantum size */
/* arch-specific initialization */
-arch_proc_init(i, rp);
+arch_proc_reset(rp);
}
for (sp = BEG_PRIV_ADDR, i = 0; sp < END_PRIV_ADDR; ++sp, ++i) {
sp->s_proc_nr = NONE; /* initialize as free */
@ -387,7 +384,7 @@ check_misc_flags:
*/
p->p_misc_flags &= ~MF_CONTEXT_SET;
-assert(!(p->p_misc_flags & MF_FULLVM) || p->p_seg.p_cr3 != 0);
+assert(p->p_seg.p_cr3 != 0);
#ifdef CONFIG_SMP
if (p->p_misc_flags & MF_FLUSH_TLB) {
if (tlb_must_refresh)
@ -816,7 +813,7 @@ int mini_send(
assert(!(dst_ptr->p_misc_flags & MF_DELIVERMSG));
if (!(flags & FROM_KERNEL)) {
-if(copy_msg_from_user(caller_ptr, m_ptr, &dst_ptr->p_delivermsg))
+if(copy_msg_from_user(m_ptr, &dst_ptr->p_delivermsg))
return EFAULT;
} else {
dst_ptr->p_delivermsg = *m_ptr;
@ -851,7 +848,7 @@ int mini_send(
/* Destination is not waiting. Block and dequeue caller. */
if (!(flags & FROM_KERNEL)) {
-if(copy_msg_from_user(caller_ptr, m_ptr, &caller_ptr->p_sendmsg))
+if(copy_msg_from_user(m_ptr, &caller_ptr->p_sendmsg))
return EFAULT;
} else {
caller_ptr->p_sendmsg = *m_ptr;


@ -54,8 +54,6 @@ struct proc {
unsigned long preempted;
} p_accounting;
-struct mem_map p_memmap[NR_LOCAL_SEGS]; /* memory map (T, D, S) */
clock_t p_user_time; /* user time in ticks */
clock_t p_sys_time; /* sys time in ticks */
@ -74,7 +72,7 @@ struct proc {
sigset_t p_pending; /* bit map for pending kernel signals */
-char p_name[P_NAME_LEN]; /* name of the process, including \0 */
+char p_name[PROC_NAME_LEN]; /* name of the process, including \0 */
endpoint_t p_endpoint; /* endpoint number, generation-aware */
@ -237,7 +235,6 @@ struct proc {
We need to resume the kernel call execution
now
*/
-#define MF_FULLVM 0x020
#define MF_DELIVERMSG 0x040 /* Copy message for him before running */
#define MF_SIG_DELAY 0x080 /* Send signal when no longer sending */
#define MF_SC_ACTIVE 0x100 /* Syscall tracing: in a system call now */


@ -82,7 +82,7 @@ static void sprof_save_proc(struct proc * p)
s = (struct sprof_proc *) (sprof_sample_buffer + sprof_info.mem_used);
s->proc = p->p_endpoint;
-memcpy(&s->name, p->p_name, P_NAME_LEN);
+strcpy(s->name, p->p_name);
sprof_info.mem_used += sizeof(struct sprof_proc);
}


@ -38,7 +38,10 @@ void fpu_sigcontext(struct proc *, struct sigframe *fr, struct
sigcontext *sc);
/* main.c */
-int main(void);
+#ifndef UNPAGED
+#define kmain __k_unpaged_kmain
+#endif
+void kmain(kinfo_t *cbi);
void prepare_shutdown(int how);
__dead void minix_shutdown(struct timer *tp);
void bsp_finish_booting(void);
@ -55,7 +58,7 @@ int mini_notify(const struct proc *src, endpoint_t dst);
void enqueue(struct proc *rp);
void dequeue(struct proc *rp);
void switch_to_user(void);
-void arch_proc_init(int nr, struct proc *rp);
+void arch_proc_reset(struct proc *rp);
struct proc * arch_finish_switch_to_user(void);
struct proc *endpoint_lookup(endpoint_t ep);
#if DEBUG_ENABLE_IPC_WARNINGS
@ -73,8 +76,7 @@ int try_deliver_senda(struct proc *caller_ptr, asynmsg_t *table, size_t
size);
/* start.c */
-void cstart(u16_t cs, u16_t ds, u16_t mds, u16_t parmoff, u16_t
-parmsize);
+void cstart();
char *env_get(const char *key);
/* system.c */
@ -87,17 +89,11 @@ void cause_sig(proc_nr_t proc_nr, int sig_nr);
void sig_delay_done(struct proc *rp);
void kernel_call(message *m_user, struct proc * caller);
void system_init(void);
-#define numap_local(proc_nr, vir_addr, bytes) \
-umap_local(proc_addr(proc_nr), D, (vir_addr), (bytes))
void clear_endpoint(struct proc *rc);
void clear_ipc_refs(struct proc *rc, int caller_ret);
void kernel_call_resume(struct proc *p);
int sched_proc(struct proc *rp, int priority, int quantum, int cpu);
-/* system/do_newmap.c */
-int newmap(struct proc * caller, struct proc *rp, struct mem_map
-*map_ptr);
/* system/do_vtimer.c */
void vtimer_check(struct proc *rp);
@ -152,7 +148,8 @@ void stop_profile_clock(void);
#endif
/* functions defined in architecture-dependent files. */
-void prot_init(void);
+void prot_init();
+void arch_post_init();
phys_bytes phys_copy(phys_bytes source, phys_bytes dest, phys_bytes
count);
void phys_copy_fault(void);
@ -169,18 +166,15 @@ int data_copy(endpoint_t from, vir_bytes from_addr, endpoint_t to,
vir_bytes to_addr, size_t bytes);
int data_copy_vmcheck(struct proc *, endpoint_t from, vir_bytes
from_addr, endpoint_t to, vir_bytes to_addr, size_t bytes);
void alloc_segments(struct proc *rp);
void vm_stop(void);
phys_bytes umap_local(register struct proc *rp, int seg, vir_bytes
vir_addr, vir_bytes bytes);
phys_bytes umap_virtual(struct proc* rp, int seg, vir_bytes vir_addr,
vir_bytes bytes);
phys_bytes seg2phys(u16_t);
int vm_memset(endpoint_t who,
phys_bytes source, u8_t pattern, phys_bytes count);
-int intr_init(int, int);
+int intr_init(int);
void halt_cpu(void);
void arch_init(void);
void arch_boot_proc(struct boot_image *b, struct proc *p);
void cpu_identify(void);
/* arch dependent FPU initialization per CPU */
void fpu_init(void);
@ -195,8 +189,9 @@ void arch_stop_profile_clock(void);
void arch_ack_profile_clock(void);
void do_ser_debug(void);
int arch_get_params(char *parm, int max);
int arch_set_params(char *parm, int max);
void arch_pre_exec(struct proc *pr, u32_t, u32_t);
void memory_init(void);
void mem_clear_mapcache(void);
void arch_proc_init(struct proc *pr, u32_t, u32_t, char *);
int arch_do_vmctl(message *m_ptr, struct proc *p);
int vm_contiguous(const struct proc *targetproc, vir_bytes vir_buf,
size_t count);
@ -210,14 +205,12 @@ void arch_do_syscall(struct proc *proc);
int arch_phys_map(int index, phys_bytes *addr, phys_bytes *len, int
*flags);
int arch_phys_map_reply(int index, vir_bytes addr);
-int arch_enable_paging(struct proc * caller, const message * m_ptr);
+int arch_enable_paging(struct proc * caller);
int vm_check_range(struct proc *caller,
struct proc *target, vir_bytes vir_addr, size_t bytes);
-int copy_msg_from_user(struct proc * p, message * user_mbuf, message *
-dst);
-int copy_msg_to_user(struct proc * p, message * src, message *
-user_mbuf);
+int copy_msg_from_user(message * user_mbuf, message * dst);
+int copy_msg_to_user(message * src, message * user_mbuf);
void switch_address_space(struct proc * p);
void release_address_space(struct proc *pr);


@ -1,151 +0,0 @@
/* First C file used by the kernel. */
#include "kernel.h"
#include "proc.h"
#include <stdlib.h>
#include <string.h>
#include "proto.h"
#ifdef USE_WATCHDOG
#include "watchdog.h"
#endif
/*===========================================================================*
* cstart *
*===========================================================================*/
void cstart(
u16_t cs, /* kernel code segment */
u16_t ds, /* kernel data segment */
u16_t mds, /* monitor data segment */
u16_t parmoff, /* boot parameters offset */
u16_t parmsize /* boot parameters length */
)
{
/* Perform system initializations prior to calling main(). Most settings are
* determined with help of the environment strings passed by MINIX' loader.
*/
register char *value; /* value in key=value pair */
extern int etext, end;
int h;
/* Record where the kernel and the monitor are. */
kinfo.code_base = seg2phys(cs);
kinfo.code_size = (phys_bytes) &etext; /* size of code segment */
kinfo.data_base = seg2phys(ds);
kinfo.data_size = (phys_bytes) &end; /* size of data segment */
/* protection initialization */
prot_init();
/* Copy the boot parameters to the local buffer. */
arch_get_params(params_buffer, sizeof(params_buffer));
/* determine verbosity */
if ((value = env_get(VERBOSEBOOTVARNAME)))
verboseboot = atoi(value);
/* Get clock tick frequency. */
value = env_get("hz");
if(value)
system_hz = atoi(value);
if(!value || system_hz < 2 || system_hz > 50000) /* sanity check */
system_hz = DEFAULT_HZ;
#ifdef DEBUG_SERIAL
/* Initialize serial debugging */
value = env_get(SERVARNAME);
if(value && atoi(value) == 0) {
do_serial_debug=1;
value = env_get(SERBAUDVARNAME);
if (value) serial_debug_baud = atoi(value);
}
#endif
DEBUGEXTRA(("cstart\n"));
/* Record miscellaneous information for user-space servers. */
kinfo.nr_procs = NR_PROCS;
kinfo.nr_tasks = NR_TASKS;
strncpy(kinfo.release, OS_RELEASE, sizeof(kinfo.release));
kinfo.release[sizeof(kinfo.release)-1] = '\0';
strncpy(kinfo.version, OS_VERSION, sizeof(kinfo.version));
kinfo.version[sizeof(kinfo.version)-1] = '\0';
kinfo.proc_addr = (vir_bytes) proc;
/* Load average data initialization. */
kloadinfo.proc_last_slot = 0;
for(h = 0; h < _LOAD_HISTORY; h++)
kloadinfo.proc_load_history[h] = 0;
#ifdef USE_APIC
value = env_get("no_apic");
if(value)
config_no_apic = atoi(value);
else
config_no_apic = 1;
value = env_get("apic_timer_x");
if(value)
config_apic_timer_x = atoi(value);
else
config_apic_timer_x = 1;
#endif
#ifdef USE_WATCHDOG
value = env_get("watchdog");
if (value)
watchdog_enabled = atoi(value);
#endif
#ifdef CONFIG_SMP
if (config_no_apic)
config_no_smp = 1;
value = env_get("no_smp");
if(value)
config_no_smp = atoi(value);
else
config_no_smp = 0;
#endif
/* Return to assembler code to switch to protected mode (if 286),
* reload selectors and call main().
*/
DEBUGEXTRA(("intr_init(%d, 0)\n", INTS_MINIX));
intr_init(INTS_MINIX, 0);
}
/*===========================================================================*
* get_value *
*===========================================================================*/
char *get_value(
const char *params, /* boot monitor parameters */
const char *name /* key to look up */
)
{
/* Get environment value - kernel version of getenv to avoid setting up the
* usual environment array.
*/
register const char *namep;
register char *envp;
for (envp = (char *) params; *envp != 0;) {
for (namep = name; *namep != 0 && *namep == *envp; namep++, envp++)
;
if (*namep == '\0' && *envp == '=') return(envp + 1);
while (*envp++ != 0)
;
}
return(NULL);
}
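The get_value() routine above scans a flat block of NUL-terminated "key=value" strings rather than a conventional environment array. A standalone sketch of the same lookup, with illustrative names (not the kernel's symbols):

```c
#include <stddef.h>

/* Look up `name` in a flat block of NUL-terminated "key=value"
 * strings, terminated by an empty string -- the same layout the
 * kernel's get_value() walks. Returns a pointer to the value part,
 * or NULL if the key is absent.
 */
static const char *flat_env_get(const char *params, const char *name)
{
    const char *envp = params;

    while (*envp != '\0') {
        const char *namep = name;

        /* Compare the key character by character. */
        while (*namep != '\0' && *namep == *envp) {
            namep++;
            envp++;
        }
        if (*namep == '\0' && *envp == '=')
            return envp + 1;        /* matched "key=", value follows */

        /* Skip to the start of the next pair. */
        while (*envp++ != '\0')
            ;
    }
    return NULL;
}
```

Note that a key that is merely a prefix of another ("h" vs "hz") does not match, because the `=` must immediately follow the exhausted key.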
/*===========================================================================*
* env_get *
*===========================================================================*/
char *env_get(const char *name)
{
return get_value(params_buffer, name);
}


@ -81,8 +81,7 @@ static void kernel_call_finish(struct proc * caller, message *msg, int result)
#if DEBUG_IPC_HOOK
hook_ipc_msgkresult(msg, caller);
#endif
if (copy_msg_to_user(caller, msg,
(message *)caller->p_delivermsg_vir)) {
if (copy_msg_to_user(msg, (message *)caller->p_delivermsg_vir)) {
printf("WARNING wrong user pointer 0x%08x from "
"process %s / %d\n",
caller->p_delivermsg_vir,
@ -146,7 +145,7 @@ void kernel_call(message *m_user, struct proc * caller)
* into the kernel or was already set in switch_to_user() before we resume
* execution of an interrupted kernel call
*/
if (copy_msg_from_user(caller, m_user, &msg) == 0) {
if (copy_msg_from_user(m_user, &msg) == 0) {
msg.m_source = caller->p_endpoint;
result = kernel_call_dispatch(caller, &msg);
}
@ -216,7 +215,6 @@ void system_init(void)
map(SYS_VDEVIO, do_vdevio); /* vector with devio requests */
/* Memory management. */
map(SYS_NEWMAP, do_newmap); /* set up a process memory map */
map(SYS_MEMSET, do_memset); /* write char to memory area */
map(SYS_VMCTL, do_vmctl); /* various VM process settings */
@ -366,13 +364,16 @@ int send_sig(endpoint_t ep, int sig_nr)
* send a notification with source SYSTEM.
*/
register struct proc *rp;
struct priv *priv;
int proc_nr;
if(!isokendpt(ep, &proc_nr) || isemptyn(proc_nr))
return EINVAL;
rp = proc_addr(proc_nr);
sigaddset(&priv(rp)->s_sig_pending, sig_nr);
priv = priv(rp);
if(!priv) return ENOENT;
sigaddset(&priv->s_sig_pending, sig_nr);
mini_notify(proc_addr(SYSTEM), rp->p_endpoint);
return OK;


@ -46,11 +46,6 @@ int do_fork(struct proc * caller, message *m_ptr);
#define do_fork NULL
#endif
int do_newmap(struct proc * caller, message *m_ptr);
#if ! USE_NEWMAP
#define do_newmap NULL
#endif
int do_clear(struct proc * caller, message *m_ptr);
#if ! USE_CLEAR
#define do_clear NULL


@ -5,7 +5,6 @@
SRCS+= \
do_fork.c \
do_exec.c \
do_newmap.c \
do_clear.c \
do_exit.c \
do_trace.c \


@ -35,10 +35,6 @@ int do_abort(struct proc * caller, message * m_ptr)
return p;
}
paramsbuffer[len] = '\0';
/* Parameters seem ok, copy them and prepare shutting down. */
if((p = arch_set_params(paramsbuffer, len+1)) != OK)
return p;
}
/* Now prepare to shutdown MINIX. */


@ -26,7 +26,6 @@ int do_copy(struct proc * caller, message * m_ptr)
struct vir_addr vir_addr[2]; /* virtual source and destination address */
phys_bytes bytes; /* number of bytes to copy */
int i;
endpoint_t pe;
#if 0
if (caller->p_endpoint != PM_PROC_NR && caller->p_endpoint != VFS_PROC_NR &&
@ -47,11 +46,10 @@ int do_copy(struct proc * caller, message * m_ptr)
#endif
/* Dismember the command message. */
pe = vir_addr[_SRC_].proc_nr_e = m_ptr->CP_SRC_ENDPT;
vir_addr[_SRC_].segment = (pe == NONE ? PHYS_SEG : D);
vir_addr[_SRC_].proc_nr_e = m_ptr->CP_SRC_ENDPT;
vir_addr[_DST_].proc_nr_e = m_ptr->CP_DST_ENDPT;
vir_addr[_SRC_].offset = (vir_bytes) m_ptr->CP_SRC_ADDR;
pe = vir_addr[_DST_].proc_nr_e = m_ptr->CP_DST_ENDPT;
vir_addr[_DST_].segment = (pe == NONE ? PHYS_SEG : D);
vir_addr[_DST_].offset = (vir_bytes) m_ptr->CP_DST_ADDR;
bytes = (phys_bytes) m_ptr->CP_NR_BYTES;
@ -63,10 +61,9 @@ int do_copy(struct proc * caller, message * m_ptr)
/* Check if process number was given implicitly with SELF and is valid. */
if (vir_addr[i].proc_nr_e == SELF)
vir_addr[i].proc_nr_e = caller->p_endpoint;
if (vir_addr[i].segment != PHYS_SEG) {
if (vir_addr[i].proc_nr_e != NONE) {
if(! isokendpt(vir_addr[i].proc_nr_e, &p)) {
printf("do_copy: %d: seg 0x%x, %d not ok endpoint\n",
i, vir_addr[i].segment, vir_addr[i].proc_nr_e);
printf("do_copy: %d: %d not ok endpoint\n", i, vir_addr[i].proc_nr_e);
return(EINVAL);
}
}


@ -21,6 +21,7 @@ int do_exec(struct proc * caller, message * m_ptr)
/* Handle sys_exec(). A process has done a successful EXEC. Patch it up. */
register struct proc *rp;
int proc_nr;
char name[PROC_NAME_LEN];
if(!isokendpt(m_ptr->PR_ENDPT, &proc_nr))
return EINVAL;
@ -33,11 +34,14 @@ int do_exec(struct proc * caller, message * m_ptr)
/* Save command name for debugging, ps(1) output, etc. */
if(data_copy(caller->p_endpoint, (vir_bytes) m_ptr->PR_NAME_PTR,
KERNEL, (vir_bytes) rp->p_name, (phys_bytes) P_NAME_LEN - 1) != OK)
strncpy(rp->p_name, "<unset>", P_NAME_LEN);
KERNEL, (vir_bytes) name,
(phys_bytes) sizeof(name) - 1) != OK)
strncpy(name, "<unset>", PROC_NAME_LEN);
/* Do architecture-specific exec() stuff. */
arch_pre_exec(rp, (u32_t) m_ptr->PR_IP_PTR, (u32_t) m_ptr->PR_STACK_PTR);
name[sizeof(name)-1] = '\0';
/* Set process state. */
arch_proc_init(rp, (u32_t) m_ptr->PR_IP_PTR, (u32_t) m_ptr->PR_STACK_PTR, name);
/* No reply to EXEC call */
RTS_UNSET(rp, RTS_RECEIVING);


@ -26,13 +26,11 @@ int do_fork(struct proc * caller, message * m_ptr)
{
/* Handle sys_fork(). PR_ENDPT has forked. The child is PR_SLOT. */
#if (_MINIX_CHIP == _CHIP_INTEL)
reg_t old_ldt_sel;
char *old_fpu_save_area_p;
#endif
register struct proc *rpc; /* child process pointer */
struct proc *rpp; /* parent process pointer */
struct mem_map *map_ptr; /* virtual address of map inside caller (PM) */
int gen, r;
int gen;
int p_proc;
int namelen;
@ -51,19 +49,15 @@ int do_fork(struct proc * caller, message * m_ptr)
return EINVAL;
}
map_ptr= (struct mem_map *) m_ptr->PR_MEM_PTR;
/* make sure that the FPU context is saved in parent before copy */
save_fpu(rpp);
/* Copy parent 'proc' struct to child. And reinitialize some fields. */
gen = _ENDPOINT_G(rpc->p_endpoint);
#if (_MINIX_CHIP == _CHIP_INTEL)
old_ldt_sel = rpc->p_seg.p_ldt_sel; /* backup local descriptors */
old_fpu_save_area_p = rpc->p_seg.fpu_state;
#endif
*rpc = *rpp; /* copy 'proc' struct */
#if (_MINIX_CHIP == _CHIP_INTEL)
rpc->p_seg.p_ldt_sel = old_ldt_sel; /* restore descriptors */
rpc->p_seg.fpu_state = old_fpu_save_area_p;
if(proc_used_fpu(rpp))
memcpy(rpc->p_seg.fpu_state, rpp->p_seg.fpu_state, FPU_XFP_SIZE);
@ -111,9 +105,6 @@ int do_fork(struct proc * caller, message * m_ptr)
m_ptr->PR_ENDPT = rpc->p_endpoint;
m_ptr->PR_FORK_MSGADDR = (char *) rpp->p_delivermsg_vir;
/* Install new map */
r = newmap(caller, rpc, map_ptr);
/* Don't schedule process in VM mode until it has a new pagetable. */
if(m_ptr->PR_FORK_FLAGS & PFF_VMINHIBIT) {
RTS_SET(rpc, RTS_VMINHIBIT);
@ -128,7 +119,7 @@ int do_fork(struct proc * caller, message * m_ptr)
rpc->p_seg.p_cr3 = 0;
rpc->p_seg.p_cr3_v = NULL;
return r;
return OK;
}
#endif /* USE_FORK */


@ -133,8 +133,8 @@ int do_getinfo(struct proc * caller, message * m_ptr)
return OK;
}
case GET_MONPARAMS: {
src_vir = (vir_bytes) params_buffer;
length = sizeof(params_buffer);
src_vir = (vir_bytes) kinfo.param_buf;
length = sizeof(kinfo.param_buf);
break;
}
case GET_RANDOMNESS: {


@ -1,50 +0,0 @@
/* The kernel call implemented in this file:
* m_type: SYS_NEWMAP
*
* The parameters for this kernel call are:
* m1_i1: PR_ENDPT (install new map for this process)
* m1_p1: PR_MEM_PTR (pointer to the new memory map)
*/
#include "kernel/system.h"
#include <minix/endpoint.h>
#if USE_NEWMAP
/*===========================================================================*
* do_newmap *
*===========================================================================*/
int do_newmap(struct proc * caller, message * m_ptr)
{
/* Handle sys_newmap(). Fetch the memory map. */
struct proc *rp; /* process whose map is to be loaded */
struct mem_map *map_ptr; /* virtual address of map inside caller */
int proc_nr;
map_ptr = (struct mem_map *) m_ptr->PR_MEM_PTR;
if (! isokendpt(m_ptr->PR_ENDPT, &proc_nr)) return(EINVAL);
if (iskerneln(proc_nr)) return(EPERM);
rp = proc_addr(proc_nr);
return newmap(caller, rp, map_ptr);
}
/*===========================================================================*
* newmap *
*===========================================================================*/
int newmap(struct proc *caller, struct proc *rp, struct mem_map *map_ptr)
{
int r;
/* Fetch the memory map. */
if((r=data_copy(caller->p_endpoint, (vir_bytes) map_ptr,
KERNEL, (vir_bytes) rp->p_memmap, sizeof(rp->p_memmap))) != OK) {
printf("newmap: data_copy failed! (%d)\n", r);
return r;
}
alloc_segments(rp);
return(OK);
}
#endif /* USE_NEWMAP */


@ -13,6 +13,7 @@
* VSCP_VEC_SIZE number of significant elements in vector
*/
#include <assert.h>
#include <minix/type.h>
#include <minix/safecopies.h>
@ -245,6 +246,11 @@ int access; /* CPF_READ for a copy from granter to grantee, CPF_WRITE
vir_bytes size;
#endif
if(granter == NONE || grantee == NONE) {
printf("safecopy: nonsense processes\n");
return EFAULT;
}
/* See if there is a reasonable grant table. */
if(!(granter_p = endpoint_lookup(granter))) return EINVAL;
if(!HASGRANTTABLE(granter_p)) {
@ -277,8 +283,6 @@ int access; /* CPF_READ for a copy from granter to grantee, CPF_WRITE
granter = new_granter;
/* Now it's a regular copy. */
v_src.segment = D;
v_dst.segment = D;
v_src.proc_nr_e = *src;
v_dst.proc_nr_e = *dst;
@ -373,8 +377,8 @@ int do_vsafecopy(struct proc * caller, message * m_ptr)
/* Set vector copy parameters. */
src.proc_nr_e = caller->p_endpoint;
assert(src.proc_nr_e != NONE);
src.offset = (vir_bytes) m_ptr->VSCP_VEC_ADDR;
src.segment = dst.segment = D;
dst.proc_nr_e = KERNEL;
dst.offset = (vir_bytes) vec;


@ -120,21 +120,12 @@ int map_invoke_vm(struct proc * caller,
size_t size, int flag)
{
struct proc *src, *dst;
phys_bytes lin_src, lin_dst;
src = endpoint_lookup(end_s);
dst = endpoint_lookup(end_d);
lin_src = umap_local(src, D, off_s, size);
lin_dst = umap_local(dst, D, off_d, size);
if(lin_src == 0 || lin_dst == 0) {
printf("map_invoke_vm: error in umap_local.\n");
return EINVAL;
}
/* Make sure the linear addresses are both page aligned. */
if(lin_src % CLICK_SIZE != 0
|| lin_dst % CLICK_SIZE != 0) {
if(off_s % CLICK_SIZE != 0 || off_d % CLICK_SIZE != 0) {
printf("map_invoke_vm: linear addresses not page aligned.\n");
return EINVAL;
}
@ -149,9 +140,9 @@ int map_invoke_vm(struct proc * caller,
/* Map to the destination. */
caller->p_vmrequest.req_type = req_type;
caller->p_vmrequest.target = end_d; /* destination proc */
caller->p_vmrequest.params.map.vir_d = lin_dst; /* destination addr */
caller->p_vmrequest.params.map.vir_d = off_d; /* destination addr */
caller->p_vmrequest.params.map.ep_s = end_s; /* source process */
caller->p_vmrequest.params.map.vir_s = lin_src; /* source address */
caller->p_vmrequest.params.map.vir_s = off_s; /* source address */
caller->p_vmrequest.params.map.length = (vir_bytes) size;
caller->p_vmrequest.params.map.writeflag = flag;


@ -51,15 +51,13 @@ int do_trace(struct proc * caller, message * m_ptr)
unsigned char ub;
int i;
#define COPYTOPROC(seg, addr, myaddr, length) { \
#define COPYTOPROC(addr, myaddr, length) { \
struct vir_addr fromaddr, toaddr; \
int r; \
fromaddr.proc_nr_e = KERNEL; \
toaddr.proc_nr_e = tr_proc_nr_e; \
fromaddr.offset = (myaddr); \
toaddr.offset = (addr); \
fromaddr.segment = D; \
toaddr.segment = (seg); \
if((r=virtual_copy_vmcheck(caller, &fromaddr, \
&toaddr, length)) != OK) { \
printf("Can't copy in sys_trace: %d\n", r);\
@ -67,15 +65,13 @@ int do_trace(struct proc * caller, message * m_ptr)
} \
}
#define COPYFROMPROC(seg, addr, myaddr, length) { \
#define COPYFROMPROC(addr, myaddr, length) { \
struct vir_addr fromaddr, toaddr; \
int r; \
fromaddr.proc_nr_e = tr_proc_nr_e; \
toaddr.proc_nr_e = KERNEL; \
fromaddr.offset = (addr); \
toaddr.offset = (myaddr); \
fromaddr.segment = (seg); \
toaddr.segment = D; \
if((r=virtual_copy_vmcheck(caller, &fromaddr, \
&toaddr, length)) != OK) { \
printf("Can't copy in sys_trace: %d\n", r);\
@ -96,12 +92,12 @@ int do_trace(struct proc * caller, message * m_ptr)
return(OK);
case T_GETINS: /* return value from instruction space */
COPYFROMPROC(T, tr_addr, (vir_bytes) &tr_data, sizeof(long));
COPYFROMPROC(tr_addr, (vir_bytes) &tr_data, sizeof(long));
m_ptr->CTL_DATA = tr_data;
break;
case T_GETDATA: /* return value from data space */
COPYFROMPROC(D, tr_addr, (vir_bytes) &tr_data, sizeof(long));
COPYFROMPROC(tr_addr, (vir_bytes) &tr_data, sizeof(long));
m_ptr->CTL_DATA= tr_data;
break;
@ -125,12 +121,12 @@ int do_trace(struct proc * caller, message * m_ptr)
break;
case T_SETINS: /* set value in instruction space */
COPYTOPROC(T, tr_addr, (vir_bytes) &tr_data, sizeof(long));
COPYTOPROC(tr_addr, (vir_bytes) &tr_data, sizeof(long));
m_ptr->CTL_DATA = 0;
break;
case T_SETDATA: /* set value in data space */
COPYTOPROC(D, tr_addr, (vir_bytes) &tr_data, sizeof(long));
COPYTOPROC(tr_addr, (vir_bytes) &tr_data, sizeof(long));
m_ptr->CTL_DATA = 0;
break;
@ -184,13 +180,13 @@ int do_trace(struct proc * caller, message * m_ptr)
break;
case T_READB_INS: /* get value from instruction space */
COPYFROMPROC(T, tr_addr, (vir_bytes) &ub, 1);
COPYFROMPROC(tr_addr, (vir_bytes) &ub, 1);
m_ptr->CTL_DATA = ub;
break;
case T_WRITEB_INS: /* set value in instruction space */
ub = (unsigned char) (tr_data & 0xff);
COPYTOPROC(T, tr_addr, (vir_bytes) &ub, 1);
COPYTOPROC(tr_addr, (vir_bytes) &ub, 1);
m_ptr->CTL_DATA = 0;
break;


@ -33,7 +33,6 @@ int do_umap_remote(struct proc * caller, message * m_ptr)
int endpt = (int) m_ptr->CP_SRC_ENDPT;
endpoint_t grantee = (endpoint_t) m_ptr->CP_DST_ENDPT;
int proc_nr, proc_nr_grantee;
int naughty = 0;
phys_bytes phys_addr = 0, lin_addr = 0;
struct proc *targetpr;
@ -57,11 +56,6 @@ int do_umap_remote(struct proc * caller, message * m_ptr)
/* See which mapping should be made. */
switch(seg_type) {
case LOCAL_SEG:
phys_addr = lin_addr = umap_local(targetpr, seg_index, offset, count);
if(!lin_addr) return EFAULT;
naughty = 1;
break;
case LOCAL_VM_SEG:
if(seg_index == MEM_GRANT) {
vir_bytes newoffset;
@ -84,11 +78,11 @@ int do_umap_remote(struct proc * caller, message * m_ptr)
/* New lookup. */
offset = newoffset;
targetpr = proc_addr(new_proc_nr);
seg_index = D;
seg_index = VIR_ADDR;
}
if(seg_index == T || seg_index == D || seg_index == S) {
phys_addr = lin_addr = umap_local(targetpr, seg_index, offset, count);
if(seg_index == VIR_ADDR) {
phys_addr = lin_addr = offset;
} else {
printf("SYSTEM: bogus seg type 0x%lx\n", seg_index);
return EFAULT;
@ -115,7 +109,7 @@ int do_umap_remote(struct proc * caller, message * m_ptr)
}
m_ptr->CP_DST_ADDR = phys_addr;
if(naughty || phys_addr == 0) {
if(phys_addr == 0) {
printf("kernel: umap 0x%x done by %d / %s, pc 0x%lx, 0x%lx -> 0x%lx\n",
seg_type, caller->p_endpoint, caller->p_name,
caller->p_reg.pc, offset, phys_addr);


@ -111,11 +111,6 @@ int do_update(struct proc * caller, message * m_ptr)
/* Swap global process slot addresses. */
swap_proc_slot_pointer(get_cpulocal_var_ptr(ptproc), src_rp, dst_rp);
/* Fix segments. */
alloc_segments(src_rp);
alloc_segments(dst_rp);
prot_init();
#if DEBUG
printf("do_update: updated %d (%s, %d, %d) into %d (%s, %d, %d)\n",
src_rp->p_endpoint, src_rp->p_name, src_rp->p_nr, priv(src_rp)->s_proc_nr,


@ -120,13 +120,6 @@ int do_vmctl(struct proc * caller, message * m_ptr)
RTS_UNSET(p, RTS_VMREQUEST);
return OK;
case VMCTL_ENABLE_PAGING:
if(vm_running)
panic("do_vmctl: paging already enabled");
if (arch_enable_paging(caller, m_ptr) != OK)
panic("do_vmctl: paging enabling failed");
return OK;
case VMCTL_KERN_PHYSMAP:
{
int i = m_ptr->SVMCTL_VALUE;
@ -177,6 +170,10 @@ int do_vmctl(struct proc * caller, message * m_ptr)
bits_fill(p->p_stale_tlb, CONFIG_MAX_CPUS);
#endif
return OK;
case VMCTL_CLEARMAPCACHE:
/* VM says: forget about old mappings we have cached. */
mem_clear_mapcache();
return OK;
}
/* Try architecture-specific vmctls. */


@ -29,7 +29,7 @@ int do_vumap(struct proc *caller, message *m_ptr)
struct proc *procp;
struct vumap_vir vvec[MAPVEC_NR];
struct vumap_phys pvec[MAPVEC_NR];
vir_bytes vaddr, paddr, vir_addr, lin_addr;
vir_bytes vaddr, paddr, vir_addr;
phys_bytes phys_addr;
int i, r, proc_nr, vcount, pcount, pmax, access;
size_t size, chunk, offset;
@ -89,13 +89,9 @@ int do_vumap(struct proc *caller, message *m_ptr)
okendpt(granter, &proc_nr);
procp = proc_addr(proc_nr);
lin_addr = umap_local(procp, D, vir_addr, size);
if (!lin_addr)
return EFAULT;
/* Each virtual range is made up of one or more physical ranges. */
while (size > 0 && pcount < pmax) {
chunk = vm_lookup_range(procp, lin_addr, &phys_addr, size);
chunk = vm_lookup_range(procp, vir_addr, &phys_addr, size);
if (!chunk) {
/* Try to get the memory allocated, unless the memory
@ -107,14 +103,14 @@ int do_vumap(struct proc *caller, message *m_ptr)
/* This call may suspend the current call, or return an
* error for a previous invocation.
*/
return vm_check_range(caller, procp, lin_addr, size);
return vm_check_range(caller, procp, vir_addr, size);
}
pvec[pcount].vp_addr = phys_addr;
pvec[pcount].vp_size = chunk;
pcount++;
lin_addr += chunk;
vir_addr += chunk;
size -= chunk;
}


@ -34,13 +34,6 @@
#include "ipc.h"
#include <minix/com.h>
/* Define boot process flags. */
#define BVM_F (PROC_FULLVM) /* boot processes with VM */
#define OVM_F (PERF_SYS_CORE_FULLVM ? PROC_FULLVM : 0) /* critical boot
* processes with
* optional VM.
*/
/* The system image table lists all programs that are part of the boot image.
* The order of the entries here MUST agree with the order of the programs
* in the boot image and all kernel tasks must come first.
@ -51,33 +44,26 @@
* to prioritize ping messages periodically delivered to system processes.
*/
struct boot_image image[] = {
struct boot_image image[NR_BOOT_PROCS] = {
/* process nr, flags, stack size, name */
{ASYNCM, 0, 0, "asyncm"},
{IDLE, 0, 0, "idle" },
{CLOCK, 0, 0, "clock" },
{SYSTEM, 0, 0, "system"},
{HARDWARE, 0, 0, "kernel"},
{ASYNCM, "asyncm"},
{IDLE, "idle" },
{CLOCK, "clock" },
{SYSTEM, "system"},
{HARDWARE, "kernel"},
{DS_PROC_NR, BVM_F, 16, "ds" },
{RS_PROC_NR, 0, 8125, "rs" },
{DS_PROC_NR, "ds" },
{RS_PROC_NR, "rs" },
{PM_PROC_NR, OVM_F, 32, "pm" },
{SCHED_PROC_NR,OVM_F, 32, "sched" },
{VFS_PROC_NR, BVM_F, 16, "vfs" },
{MEM_PROC_NR, BVM_F, 8, "memory"},
{LOG_PROC_NR, BVM_F, 32, "log" },
{TTY_PROC_NR, BVM_F, 16, "tty" },
{MFS_PROC_NR, BVM_F, 128, "mfs" },
{VM_PROC_NR, 0, 128, "vm" },
{PFS_PROC_NR, BVM_F, 128, "pfs" },
{INIT_PROC_NR, BVM_F, 64, "init" },
{PM_PROC_NR, "pm" },
{SCHED_PROC_NR, "sched" },
{VFS_PROC_NR, "vfs" },
{MEM_PROC_NR, "memory"},
{LOG_PROC_NR, "log" },
{TTY_PROC_NR, "tty" },
{MFS_PROC_NR, "mfs" },
{VM_PROC_NR, "vm" },
{PFS_PROC_NR, "pfs" },
{INIT_PROC_NR, "init" },
};
/* Verify the size of the system image table at compile time.
* If a problem is detected, the size of the 'dummy' array will be negative,
* causing a compile time error. Note that no space is actually allocated
* because 'dummy' is declared extern.
*/
extern int dummy[(NR_BOOT_PROCS==sizeof(image)/
sizeof(struct boot_image))?1:-1];
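The removed extern-dummy declaration is a classic pre-C11 compile-time assertion: if the condition is false, the array is given a negative size and the compiler rejects the translation unit, while `extern` ensures no storage is ever allocated. A minimal self-contained sketch of the idiom (names here are illustrative):

```c
/* Pre-C11 compile-time assertion: if `cond` is false the array size
 * becomes -1, which the compiler rejects. `extern` means the array is
 * only declared, never defined, so no space is actually allocated.
 */
#define CT_ASSERT(cond) extern int ct_assert_dummy[(cond) ? 1 : -1]

enum { NR_ENTRIES = 3 };
static const char *table[] = { "a", "b", "c" };

/* Fails to compile if the table and the count ever drift apart. */
CT_ASSERT(sizeof(table) / sizeof(table[0]) == NR_ENTRIES);
```

C11 replaced this trick with `_Static_assert`, but the kernel here targets pre-C11 compilers.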


@ -3,6 +3,7 @@
#include <minix/com.h>
#include <machine/interrupt.h>
#include <machine/multiboot.h>
/* Process table and system property related types. */
typedef int proc_nr_t; /* process table entry number */
@ -11,26 +12,6 @@ typedef struct { /* bitmap for system indexes */
bitchunk_t chunk[BITMAP_CHUNKS(NR_SYS_PROCS)];
} sys_map_t;
struct boot_image_memmap {
phys_bytes text_vaddr; /* Virtual start address of text */
phys_bytes text_paddr; /* Physical start address of text */
phys_bytes text_bytes; /* Text segment's size (bytes) */
phys_bytes data_vaddr; /* Virtual start address of data */
phys_bytes data_paddr; /* Physical start address of data */
phys_bytes data_bytes; /* Data segment's size (bytes) */
phys_bytes stack_bytes; /* Size of stack set aside (bytes) */
phys_bytes entry; /* Entry point of executable */
};
struct boot_image {
proc_nr_t proc_nr; /* process number to use */
int flags; /* process flags */
int stack_kbytes; /* stack size (in KB) */
char proc_name[P_NAME_LEN]; /* name in process table */
endpoint_t endpoint; /* endpoint number when started */
struct boot_image_memmap memmap; /* memory map info for boot image */
};
typedef unsigned long irq_policy_t;
typedef unsigned long irq_id_t;


@ -24,10 +24,10 @@ void panic(const char *fmt, ...)
{
va_list arg;
/* The system has run aground of a fatal kernel error. Terminate execution. */
if (minix_panicing == ARE_PANICING) {
if (kinfo.minix_panicing == ARE_PANICING) {
reset();
}
minix_panicing = ARE_PANICING;
kinfo.minix_panicing = ARE_PANICING;
if (fmt != NULL) {
printf("kernel panic: ");
va_start(arg, fmt);
@ -38,8 +38,12 @@ void panic(const char *fmt, ...)
printf("kernel on CPU %d: ", cpuid);
util_stacktrace();
printf("current process : ");
proc_stacktrace(get_cpulocal_var(proc_ptr));
#if 0
if(get_cpulocal_var(proc_ptr)) {
printf("current process : ");
proc_stacktrace(get_cpulocal_var(proc_ptr));
}
#endif
/* Abort MINIX. */
minix_shutdown(NULL);
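The `kinfo.minix_panicing == ARE_PANICING` check above is a re-entrancy guard: if the panic path itself faults (for example inside printf), the kernel resets instead of recursing forever. A minimal userland sketch of the same guard, with stand-in names:

```c
#include <stdio.h>
#include <stdlib.h>

static int panicking;   /* stand-in for kinfo.minix_panicing */

/* Re-entrancy guard in the style of the kernel's panic(): a second
 * fault while already panicking stops hard instead of recursing.
 */
static void panic_guarded(const char *msg)
{
    if (panicking)
        abort();        /* already panicking: bail out immediately */

    panicking = 1;
    fprintf(stderr, "panic: %s\n", msg);
    /* ... stacktrace and shutdown would follow here ... */
}
```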
@ -55,29 +59,29 @@ int c; /* character to append */
* to the output driver if an END_OF_KMESS is encountered.
*/
if (c != END_OF_KMESS) {
static int blpos = 0;
int maxblpos = sizeof(kmess_buf) - 2;
int maxblpos = sizeof(kmess.kmess_buf) - 2;
#ifdef DEBUG_SERIAL
if (do_serial_debug) {
if (kinfo.do_serial_debug) {
if(c == '\n')
ser_putc('\r');
ser_putc(c);
}
#endif
kmess.km_buf[kmess.km_next] = c; /* put normal char in buffer */
kmess_buf[blpos] = c;
kmess.kmess_buf[kmess.blpos] = c;
if (kmess.km_size < sizeof(kmess.km_buf))
kmess.km_size += 1;
kmess.km_next = (kmess.km_next + 1) % _KMESS_BUF_SIZE;
if(blpos == maxblpos) {
memmove(kmess_buf, kmess_buf+1, sizeof(kmess_buf)-1);
} else blpos++;
if(kmess.blpos == maxblpos) {
memmove(kmess.kmess_buf,
kmess.kmess_buf+1, sizeof(kmess.kmess_buf)-1);
} else kmess.blpos++;
} else {
int p;
endpoint_t outprocs[] = OUTPUT_PROCS_ARRAY;
if(!(minix_panicing || do_serial_debug)) {
if(!(kinfo.minix_panicing || kinfo.do_serial_debug)) {
for(p = 0; outprocs[p] != NONE; p++) {
if(isokprocn(outprocs[p]) && !isemptyn(outprocs[p])) {
if(isokprocn(outprocs[p])) {
send_sig(outprocs[p], SIGKMESS);
}
}
@ -86,15 +90,3 @@ int c; /* character to append */
return;
}
void cpu_print_freq(unsigned cpu)
{
u64_t freq;
freq = cpu_get_freq(cpu);
printf("CPU %d freq %lu MHz\n", cpu, div64u(freq, 1000000));
}
int is_fpu(void)
{
return get_cpulocal_var(fpu_presence);
}
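The kputc() changes above keep the tail of the kernel message log in a fixed buffer: a new character lands at the write position, and once the buffer is full everything is shifted left one byte so the newest character always fits. A self-contained sketch of that sliding "last N characters" scheme (illustrative names, with explicit NUL termination added for testability):

```c
#include <string.h>

#define LOGBUF_SIZE 16

/* Sliding tail buffer in the style of the kernel's kmess_buf handling:
 * once the write position hits the end, memmove() drops the oldest
 * character so the newest one always fits.
 */
struct logbuf {
    char buf[LOGBUF_SIZE];
    int pos;
};

static void logbuf_putc(struct logbuf *lb, char c)
{
    const int maxpos = (int)sizeof(lb->buf) - 2;  /* room for '\0' */

    lb->buf[lb->pos] = c;
    if (lb->pos == maxpos)
        memmove(lb->buf, lb->buf + 1, sizeof(lb->buf) - 1);
    else
        lb->pos++;
    lb->buf[lb->pos] = '\0';
}
```

The cost is O(buffer size) per character once full; for a small debug buffer written on the panic path, simplicity wins over a ring buffer's index arithmetic.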


@ -14,4 +14,6 @@
typedef struct _asynfd asynfd_t;
#undef IDLE
typedef enum state { IDLE, WAITING, PENDING } state_t;


@ -933,7 +933,7 @@ static int init_buffers(sub_dev_t *sub_dev_ptr)
}
sub_dev_ptr->DmaPtr = sub_dev_ptr->DmaBuf;
i = sys_umap(SELF, D,
i = sys_umap(SELF, VM_D,
(vir_bytes) sub_dev_ptr->DmaBuf,
(phys_bytes) sizeof(sub_dev_ptr->DmaBuf),
&(sub_dev_ptr->DmaPhys));


@ -14,6 +14,7 @@
#include <machine/elf.h>
#include <machine/vmparam.h>
#include <machine/memory.h>
#include <minix/syslib.h>
/* For verbose logging */
#define ELF_DEBUG 0
@ -59,16 +60,12 @@ static int elf_unpack(char *exec_hdr,
{
*hdr = (Elf_Ehdr *) exec_hdr;
if(!elf_sane(*hdr)) {
#if ELF_DEBUG
printf("elf_sane failed\n");
#endif
printf("elf_unpack: elf_sane failed\n");
return ENOEXEC;
}
*phdr = (Elf_Phdr *)(exec_hdr + (*hdr)->e_phoff);
if(!elf_ph_sane(*phdr)) {
#if ELF_DEBUG
printf("elf_ph_sane failed\n");
#endif
printf("elf_unpack: elf_ph_sane failed\n");
return ENOEXEC;
}
#if 0
@ -77,98 +74,6 @@ static int elf_unpack(char *exec_hdr,
return OK;
}
int read_header_elf(
char *exec_hdr, /* executable header */
int hdr_len, /* significant bytes in exec_hdr */
vir_bytes *text_vaddr, /* text virtual address */
phys_bytes *text_paddr, /* text physical address */
vir_bytes *text_filebytes, /* text segment size (in the file) */
vir_bytes *text_membytes, /* text segment size (in memory) */
vir_bytes *data_vaddr, /* data virtual address */
phys_bytes *data_paddr, /* data physical address */
vir_bytes *data_filebytes, /* data segment size (in the file) */
vir_bytes *data_membytes, /* data segment size (in memory) */
vir_bytes *pc, /* program entry point (initial PC) */
off_t *text_offset, /* file offset to text segment */
off_t *data_offset /* file offset to data segment */
)
{
Elf_Ehdr *hdr = NULL;
Elf_Phdr *phdr = NULL;
unsigned long seg_filebytes, seg_membytes;
int e, i = 0;
*text_vaddr = *text_paddr = 0;
*text_filebytes = *text_membytes = 0;
*data_vaddr = *data_paddr = 0;
*data_filebytes = *data_membytes = 0;
*pc = *text_offset = *data_offset = 0;
if((e=elf_unpack(exec_hdr, hdr_len, &hdr, &phdr)) != OK) {
#if ELF_DEBUG
printf("elf_unpack failed\n");
#endif
return e;
}
#if ELF_DEBUG
printf("Program header file offset (phoff): %ld\n", hdr->e_phoff);
printf("Section header file offset (shoff): %ld\n", hdr->e_shoff);
printf("Program header entry size (phentsize): %d\n", hdr->e_phentsize);
printf("Program header entry num (phnum): %d\n", hdr->e_phnum);
printf("Section header entry size (shentsize): %d\n", hdr->e_shentsize);
printf("Section header entry num (shnum): %d\n", hdr->e_shnum);
printf("Section name strings index (shstrndx): %d\n", hdr->e_shstrndx);
printf("Entry Point: 0x%lx\n", hdr->e_entry);
#endif
for (i = 0; i < hdr->e_phnum; i++) {
switch (phdr[i].p_type) {
case PT_LOAD:
if (phdr[i].p_memsz == 0)
break;
seg_filebytes = phdr[i].p_filesz;
seg_membytes = round_page(phdr[i].p_memsz + phdr[i].p_vaddr -
trunc_page(phdr[i].p_vaddr));
if (hdr->e_entry >= phdr[i].p_vaddr &&
hdr->e_entry < (phdr[i].p_vaddr + phdr[i].p_memsz)) {
*text_vaddr = phdr[i].p_vaddr;
*text_paddr = phdr[i].p_paddr;
*text_filebytes = seg_filebytes;
*text_membytes = seg_membytes;
*pc = (vir_bytes)hdr->e_entry;
*text_offset = phdr[i].p_offset;
} else {
*data_vaddr = phdr[i].p_vaddr;
*data_paddr = phdr[i].p_paddr;
*data_filebytes = seg_filebytes;
*data_membytes = seg_membytes;
*data_offset = phdr[i].p_offset;
}
break;
default:
break;
}
}
#if ELF_DEBUG
printf("Text vaddr: 0x%lx\n", *text_vaddr);
printf("Text paddr: 0x%lx\n", *text_paddr);
printf("Text filebytes: 0x%lx\n", *text_filebytes);
printf("Text membytes: 0x%lx\n", *text_membytes);
printf("Data vaddr: 0x%lx\n", *data_vaddr);
printf("Data paddr: 0x%lx\n", *data_paddr);
printf("Data filebyte: 0x%lx\n", *data_filebytes);
printf("Data membytes: 0x%lx\n", *data_membytes);
printf("PC: 0x%lx\n", *pc);
printf("Text offset: 0x%lx\n", *text_offset);
printf("Data offset: 0x%lx\n", *data_offset);
#endif
return OK;
}
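The removed read_header_elf() above decided which PT_LOAD segment was "text" by checking whether the ELF entry point lies inside it; every other non-empty loadable segment was treated as "data". A self-contained sketch of just that classification rule, using stand-in types rather than the real `<machine/elf.h>` structures:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal program-header view -- fields mirror the Elf_Phdr members
 * the removed code used; these are illustrative stand-ins.
 */
struct phdr_view {
    uint32_t  p_type;
    uintptr_t p_vaddr;
    size_t    p_memsz;
};

#define PT_LOAD_VIEW 1   /* stand-in for PT_LOAD */

/* Returns 1 if `ph` is the text segment (entry point falls inside it),
 * 0 if it is a data segment, -1 if it is not a loadable segment.
 */
static int classify_segment(const struct phdr_view *ph, uintptr_t entry)
{
    if (ph->p_type != PT_LOAD_VIEW || ph->p_memsz == 0)
        return -1;
    if (entry >= ph->p_vaddr && entry < ph->p_vaddr + ph->p_memsz)
        return 1;
    return 0;
}
```

This heuristic works for the typical two-segment layout the old loader assumed; the new libexec_load_elf() path maps each PT_LOAD segment on its own and no longer needs the text/data split.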
#define IS_ELF(ehdr) ((ehdr).e_ident[EI_MAG0] == ELFMAG0 && \
(ehdr).e_ident[EI_MAG1] == ELFMAG1 && \
(ehdr).e_ident[EI_MAG2] == ELFMAG2 && \
@ -243,8 +148,7 @@ int libexec_load_elf(struct exec_info *execi)
}
execi->stack_size = roundup(execi->stack_size, PAGE_SIZE);
execi->stack_high = VM_STACKTOP;
assert(!(VM_STACKTOP % PAGE_SIZE));
execi->stack_high = rounddown(execi->stack_high, PAGE_SIZE);
stacklow = execi->stack_high - execi->stack_size;
assert(execi->copymem);
@ -311,6 +215,10 @@ int libexec_load_elf(struct exec_info *execi)
return ENOMEM;
}
#if ELF_DEBUG
printf("stack mmapped 0x%lx-0x%lx\n", stacklow, stacklow+execi->stack_size);
#endif
/* record entry point and lowest load vaddr for caller */
execi->pc = hdr->e_entry;
execi->load_base = startv;


@ -49,6 +49,20 @@ int libexec_clear_sys_memset(struct exec_info *execi, off_t vaddr, size_t len)
return sys_memset(execi->proc_e, 0, vaddr, len);
}
int libexec_copy_memcpy(struct exec_info *execi,
off_t off, off_t vaddr, size_t len)
{
assert(off + len <= execi->hdr_len);
memcpy((char *) vaddr, (char *) execi->hdr + off, len);
return OK;
}
int libexec_clear_memset(struct exec_info *execi, off_t vaddr, size_t len)
{
memset((char *) vaddr, 0, len);
return OK;
}
void libexec_patch_ptr(char stack[ARG_MAX], vir_bytes base)
{
/* When doing an exec(name, argv, envp) call, the user builds up a stack


@ -44,12 +44,6 @@ struct exec_info {
int elf_has_interpreter(char *exec_hdr, int hdr_len, char *interp, int maxsz);
int elf_phdr(char *exec_hdr, int hdr_len, vir_bytes *phdr);
int read_header_elf(char *exec_hdr, int hdr_len,
vir_bytes *text_vaddr, phys_bytes *text_paddr,
vir_bytes *text_filebytes, vir_bytes *text_membytes,
vir_bytes *data_vaddr, phys_bytes *data_paddr,
vir_bytes *data_filebytes, vir_bytes *data_membytes,
vir_bytes *pc, off_t *text_offset, off_t *data_offset);
void libexec_patch_ptr(char stack[ARG_MAX], vir_bytes base);
int libexec_pm_newexec(endpoint_t proc_e, struct exec_info *execi);
@ -57,6 +51,8 @@ int libexec_pm_newexec(endpoint_t proc_e, struct exec_info *execi);
typedef int (*libexec_exec_loadfunc_t)(struct exec_info *execi);
int libexec_load_elf(struct exec_info *execi);
int libexec_copy_memcpy(struct exec_info *execi, off_t offset, off_t vaddr, size_t len);
int libexec_clear_memset(struct exec_info *execi, off_t vaddr, size_t len);
int libexec_alloc_mmap_prealloc(struct exec_info *execi, off_t vaddr, size_t len);
int libexec_alloc_mmap_ondemand(struct exec_info *execi, off_t vaddr, size_t len);
int libexec_clearproc_vm_procctl(struct exec_info *execi);


@ -7,21 +7,15 @@
int _cpufeature(int cpufeature)
{
u32_t eax, ebx, ecx, edx;
int proc;
eax = ebx = ecx = edx = 0;
proc = getprocessor();
/* If processor supports CPUID and its CPUID supports enough
* parameters, retrieve EDX feature flags to test against.
*/
if(proc >= 586) {
eax = 0;
/* We assume >= pentium for cpuid */
eax = 0;
_cpuid(&eax, &ebx, &ecx, &edx);
if(eax > 0) {
eax = 1;
_cpuid(&eax, &ebx, &ecx, &edx);
if(eax > 0) {
eax = 1;
_cpuid(&eax, &ebx, &ecx, &edx);
}
}
switch(cpufeature) {


@ -74,7 +74,6 @@ SRCS= \
sys_kill.c \
sys_mcontext.c \
sys_memset.c \
sys_newmap.c \
sys_out.c \
sys_physcopy.c \
sys_privctl.c \
@ -127,7 +126,7 @@ SRCS= \
vm_umap.c \
vm_yield_get_block.c \
vm_procctl.c \
vprintf.c \
vprintf.c
.if ${MKCOVERAGE} != "no"
SRCS+= gcov.c \


@ -6,21 +6,6 @@
#include <sys/mman.h>
#include <minix/sysutil.h>
int sys_umap_data_fb(endpoint_t ep, vir_bytes buf, vir_bytes len, phys_bytes *phys)
{
int r;
if((r=sys_umap(ep, VM_D, buf, len, phys)) != OK) {
if(r != EINVAL)
return r;
r = sys_umap(ep, D, buf, len, phys);
}
return r;
}
void *alloc_contig(size_t len, int flags, phys_bytes *phys)
{
vir_bytes buf;
@@ -66,7 +51,7 @@ void *alloc_contig(size_t len, int flags, phys_bytes *phys)
}
/* Get physical address, if requested. */
if(phys != NULL && sys_umap_data_fb(SELF, buf, len, phys) != OK)
if(phys != NULL && sys_umap(SELF, VM_D, buf, len, phys) != OK)
panic("sys_umap_data_fb failed");
return (void *) buf;


@@ -26,7 +26,7 @@ char *value; /* where to store value */
int max_len; /* maximum length of value */
{
message m;
static char mon_params[128*sizeof(char *)]; /* copy parameters here */
static char mon_params[MULTIBOOT_PARAM_BUF_SIZE]; /* copy parameters here */
char *key_value;
int i, s;
size_t keylen;


@@ -97,44 +97,17 @@ int env_memory_parse(mem_chunks, maxchunks)
struct memory *mem_chunks; /* where to store the memory bits */
int maxchunks; /* how many were found */
{
int i, done = 0;
char *s;
struct memory *memp;
char memstr[100], *end;
static kinfo_t kinfo;
int mm;
sys_getkinfo(&kinfo);
/* Initialize everything to zero. */
for (i = 0; i < maxchunks; i++) {
memp = &mem_chunks[i]; /* next mem chunk is stored here */
memp->base = memp->size = 0;
}
memset(mem_chunks, 0, maxchunks*sizeof(*mem_chunks));
/* The available memory is determined by MINIX' boot loader as a list of
* (base:size)-pairs in boothead.s. The 'memory' boot variable is set in
* in boot.s. The format is "b0:s0,b1:s1,b2:s2", where b0:s0 is low mem,
* b1:s1 is mem between 1M and 16M, b2:s2 is mem above 16M. Pairs b1:s1
* and b2:s2 are combined if the memory is adjacent.
*/
if(env_get_param("memory", memstr, sizeof(memstr)-1) != OK)
return -1;
s = memstr;
for (i = 0; i < maxchunks && !done; i++) {
phys_bytes base = 0, size = 0;
memp = &mem_chunks[i]; /* next mem chunk is stored here */
if (*s != 0) { /* get fresh data, unless at end */
/* Read fresh base and expect colon as next char. */
base = strtoul(s, &end, 0x10); /* get number */
if (end != s && *end == ':') s = ++end; /* skip ':' */
else *s=0; /* terminate, should not happen */
/* Read fresh size and expect comma or assume end. */
size = strtoul(s, &end, 0x10); /* get number */
if (end != s && *end == ',') s = ++end; /* skip ',' */
else done = 1;
}
if (base + size <= base) continue;
memp->base = base;
memp->size = size;
for(mm = 0; mm < MAXMEMMAP; mm++) {
mem_chunks[mm].base = kinfo.memmap[mm].addr;
mem_chunks[mm].size = kinfo.memmap[mm].len;
}
return OK;


@@ -26,6 +26,7 @@ void kputc(int c)
if (c != 0) {
/* Append a single character to the output buffer. */
print_buf[buf_count++] = c;
print_buf[buf_count] = c;
buf_count++;
}
}


@@ -84,7 +84,9 @@ void sef_startup()
panic("RS unable to complete init: %d", r);
}
}
else {
else if(sef_self_endpoint == VM_PROC_NR) {
/* VM handles initialization by RS later */
} else {
message m;
/* Wait for an initialization message from RS. We need this to learn the


@@ -1,10 +1,9 @@
#include "syslib.h"
int sys_fork(parent, child, child_endpoint, map_ptr, flags, msgaddr)
int sys_fork(parent, child, child_endpoint, flags, msgaddr)
endpoint_t parent; /* process doing the fork */
endpoint_t child; /* which proc has been created by the fork */
endpoint_t *child_endpoint;
struct mem_map *map_ptr;
u32_t flags;
vir_bytes *msgaddr;
{
@@ -15,7 +14,6 @@ vir_bytes *msgaddr;
m.PR_ENDPT = parent;
m.PR_SLOT = child;
m.PR_MEM_PTR = (char *) map_ptr;
m.PR_FORK_FLAGS = flags;
r = _kernel_call(SYS_FORK, &m);
*child_endpoint = m.PR_ENDPT;


@@ -1,15 +0,0 @@
#include "syslib.h"
int sys_newmap(
endpoint_t proc_ep, /* process whose map is to be changed */
struct mem_map *ptr /* pointer to new map */
)
{
/* A process has been assigned a new memory map. Tell the kernel. */
message m;
m.PR_ENDPT = proc_ep;
m.PR_MEM_PTR = (char *) ptr;
return(_kernel_call(SYS_NEWMAP, &m));
}


@@ -25,8 +25,8 @@ phys_bytes bytes; /* how many bytes */
/* provide backwards compatability arguments to old
* kernels based on process id's; NONE <=> physical
*/
copy_mess.CP_DST_SPACE_OBSOLETE = (dst_proc == NONE ? PHYS_SEG : D);
copy_mess.CP_SRC_SPACE_OBSOLETE = (src_proc == NONE ? PHYS_SEG : D);
copy_mess.CP_DST_SPACE_OBSOLETE = (dst_proc == NONE ? PHYS_SEG : D_OBSOLETE);
copy_mess.CP_SRC_SPACE_OBSOLETE = (src_proc == NONE ? PHYS_SEG : D_OBSOLETE);
return(_kernel_call(SYS_PHYSCOPY, &copy_mess));
}


@@ -22,7 +22,7 @@ int sys_safecopyfrom(endpoint_t src_e,
/* for older kernels that still need the 'seg' field
* provide the right value.
*/
copy_mess.SCP_SEG_OBSOLETE = D;
copy_mess.SCP_SEG_OBSOLETE = D_OBSOLETE;
return(_kernel_call(SYS_SAFECOPYFROM, &copy_mess));
@@ -47,7 +47,7 @@ int sys_safecopyto(endpoint_t dst_e,
/* for older kernels that still need the 'seg' field
* provide the right value.
*/
copy_mess.SCP_SEG_OBSOLETE = D;
copy_mess.SCP_SEG_OBSOLETE = D_OBSOLETE;
return(_kernel_call(SYS_SAFECOPYTO, &copy_mess));


@@ -23,7 +23,7 @@ int sys_safemap(endpoint_t grantor, cp_grant_id_t grant,
copy_mess.SMAP_BYTES = bytes;
copy_mess.SMAP_FLAG = writable;
copy_mess.SMAP_SEG_OBSOLETE = (void *) D;
copy_mess.SMAP_SEG_OBSOLETE = (void *) D_OBSOLETE;
return(_kernel_call(SYS_SAFEMAP, &copy_mess));
@@ -67,7 +67,7 @@ int sys_safeunmap(vir_bytes my_address)
copy_mess.SMAP_ADDRESS = my_address;
copy_mess.SMAP_SEG_OBSOLETE = (void *) D;
copy_mess.SMAP_SEG_OBSOLETE = (void *) D_OBSOLETE;
return(_kernel_call(SYS_SAFEUNMAP, &copy_mess));
}


@@ -23,8 +23,8 @@ phys_bytes bytes; /* how many bytes */
copy_mess.CP_NR_BYTES = (long) bytes;
/* backwards compatability D segs */
copy_mess.CP_DST_SPACE_OBSOLETE = D;
copy_mess.CP_SRC_SPACE_OBSOLETE = D;
copy_mess.CP_DST_SPACE_OBSOLETE = D_OBSOLETE;
copy_mess.CP_SRC_SPACE_OBSOLETE = D_OBSOLETE;
return(_kernel_call(SYS_VIRCOPY, &copy_mess));
}


@@ -63,15 +63,6 @@ int sys_vmctl_get_memreq(endpoint_t *who, vir_bytes *mem,
return r;
}
int sys_vmctl_enable_paging(void * data)
{
message m;
m.SVMCTL_WHO = SELF;
m.SVMCTL_PARAM = VMCTL_ENABLE_PAGING;
m.SVMCTL_VALUE = (u32_t) data;
return _kernel_call(SYS_VMCTL, &m);
}
int sys_vmctl_get_mapping(int index,
phys_bytes *addr, phys_bytes *len, int *flags)
{


@@ -131,7 +131,7 @@ char
VAssert_Init(void)
{
uint32 eax, ebx, ecx, edx;
VA page_address = (VA) &vassert_state.inReplay, ph;
VA page_address = (VA) &vassert_state.inReplay;
if (!VAssert_IsInVM()) {
return -1;
}
@@ -143,16 +143,7 @@ VAssert_Init(void)
}
#endif
/* vmware expects a linear address (or is simply forgetting
* to adjust the given address for segments)
*/
if(sys_umap(SELF, D, page_address, 1, (phys_bytes *) &ph)) {
printf("VAssert_Init: sys_umap failed\n");
return -1;
}
libvassert_process_backdoor(CMD_SET_ADDRESS, ph,
libvassert_process_backdoor(CMD_SET_ADDRESS, page_address,
MAGIC_PORT|(1<<16), &eax, &ebx, &ecx, &edx);
return (eax != -1) ? 0 : -1;


@@ -8,6 +8,4 @@ MAN=
BINDIR?= /usr/sbin
LDFLAGS+= -Wl,--section-start=.init=0x0
.include <bsd.prog.mk>


@@ -17,7 +17,6 @@ struct hook_entry {
char *name;
} hooks[] = {
{ F1, proctab_dmp, "Kernel process table" },
{ F2, memmap_dmp, "Process memory maps" },
{ F3, image_dmp, "System image" },
{ F4, privileges_dmp, "Process privileges" },
{ F5, monparams_dmp, "Boot monitor parameters" },


@@ -45,7 +45,6 @@ static char *proc_name(int proc_nr);
static char *s_traps_str(int flags);
static char *s_flags_str(int flags);
static char *p_rts_flags_str(int flags);
static char *boot_flags_str(int flags);
/* Some global data that is shared among several dumping procedures.
* Note that the process table copy has the same name as in the kernel
@@ -92,7 +91,7 @@ void kmessages_dmp()
*===========================================================================*/
void monparams_dmp()
{
char val[1024];
char val[MULTIBOOT_PARAM_BUF_SIZE];
char *e;
int r;
@@ -160,18 +159,6 @@ void irqtab_dmp()
printf("\n");
}
/*===========================================================================*
* boot_flags_str *
*===========================================================================*/
static char *boot_flags_str(int flags)
{
static char str[10];
str[0] = (flags & PROC_FULLVM) ? 'V' : '-';
str[1] = '\0';
return str;
}
/*===========================================================================*
* image_dmp *
*===========================================================================*/
@@ -188,9 +175,7 @@ void image_dmp()
printf("---name- -nr- flags -stack-\n");
for (m=0; m<NR_BOOT_PROCS; m++) {
ip = &image[m];
printf("%8s %4d %5s\n",
ip->proc_name, ip->proc_nr,
boot_flags_str(ip->flags));
printf("%8s %4d\n", ip->proc_name, ip->proc_nr);
}
printf("\n");
}
@@ -215,15 +200,6 @@ void kenv_dmp()
printf("Dump of kinfo structure.\n\n");
printf("Kernel info structure:\n");
printf("- code_base: %5lu\n", kinfo.code_base);
printf("- code_size: %5lu\n", kinfo.code_size);
printf("- data_base: %5lu\n", kinfo.data_base);
printf("- data_size: %5lu\n", kinfo.data_size);
printf("- proc_addr: %5lu\n", kinfo.proc_addr);
printf("- bootdev_base: %5lu\n", kinfo.bootdev_base);
printf("- bootdev_size: %5lu\n", kinfo.bootdev_size);
printf("- ramdev_base: %5lu\n", kinfo.ramdev_base);
printf("- ramdev_size: %5lu\n", kinfo.ramdev_size);
printf("- nr_procs: %3u\n", kinfo.nr_procs);
printf("- nr_tasks: %3u\n", kinfo.nr_tasks);
printf("- release: %.6s\n", kinfo.release);
@@ -341,7 +317,6 @@ void proctab_dmp()
register struct proc *rp;
static struct proc *oldrp = BEG_PROC_ADDR;
int r;
phys_clicks text, data, size;
/* First obtain a fresh copy of the current process table. */
if ((r = sys_getproctab(proc)) != OK) {
@@ -352,10 +327,6 @@ void proctab_dmp()
printf("\n-nr-----gen---endpoint-name--- -prior-quant- -user----sys-rtsflags-from/to-\n");
PROCLOOP(rp, oldrp)
text = rp->p_memmap[T].mem_phys;
data = rp->p_memmap[D].mem_phys;
size = rp->p_memmap[T].mem_len
+ ((rp->p_memmap[S].mem_phys + rp->p_memmap[S].mem_len) - data);
printf(" %5d %10d ", _ENDPOINT_G(rp->p_endpoint), rp->p_endpoint);
printf("%-8.8s %5u %5u %6lu %6lu ",
rp->p_name,
@@ -393,38 +364,6 @@ void procstack_dmp()
}
}
/*===========================================================================*
* memmap_dmp *
*===========================================================================*/
void memmap_dmp()
{
register struct proc *rp;
static struct proc *oldrp = proc;
int r;
phys_clicks size;
/* First obtain a fresh copy of the current process table. */
if ((r = sys_getproctab(proc)) != OK) {
printf("IS: warning: couldn't get copy of process table: %d\n", r);
return;
}
printf("\n-nr/name--- --pc-- --sp-- -text---- -data---- -stack--- -cr3-\n");
PROCLOOP(rp, oldrp)
size = rp->p_memmap[T].mem_len
+ ((rp->p_memmap[S].mem_phys + rp->p_memmap[S].mem_len)
- rp->p_memmap[D].mem_phys);
printf("%-7.7s%7lx %8lx %4x %4x %4x %4x %5x %5x %8u\n",
rp->p_name,
(unsigned long) rp->p_reg.pc,
(unsigned long) rp->p_reg.sp,
rp->p_memmap[T].mem_phys, rp->p_memmap[T].mem_len,
rp->p_memmap[D].mem_phys, rp->p_memmap[D].mem_len,
rp->p_memmap[S].mem_phys, rp->p_memmap[S].mem_len,
rp->p_seg.p_cr3);
}
}
/*===========================================================================*
* proc_name *
*===========================================================================*/


@@ -12,14 +12,12 @@ static void print_region(struct vm_region_info *vri, int *n)
{
static int vri_count, vri_prev_set;
static struct vm_region_info vri_prev;
char c;
int is_repeat;
/* part of a contiguous identical run? */
is_repeat =
vri &&
vri_prev_set &&
vri->vri_seg == vri_prev.vri_seg &&
vri->vri_prot == vri_prev.vri_prot &&
vri->vri_flags == vri_prev.vri_flags &&
vri->vri_length == vri_prev.vri_length &&
@@ -44,14 +42,7 @@ static void print_region(struct vm_region_info *vri, int *n)
/* NULL indicates the end of a list of mappings, nothing else to do */
if (!vri) return;
/* first in a run, print all info */
switch (vri->vri_seg) {
case T: c = 'T'; break;
case D: c = 'D'; break;
default: c = '?';
}
printf(" %c %08lx-%08lx %c%c%c %c (%lu kB)\n", c, vri->vri_addr,
printf(" %08lx-%08lx %c%c%c %c (%lu kB)\n", vri->vri_addr,
vri->vri_addr + vri->vri_length,
(vri->vri_prot & PROT_READ) ? 'r' : '-',
(vri->vri_prot & PROT_WRITE) ? 'w' : '-',


@@ -12,7 +12,6 @@ void vm_dmp(void);
/* dmp_kernel.c */
void proctab_dmp(void);
void procstack_dmp(void);
void memmap_dmp(void);
void privileges_dmp(void);
void image_dmp(void);
void irqtab_dmp(void);


@@ -12,7 +12,7 @@
*
* The entry points into this file are:
* do_exec: perform the EXEC system call
* do_exec_newmem: allocate new memory map for a process that tries to exec
* do_newexec: handle PM part of exec call after VFS
* do_execrestart: finish the special exec call for RS
* exec_restart: finish a regular exec call
*/
@@ -73,14 +73,14 @@ int do_newexec()
proc_e= m_in.EXC_NM_PROC;
if (pm_isokendpt(proc_e, &proc_n) != OK) {
panic("do_exec_newmem: got bad endpoint: %d", proc_e);
panic("do_newexec: got bad endpoint: %d", proc_e);
}
rmp= &mproc[proc_n];
ptr= m_in.EXC_NM_PTR;
r= sys_datacopy(who_e, (vir_bytes)ptr,
SELF, (vir_bytes)&args, sizeof(args));
if (r != OK)
panic("do_exec_newmem: sys_datacopy failed: %d", r);
panic("do_newexec: sys_datacopy failed: %d", r);
allow_setuid = 0; /* Do not allow setuid execution */
rmp->mp_flags &= ~TAINTED; /* By default not tainted */


@@ -7,7 +7,7 @@
/* Global variables. */
EXTERN struct mproc *mp; /* ptr to 'mproc' slot of current process */
EXTERN int procs_in_use; /* how many processes are marked as IN_USE */
EXTERN char monitor_params[128*sizeof(char *)]; /* boot monitor parameters */
EXTERN char monitor_params[MULTIBOOT_PARAM_BUF_SIZE];
EXTERN struct kinfo kinfo; /* kernel information */
/* Misc.c */
@@ -25,7 +25,6 @@ EXTERN sigset_t noign_sset; /* which signals cannot be ignored */
EXTERN u32_t system_hz; /* System clock frequency. */
EXTERN int abort_flag;
EXTERN char monitor_code[256];
EXTERN struct machine machine; /* machine info */
#ifdef CONFIG_SMP

Some files were not shown because too many files have changed in this diff Show more