
Chapter 5. Lab 3: Process Management


5.1 Fundamental knowledge for lab3

Readers who have completed lab1 and lab2 should already be somewhat familiar with the notion of a "process" in the PKE experiments: a process structure (struct process) has existed since lab1_1. In the earlier labs, however, the most important members of that structure were trapframe and kstack, which respectively record the process context before entering S-mode and serve as the operating system stack after entering S-mode. In lab3 we move to a multitasking environment and carry out the PKE experiments on process creation, switching processes in and out, and process scheduling.

5.1.1 Encapsulating processes for multitasking

The biggest difference between lab3 and the two previous labs is that, in the three basic experiments of lab3, the PKE operating system must support the execution of multiple processes. To support multitasking, PKE defines a "process pool" (see kernel/process.c):

 29 // process pool. added @lab3_1
 30 process procs[NPROC];

This process pool is simply an array of NPROC (=32, see kernel/process.h) process structures.

Next, PKE extends the process structure (see kernel/process.h):

 58   // points to a page that contains mapped_regions. below are added @lab3_1
 59   mapped_region *mapped_info;
 60   // next free mapped region in mapped_info
 61   int total_mapped_region;
 62
 63   // process id
 64   uint64 pid;
 65   // process status
 66   int status;
 67   // parent process
 68   struct process_t *parent;
 69   // next queue element
 70   struct process_t *queue_next;
  • The first two members, mapped_info and total_mapped_region, track the segments (code segment, stack segment, and so on) in the process's virtual address space; this information plays an important role in process creation (fork), which is exactly the topic of lab3_1. PKE classifies the segments a process may own into the following types:
 34 enum segment_type {
 35   CODE_SEGMENT,    // ELF segment
 36   DATA_SEGMENT,    // ELF segment
 37   STACK_SEGMENT,   // runtime segment
 38   CONTEXT_SEGMENT, // trapframe segment
 39   SYSTEM_SEGMENT,  // system segment
 40 };

CODE_SEGMENT indicates a code segment loaded from the executable ELF file, DATA_SEGMENT a data segment loaded from the ELF file, STACK_SEGMENT the process's own stack segment, CONTEXT_SEGMENT the segment holding the trapframe that saves the process context, and SYSTEM_SEGMENT a system segment of the process, such as the mapped trap-handling section.

  • pid is the ID of the process, and it is unique;
  • status records the state of the process. In lab3, PKE defines the following process states:
 25 enum proc_status {
 26   FREE,            // unused state
 27   READY,           // ready state
 28   RUNNING,         // currently running
 29   BLOCKED,         // waiting for something
 30   ZOMBIE,          // terminated but not reclaimed yet
 31 };

FREE means the process structure is unused and available; READY means the process has all the resources it needs and can be scheduled for execution; RUNNING means the process is currently running; BLOCKED means the process is blocked, waiting for something; ZOMBIE means the process has terminated (the "zombie" state) and its resources can be released and reclaimed.

  • parent records the parent of the process;
  • queue_next links the process into various queues (for example, the ready queue);
  • tick_count is used for accounting, i.e., it records how many timer events the process has gone through while executing; it will be used when implementing round-robin scheduling in lab3_3 (the extended process structure is sketched below).
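
Putting these members together, the extended process structure looks roughly as follows. This is an abridged sketch assembled from the fields described above; the authoritative definition is in kernel/process.h.

  typedef struct process_t {
    // kernel stack top, used when the process traps into S-mode
    uint64 kstack;
    // user page table of the process
    pagetable_t pagetable;
    // trapframe storing the context of the (U-mode) process
    trapframe* trapframe;

    // points to a page that contains mapped_regions. added @lab3_1
    mapped_region *mapped_info;
    // next free mapped region in mapped_info
    int total_mapped_region;

    // process id
    uint64 pid;
    // process status
    int status;
    // parent process
    struct process_t *parent;
    // next queue element
    struct process_t *queue_next;
    // accounting for round-robin scheduling, used in lab3_3
    int tick_count;
  } process;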

5.1.2 Starting and terminating processes

To create a process in the PKE experiments, the kernel first calls the alloc_process() function in kernel/process.c:

 92 process* alloc_process() {
 93   // locate the first usable process structure
 94   int i;
 95
 96   for( i=0; i<NPROC; i++ )
 97     if( procs[i].status == FREE ) break;
 98
 99   if( i>=NPROC ){
100     panic( "cannot find any free process structure.\n" );
101     return 0;
102   }
103
104   // init proc[i]'s vm space
105   procs[i].trapframe = (trapframe *)alloc_page();  //trapframe, used to save context
106   memset(procs[i].trapframe, 0, sizeof(trapframe));
107
108   // page directory
109   procs[i].pagetable = (pagetable_t)alloc_page();
110   memset((void *)procs[i].pagetable, 0, PGSIZE);
111
112   procs[i].kstack = (uint64)alloc_page() + PGSIZE;   //user kernel stack top
113   uint64 user_stack = (uint64)alloc_page();       //phisical address of user stack bottom
114   procs[i].trapframe->regs.sp = USER_STACK_TOP;  //virtual address of user stack top
115
116   // allocates a page to record memory regions (segments)
117   procs[i].mapped_info = (mapped_region*)alloc_page();
118   memset( procs[i].mapped_info, 0, PGSIZE );
119
120   // map user stack in userspace
121   user_vm_map((pagetable_t)procs[i].pagetable, USER_STACK_TOP - PGSIZE, PGSIZE,
122     user_stack, prot_to_type(PROT_WRITE | PROT_READ, 1));
123   procs[i].mapped_info[0].va = USER_STACK_TOP - PGSIZE;
124   procs[i].mapped_info[0].npages = 1;
125   procs[i].mapped_info[0].seg_type = STACK_SEGMENT;
126
127   // map trapframe in user space (direct mapping as in kernel space).
128   user_vm_map((pagetable_t)procs[i].pagetable, (uint64)procs[i].trapframe, PGSIZE,
129     (uint64)procs[i].trapframe, prot_to_type(PROT_WRITE | PROT_READ, 0));
130   procs[i].mapped_info[1].va = (uint64)procs[i].trapframe;
131   procs[i].mapped_info[1].npages = 1;
132   procs[i].mapped_info[1].seg_type = CONTEXT_SEGMENT;
133
134   // map S-mode trap vector section in user space (direct mapping as in kernel space)
135   // we assume that the size of usertrap.S is smaller than a page.
136   user_vm_map((pagetable_t)procs[i].pagetable, (uint64)trap_sec_start, PGSIZE,
137     (uint64)trap_sec_start, prot_to_type(PROT_READ | PROT_EXEC, 0));
138   procs[i].mapped_info[2].va = (uint64)trap_sec_start;
139   procs[i].mapped_info[2].npages = 1;
140   procs[i].mapped_info[2].seg_type = SYSTEM_SEGMENT;
141
142   sprint("in alloc_proc. user frame 0x%lx, user stack 0x%lx, user kstack 0x%lx \n",
143     procs[i].trapframe, procs[i].trapframe->regs.sp, procs[i].kstack);
144
145   procs[i].total_mapped_region = 3;
146   // return after initialization.
147   return &procs[i];
148 }

As the code shows, besides locating a free process structure, alloc_process() also establishes the mappings for the logical addresses above KERN_BASE for the newly created process (before lab3 this code lived in the load_user_program() function in kernel/kernel.c), and records the mapping information in the process structure.

For a given application, PKE loads the segments of the corresponding ELF file by calling load_bincode_from_host_elf(). The elf_load() function, called in this process, examines each loaded segment in order to record its virtual address mapping:

 65 elf_status elf_load(elf_ctx *ctx) {
 66   // elf_prog_header structure is defined in kernel/elf.h
 67   elf_prog_header ph_addr;
 68   int i, off;
 69
 70   // traverse the elf program segment headers
 71   for (i = 0, off = ctx->ehdr.phoff; i < ctx->ehdr.phnum; i++, off += sizeof(ph_addr)) {
 72     // read segment headers
 73     if (elf_fpread(ctx, (void *)&ph_addr, sizeof(ph_addr), off) != sizeof(ph_addr)) return EL_EIO;
 74
 75     if (ph_addr.type != ELF_PROG_LOAD) continue;
 76     if (ph_addr.memsz < ph_addr.filesz) return EL_ERR;
 77     if (ph_addr.vaddr + ph_addr.memsz < ph_addr.vaddr) return EL_ERR;
 78
 79     // allocate memory block before elf loading
 80     void *dest = elf_alloc_mb(ctx, ph_addr.vaddr, ph_addr.vaddr, ph_addr.memsz);
 81
 82     // actual loading
 83     if (elf_fpread(ctx, dest, ph_addr.memsz, ph_addr.off) != ph_addr.memsz)
 84       return EL_EIO;
 85
 86     // record the vm region in proc->mapped_info. added @lab3_1
 87     int j;
 88     for( j=0; j<PGSIZE/sizeof(mapped_region); j++ ) //seek the last mapped region
 89       if( (process*)(((elf_info*)(ctx->info))->p)->mapped_info[j].va == 0x0 ) break;
 90
 91     ((process*)(((elf_info*)(ctx->info))->p))->mapped_info[j].va = ph_addr.vaddr;
 92     ((process*)(((elf_info*)(ctx->info))->p))->mapped_info[j].npages = 1;
 93
 94     // SEGMENT_READABLE, SEGMENT_EXECUTABLE, SEGMENT_WRITABLE are defined in kernel/elf.h
 95     if( ph_addr.flags == (SEGMENT_READABLE|SEGMENT_EXECUTABLE) ){
 96       ((process*)(((elf_info*)(ctx->info))->p))->mapped_info[j].seg_type = CODE_SEGMENT;
 97       sprint( "CODE_SEGMENT added at mapped info offset:%d\n", j );
 98     }else if ( ph_addr.flags == (SEGMENT_READABLE|SEGMENT_WRITABLE) ){
 99       ((process*)(((elf_info*)(ctx->info))->p))->mapped_info[j].seg_type = DATA_SEGMENT;
100       sprint( "DATA_SEGMENT added at mapped info offset:%d\n", j );
101     }else
102       panic( "unknown program segment encountered, segment flag:%d.\n", ph_addr.flags );
103
104     ((process*)(((elf_info*)(ctx->info))->p))->total_mapped_region ++;
105   }
106
107   return EL_OK;
108 }

In the code above, lines 95--102 examine the flags of each loaded segment (ph_addr.flags) to determine whether it is a code segment or a data segment. After the virtual-to-physical mappings above are established, the virtual address space of the user process takes the shape shown in Figure 4.5.

Next, the constructed process is put into execution via the switch_to() function:

 41 void switch_to(process* proc) {
 42   assert(proc);
 43   current = proc;
 44
 45   // write the smode_trap_vector (64-bit func. address) defined in kernel/strap_vector.S
 46   // to the stvec privilege register, such that trap handler pointed by smode_trap_vector
 47   // will be triggered when an interrupt occurs in S mode.
 48   write_csr(stvec, (uint64)smode_trap_vector);
 49
 50   // set up trapframe values (in process structure) that smode_trap_vector will need when
 51   // the process next re-enters the kernel.
 52   proc->trapframe->kernel_sp = proc->kstack;      // process's kernel stack
 53   proc->trapframe->kernel_satp = read_csr(satp);  // kernel page table
 54   proc->trapframe->kernel_trap = (uint64)smode_trap_handler;
 55
 56   // SSTATUS_SPP and SSTATUS_SPIE are defined in kernel/riscv.h
 57   // set S Previous Privilege mode (the SSTATUS_SPP bit in sstatus register) to User mode.
 58   unsigned long x = read_csr(sstatus);
 59   x &= ~SSTATUS_SPP;  // clear SPP to 0 for user mode
 60   x |= SSTATUS_SPIE;  // enable interrupts in user mode
 61
 62   // write x back to 'sstatus' register to enable interrupts, and sret destination mode.
 63   write_csr(sstatus, x);
 64
 65   // set S Exception Program Counter (sepc register) to the elf entry pc.
 66   write_csr(sepc, proc->trapframe->epc);
 67
 68   // make user page table. macro MAKE_SATP is defined in kernel/riscv.h. added @lab2_1
 69   uint64 user_satp = MAKE_SATP(proc->pagetable);
 70
 71   // return_to_user() is defined in kernel/strap_vector.S. switch to user mode with sret.
 72   // note, return_to_user takes two parameters @ and after lab2_1.
 73   return_to_user(proc->trapframe, user_satp);
 74 }

This function already appeared in lab1. Its job is to restore the trapframe in the process structure, i.e., the process context, into the general-purpose registers of the RISC-V machine, and finally to execute the sret instruction (via the return_to_user() function) to put the process into execution.

Unlike lab1 and lab2, the exit system call in lab3 cannot simply shut the system down, because the termination of one process does not necessarily mean that all processes in the system have finished. The lab3 implementation of the exit system call is:

 34 ssize_t sys_user_exit(uint64 code) {
 35   sprint("User exit with code:%d.\n", code);
 36   // reclaim the current process, and reschedule. added @lab3_1
 37   free_process( current );
 38   schedule();
 39   return 0;
 40 }

When a process issues the exit() system call, the operating system handles it by calling free_process() to "free" the current process (i.e., the caller) and then switches to process scheduling. The implementation of free_process() (in kernel/process.c) is very simple:

153 int free_process( process* proc ) {
154   // we set the status to ZOMBIE, but cannot destruct its vm space immediately.
155   // since proc can be current process, and its user kernel stack is currently in use!
156   // but for proxy kernel, it (memory leaking) may NOT be a really serious issue,
157   // as it is different from regular OS, which needs to run 7x24.
158   proc->status = ZOMBIE;
159
160   return 0;
161 }

As we can see, free_process() merely marks the process as ZOMBIE instead of releasing all of its resources. The reason is that free_process() is called while the operating system is running in S-mode, and by PKE's design, S-mode execution uses the current process's user kernel stack. Releasing the current process's memory at this point would crash the operating system itself. PKE therefore takes a compromise when releasing a process: it only sets the process to the ZOMBIE state rather than reclaiming its resources immediately. Finally, the call to schedule() selects another ready process in the system (if one exists) to run; its logic is discussed in the next section.

5.1.3 Managing and scheduling ready processes

The PKE operating system uses a very simple ready-queue design (since the basic experiments of lab3 do not involve blocking, no blocked queue is provided). The queue head is defined in kernel/sched.c:

8 process* ready_queue_head = NULL;

A process is added to the ready queue by calling insert_to_ready_queue():

 13 void insert_to_ready_queue( process* proc ) {
 14   sprint( "going to insert process %d to ready queue.\n", proc->pid );
 15   // if the queue is empty in the beginning
 16   if( ready_queue_head == NULL ){
 17     proc->status = READY;
 18     proc->queue_next = NULL;
 19     ready_queue_head = proc;
 20     return;
 21   }
 22
 23   // ready queue is not empty
 24   process *p;
 25   // browse the ready queue to see if proc is already in-queue
 26   for( p=ready_queue_head; p->queue_next!=NULL; p=p->queue_next )
 27     if( p == proc ) return;  //already in queue
 28
 29   // p points to the last element of the ready queue
 30   if( p==proc ) return;
 31   p->queue_next = proc;
 32   proc->status = READY;
 33   proc->queue_next = NULL;
 34
 35   return;
 36 }

The function first (lines 16--21) handles the case where ready_queue_head is empty (the initial state); if the ready queue is not empty, the process is appended to the tail of the queue (lines 26--33).

The PKE kernel selects and switches in a process by calling the schedule() function:

 45 void schedule() {
 46   if ( !ready_queue_head ){
 47     // by default, if there are no ready process, and all processes are in the status of
 48     // FREE and ZOMBIE, we should shutdown the emulated RISC-V machine.
 49     int should_shutdown = 1;
 50
 51     for( int i=0; i<NPROC; i++ )
 52       if( (procs[i].status != FREE) && (procs[i].status != ZOMBIE) ){
 53         should_shutdown = 0;
 54         sprint( "ready queue empty, but process %d is not in free/zombie state:%d\n",
 55           i, procs[i].status );
 56       }
 57
 58     if( should_shutdown ){
 59       sprint( "no more ready processes, system shutdown now.\n" );
 60       shutdown( 0 );
 61     }else{
 62       panic( "Not handled: we should let system wait for unfinished processes.\n" );
 63     }
 64   }
 65
 66   current = ready_queue_head;
 67   assert( current->status == READY );
 68   ready_queue_head = ready_queue_head->queue_next;
 69
 70   current->status = RUNNING;
 71   sprint( "going to schedule process %d to run.\n", current->pid );
 72   switch_to( current );
 73 }

As we can see, schedule() first checks whether the ready queue (ready_queue_head) is empty. If it is (lines 46--64), schedule() checks whether every process in the system is in the FREE or ZOMBIE state. If so, it shuts down the emulated RISC-V machine; otherwise the system should wait for the unfinished processes. Since the basic experiments of lab3 cannot reach that latter state, we simply call panic() here; it will be handled properly once later experiments can actually reach such a state.

When the ready queue is not empty (lines 66--72), the handling is much simpler: the process at the head of the ready queue is switched in for execution. Note that during the switch-in, the selected process must be removed from the ready queue.

5.2 lab3_1 Process creation (fork)

Given application

  • user/app_naive_fork.c
  1 /*
  2  * The application of lab3_1.
  3  * it simply forks a child process.
  4  */
  5
  6 #include "user/user_lib.h"
  7 #include "util/types.h"
  8
  9 int main(void) {
 10   uint64 pid = fork();
 11   if (pid == 0) {
 12     printu("Child: Hello world!\n");
 13   } else {
 14     printu("Parent: Hello world! child id %ld\n", pid);
 15   }
 16
 17   exit(0);
 18 }

The behavior of this program is very simple: the main process calls fork(), which issues a system call that creates a child process using the main process as a template.

  • Commit your lab2_3 answer first, then switch to lab3_1, merge the changes made in lab2_3 and earlier labs, and run the result of make directly:
// switch to lab3_1
$ git checkout lab3_1_fork

// merge the answers of lab2_3 and earlier labs
$ git merge lab2_3_pagefault -m "continue to work on lab3_1"

// rebuild
$ make clean; make

// run the build result
$ spike ./obj/riscv-pke ./obj/app_naive_fork
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_naive_fork
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x0000000000010078
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
You need to implement the code segment mapping of child in lab3_1.

System is shutting down with exit code -1.

The output shows that the application's fork did not actually create the child process and put it into execution. As the prompt says, we need to implement, in the PKE kernel, the mapping from the child process to the parent's code segment in order to complete the fork.

Since the parent's code segment is involved here, we can first use readelf to inspect the ELF structure of the given application's executable:

$ riscv64-unknown-elf-readelf -l ./obj/app_naive_fork

Elf file type is EXEC (Executable file)
Entry point 0x10078
There is 1 program header, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  LOAD           0x0000000000000000 0x0000000000010000 0x0000000000010000
                 0x000000000000040c 0x000000000000040c  R E    0x1000

 Section to Segment mapping:
  Segment Sections...
   00     .text .rodata

As we can see, the app_naive_fork executable contains only one code segment (numbered 00), arguably the simplest possible executable layout, so there is no data segment to worry about. To create a child process from such a parent template, we only need to map (rather than copy) the parent's code segment to the corresponding virtual addresses of the child.

Lab content

Complete the do_fork() function in kernel/process.c of the kernel, and finally obtain the following expected result:

$ spike ./obj/riscv-pke ./obj/app_naive_fork
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_naive_fork
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x0000000000010078
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent: Hello world! child id 1
User exit with code:0.
going to schedule process 1 to run.
Child: Hello world!
User exit with code:0.
no more ready processes, system shutdown now.
System is shutting down with exit code 0.

The output shows that the child process has been created and is subsequently put into execution.

Lab guidance

Recalling what you learned about system calls in lab1_1, you can trace the implementation of fork() starting from the application user/app_naive_fork.c:

user/app_naive_fork.c --> user/user_lib.c --> kernel/strap_vector.S --> kernel/strap.c --> kernel/syscall.c

until you reach the do_fork() function in kernel/process.c:

170 int do_fork( process* parent)
171 {
172   sprint( "will fork a child from parent %d.\n", parent->pid );
173   process* child = alloc_process();
174
175   for( int i=0; i<parent->total_mapped_region; i++ ){
176     // browse parent's vm space, and copy its trapframe and data segments,
177     // map its code segment.
178     switch( parent->mapped_info[i].seg_type ){
179       case CONTEXT_SEGMENT:
180         *child->trapframe = *parent->trapframe;
181         break;
182       case STACK_SEGMENT:
183         memcpy( (void*)lookup_pa(child->pagetable, child->mapped_info[0].va),
184           (void*)lookup_pa(parent->pagetable, parent->mapped_info[i].va), PGSIZE );
185         break;
186       case CODE_SEGMENT:
187         // TODO (lab3_1): implment the mapping of child code segment to parent's
188         // code segment.
189         // hint: the virtual address mapping of code segment is tracked in mapped_info
190         // page of parent's process structure. use the information in mapped_info to
191         // retrieve the virtual to physical mapping of code segment.
192         // after having the mapping information, just map the corresponding virtual
193         // address region of child to the physical pages that actually store the code
194         // segment of parent process.
195         // DO NOT COPY THE PHYSICAL PAGES, JUST MAP THEM.
196         panic( "You need to implement the code segment mapping of child in lab3_1.\n" );
197
198         // after mapping, register the vm region (do not delete codes below!)
199         child->mapped_info[child->total_mapped_region].va = parent->mapped_info[i].va;
200         child->mapped_info[child->total_mapped_region].npages =
201           parent->mapped_info[i].npages;
202         child->mapped_info[child->total_mapped_region].seg_type = CODE_SEGMENT;
203         child->total_mapped_region++;
204         break;
205     }
206   }

This function uses the loop at lines 175--205 to copy the parent's logical address space to its child. For the trapframe segment (case CONTEXT_SEGMENT) and the stack segment (case STACK_SEGMENT), do_fork() simply copies these two segments from the parent into the child, so as to pass the parent's execution context on to the child.

But how should the child "inherit" the parent's code segment? The comments at lines 187--195 tell us that the code segment should not be copied directly (to reduce overhead); instead, the corresponding logical address region of the child should be mapped onto the physical pages of the parent that hold the code segment. Here you need to go back to the memory management part of lab2 and find the appropriate functions. Pay attention to the page permissions (readable and executable).
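
For reference, a minimal sketch of such a mapping is shown below. It reuses lookup_pa(), user_vm_map() and prot_to_type() as they appear in the code quoted earlier in this chapter, and handles a single-page code segment (which is what the quoted elf_load() records); treat it as one possible shape of the solution, not the official answer.

        // (sketch) replace the panic() above with something along these lines:
        // mapped_info of the parent tells us which virtual address holds the code,
        // and the parent's page table tells us which physical page backs it.
        uint64 code_pa = lookup_pa( parent->pagetable, parent->mapped_info[i].va );
        // map -- do NOT copy -- that physical page into the child's address space
        // at the same virtual address, readable and executable for user mode.
        user_vm_map( child->pagetable, parent->mapped_info[i].va, PGSIZE, code_pa,
          prot_to_type(PROT_READ | PROT_EXEC, 1) );
        sprint( "do_fork map code segment at pa:%lx of parent to child at va:%lx.\n",
          code_pa, parent->mapped_info[i].va );
        // keep the registration code that already follows in do_fork(): it records
        // the region in child->mapped_info and increments total_mapped_region.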

When you are done, remember to commit your changes (the string after -m can be anything you like), so that the work done in lab3_1 can be carried over into later labs:

$ git commit -a -m "my work on lab3_1 is done."

5.3 lab3_2 Process yield

Given application

  • user/app_yield.c
  1 /*
  2  * The application of lab3_2.
  3  * parent and child processes intermittently give up their processors.
  4  */
  5
  6 #include "user/user_lib.h"
  7 #include "util/types.h"
  8
  9 int main(void) {
 10   uint64 pid = fork();
 11   uint64 rounds = 0xffff;
 12   if (pid == 0) {
 13     printu("Child: Hello world! \n");
 14     for (uint64 i = 0; i < rounds; ++i) {
 15       if (i % 10000 == 0) {
 16         printu("Child running %ld \n", i);
 17         yield();
 18       }
 19     }
 20   } else {
 21     printu("Parent: Hello world! \n");
 22     for (uint64 i = 0; i < rounds; ++i) {
 23       if (i % 10000 == 0) {
 24         printu("Parent running %ld \n", i);
 25         yield();
 26       }
 27     }
 28   }
 29
 30   exit(0);
 31   return 0;
 32 }

As in lab3_1, the application creates a child process via the fork system call, after which both the parent and the child enter a long loop. In the loop, whenever the iteration count is a multiple of 10000, each process prints a message and also calls yield() to give up its right to execute (i.e., the CPU).

  • Commit your lab3_1 answer first, then switch to lab3_2, merge the changes made in lab3_1 and earlier labs, and run the result of make directly:
// switch to lab3_2
$ git checkout lab3_2_yield

// merge the answers of lab3_1 and earlier labs
$ git merge lab3_1_fork -m "continue to work on lab3_2"

// rebuild
$ make clean; make

// run the build result
$ spike ./obj/riscv-pke ./obj/app_yield
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_yield
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x000000000001017c
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent: Hello world!
Parent running 0
You need to implement the yield syscall in lab3_2.

System is shutting down with exit code -1.

The output shows that the application again cannot continue normally, because the yield() functionality in the PKE kernel has not yet been completed.

Lab content

Complete the yield system call so that a running process can voluntarily give up the CPU. After finishing, you should obtain the following expected result:

$ spike ./obj/riscv-pke ./obj/app_yield
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_yield
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x000000000001017c
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent: Hello world!
Parent running 0
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child: Hello world!
Child running 0
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 10000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 10000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 20000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 20000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 30000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 30000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 40000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 40000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 50000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 50000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 60000
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 60000
going to insert process 1 to ready queue.
going to schedule process 0 to run.
User exit with code:0.
going to schedule process 1 to run.
User exit with code:0.
no more ready processes, system shutdown now.
System is shutting down with exit code 0.

Lab guidance

Giving up the CPU should amount to the following steps (a sketch is given after the list below):

  • set the current process to the ready state (READY);
  • append the current process to the tail of the ready queue;
  • switch to process scheduling.
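
A minimal sketch of these steps is shown below. It assumes the yield system call is dispatched to a handler in kernel/syscall.c (as the panic message in the output above suggests); the handler name and wiring are whatever your syscall path uses.

  // possible handler for the yield syscall (sketch)
  ssize_t sys_user_yield() {
    // put the caller back into the ready state and onto the tail of the ready queue
    current->status = READY;
    insert_to_ready_queue( current );
    // pick the next READY process and switch to it
    schedule();
    return 0;
  }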

When you are done, remember to commit your changes (the string after -m can be anything you like), so that the work done in lab3_2 can be carried over into later labs:

$ git commit -a -m "my work on lab3_2 is done."

5.4 lab3_3 Round-robin scheduling

Given application

  1 /*
  2  * The application of lab3_3.
  3  * parent and child processes never give up their processor during execution.
  4  */
  5
  6 #include "user/user_lib.h"
  7 #include "util/types.h"
  8
  9 int main(void) {
 10   uint64 pid = fork();
 11   uint64 rounds = 100000000;
 12   uint64 interval = 10000000;
 13   uint64 a = 0;
 14   if (pid == 0) {
 15     printu("Child: Hello world! \n");
 16     for (uint64 i = 0; i < rounds; ++i) {
 17       if (i % interval == 0) printu("Child running %ld \n", i);
 18     }
 19   } else {
 20     printu("Parent: Hello world! \n");
 21     for (uint64 i = 0; i < rounds; ++i) {
 22       if (i % interval == 0) printu("Parent running %ld \n", i);
 23     }
 24   }
 25
 26   exit(0);
 27   return 0;
 28 }

Similar to lab3_2, the application in lab3_3 again consists of a parent and a child process, each executing a large loop. Unlike lab3_2, however, neither process voluntarily gives up the CPU inside its loop. Clearly, such a design would let one process occupy the CPU for a long time while the other never gets to run.

  • Commit your lab3_2 answer first, then switch to lab3_3, merge the changes made in lab3_2 and earlier labs, and run the result of make directly:
// switch to lab3_3
$ git checkout lab3_3_rrsched

// merge the answers of lab3_2 and earlier labs
$ git merge lab3_2_yield -m "continue to work on lab3_3"

// rebuild
$ make clean; make

// run the build result
$ spike ./obj/riscv-pke ./obj/app_two_long_loops
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_two_long_loops
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x000000000001017c
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent: Hello world!
Parent running 0
Parent running 10000000
Ticks 0
You need to further implement the timer handling in lab3_3.

System is shutting down with exit code -1.

Recalling lab1_3 of lab1, we see that because the process body is long, a timer interrupt is triggered during execution ("Ticks 0" in the output). Clearly, we can use timer interrupts to implement round-robin scheduling of processes, and thereby avoid the problem of one long-running process preventing the other processes in the system from ever being scheduled.

Lab content

Implement the rrsched() function in kernel/strap.c and obtain the following expected result:

$ spike ./obj/riscv-pke ./obj/app_two_long_loops
In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080010000, PKE kernel size: 0x0000000000010000 .
free physical memory address: [0x0000000080010000, 0x0000000087ffffff]
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on
Switching to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000
User application is loading.
Application: ./obj/app_two_long_loops
CODE_SEGMENT added at mapped info offset:3
Application program entry point (virtual address): 0x000000000001017c
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087faf000, user stack 0x000000007ffff000, user kstack 0x0000000087fae000
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent: Hello world!
Parent running 0
Parent running 10000000
Ticks 0
Parent running 20000000
Ticks 1
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child: Hello world!
Child running 0
Child running 10000000
Ticks 2
Child running 20000000
Ticks 3
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 30000000
Ticks 4
Parent running 40000000
Ticks 5
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 30000000
Ticks 6
Child running 40000000
Ticks 7
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 50000000
Parent running 60000000
Ticks 8
Parent running 70000000
Ticks 9
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 50000000
Child running 60000000
Ticks 10
Child running 70000000
Ticks 11
going to insert process 1 to ready queue.
going to schedule process 0 to run.
Parent running 80000000
Ticks 12
Parent running 90000000
Ticks 13
going to insert process 0 to ready queue.
going to schedule process 1 to run.
Child running 80000000
Ticks 14
Child running 90000000
Ticks 15
going to insert process 1 to ready queue.
going to schedule process 0 to run.
User exit with code:0.
going to schedule process 1 to run.
User exit with code:0.
no more ready processes, system shutdown now.
System is shutting down with exit code 0.

Lab guidance

In fact, if the only goal were to rotate processes and prevent a single process from monopolizing the CPU, it would suffice to reschedule whenever a timer interrupt fires. However, to introduce the notion of a time slice and to control how long a process may run within one slice, we define the length of a "time slice" in kernel/sched.h:

  6 //length of a time slice, in number of ticks
  7 #define TIME_SLICE_LEN  2

The time-slice length TIME_SLICE_LEN is 2 ticks, which means a rescheduling action should be triggered every two ticks.

To support this, the process structure contains the integer member tick_count (see 5.1.1). When completing the rrsched() function in kernel/strap.c to implement round-robin scheduling, the logic should be (a sketch follows the list):

  • check whether the current process's tick_count, after adding 1, is greater than or equal to TIME_SLICE_LEN;
  • if yes, reset the current process's tick_count to zero, put the current process back into the ready queue, and switch to process scheduling;
  • if no, increment the current process's tick_count by 1 and return.
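
A sketch of rrsched() that follows these steps is given below; it assumes rrsched() is invoked from the timer-interrupt handling path in kernel/strap.c, which is where the expected output indicates the rescheduling happens.

  // round-robin scheduling driven by timer ticks (sketch)
  void rrsched() {
    if ( current->tick_count + 1 >= TIME_SLICE_LEN ) {
      // time slice used up: reset accounting, re-queue the process, reschedule
      current->tick_count = 0;
      insert_to_ready_queue( current );
      schedule();
    } else {
      // still within the time slice: just account for this tick
      current->tick_count++;
    }
  }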

When you are done, remember to commit your changes (the string after -m can be anything you like), so that the work done in lab3_3 can be carried over into later labs:

$ git commit -a -m "my work on lab3_3 is done."

5.5 lab3_challenge1 Process wait and data segment copying (difficulty: ★★☆☆☆)

Given application

  • user/app_wait.c
  1 /*                                                                             
  2  * This app fork a child process, and the child process fork a grandchild process.
  3  * every process waits for its own child exit then prints.                     
  4  * Three processes also write their own global variables "flag"
  5  * to different values.
  6  */
  7 
  8 #include "user/user_lib.h"
  9 #include "util/types.h"
 10 
 11 int flag;
 12 int main(void) {
 13     flag = 0;
 14     int pid = fork();
 15     if (pid == 0) {
 16         flag = 1;
 17         pid = fork();
 18         if (pid == 0) {
 19             flag = 2;
 20             printu("Grandchild process end, flag = %d.\n", flag);
 21         } else {
 22             wait(pid);
 23             printu("Child process end, flag = %d.\n", flag);
 24         }
 25     } else {
 26         wait(-1);
 27         printu("Parent process end, flag = %d.\n", flag);
 28     }
 29     exit(0);
 30     return 0;
 31 }

The wait system call is a very important system call in process management. It serves two main purposes:

  • When a process exits, the resources it occupies cannot always be reclaimed immediately; for example, the process's kernel stack may be in use for system call handling at that very moment. A typical way to deal with this is for the kernel to reclaim part of the resources right away when the process exits and mark the process as a zombie; the remaining resources are reclaimed later, when the parent calls wait.
  • Some operations of the parent can only continue after the child has finished and produced its result; in this case wait acts as a process synchronization mechanism.

In the program above, the parent sets flag to 0, forks a child, and then waits for the child to exit. The child sets its own flag to 1, forks a grandchild, and then waits for the grandchild to exit. The grandchild sets its own flag to 2 and prints a message on exit; the child then prints on exit, and finally the parent prints on exit. Because the data segments of parent and child are independent after fork (the same virtual address maps to different physical addresses), the child's assignment to the global variable does not affect the parent's copy, so the result is:

In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080009000, PKE kernel size: 0x0000000000009000 .
free physical memory address: [0x0000000080009000, 0x0000000087ffffff] 
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on 
Switch to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000 
User application is loading.
Application: obj/app_wait
CODE_SEGMENT added at mapped info offset:3
DATA_SEGMENT added at mapped info offset:4
Application program entry point (virtual address): 0x00000000000100b0
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087fae000, user stack 0x000000007ffff000, user kstack 0x0000000087fad000 
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
going to schedule process 1 to run.
User call fork.
will fork a child from parent 1.
in alloc_proc. user frame 0x0000000087fa1000, user stack 0x000000007ffff000, user kstack 0x0000000087fa0000 
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Grandchild process end, flag = 2.
User exit with code:0.
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child process end, flag = 1.
User exit with code:0.
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent process end, flag = 0.
User exit with code:0.
no more ready processes, system shutdown now.
System is shutting down with exit code 0.

Lab content

This is a challenge lab; its base code inherits and uses the code as completed after lab3_3:

  • Commit your lab3_3 answer first, then switch to lab3_challenge1_wait and merge the changes made in lab3_3:
// switch to lab3_challenge1_wait
$ git checkout lab3_challenge1_wait

// merge the answers of lab3_3 and earlier labs
$ git merge lab3_3_rrsched -m "continue to work on lab3_challenge1"

Note: unlike the basic labs, the base code of a challenge lab is left deliberately more incomplete and may not even build as-is. Also unlike the basic labs, we do not specifically mark which parts of the code need to be filled in and which do not, leaving more room for the reader's own design.

  • The specific requirements of this lab are:
    • By modifying the PKE kernel and its system calls, provide user programs with a wait function that takes one argument, pid:
      • when pid is -1, the parent waits for any one of its children to exit, and that child's pid is returned;
      • when pid is greater than 0, the parent waits for the child whose process id is pid to exit, and that child's pid is returned;
      • if pid is invalid, or pid is greater than 0 but the corresponding process is not a child of the current process, return -1.
    • Extend the do_fork function: lab3_1 handled the code segment; you now also need to implement copying of the data segment, and make sure the parent's and child's data segments are independent after fork (a rough sketch is given after this list).
  • Note: the final test program may differ from the given user program, but it will only involve the wait function, the fork function, and reads/writes of global variables.
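
As one possible shape of the data-segment handling, the sketch below adds a DATA_SEGMENT case to the loop in do_fork(). It reuses alloc_page(), memcpy(), lookup_pa(), user_vm_map() and prot_to_type() as they appear earlier in this chapter and assumes a single-page data segment, as recorded by the quoted elf_load(); the exact bookkeeping may differ in your design.

      case DATA_SEGMENT: {
        // give the child its own physical page and copy the parent's data into it,
        // so the two data segments are independent after fork
        void *child_pa = alloc_page();
        memcpy( child_pa,
          (void *)lookup_pa(parent->pagetable, parent->mapped_info[i].va), PGSIZE );

        // map the new page at the same virtual address, readable and writable
        user_vm_map( child->pagetable, parent->mapped_info[i].va, PGSIZE,
          (uint64)child_pa, prot_to_type(PROT_WRITE | PROT_READ, 1) );

        // register the region in the child's mapped_info
        child->mapped_info[child->total_mapped_region].va = parent->mapped_info[i].va;
        child->mapped_info[child->total_mapped_region].npages = 1;
        child->mapped_info[child->total_mapped_region].seg_type = DATA_SEGMENT;
        child->total_mapped_region++;
        break;
      }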

Lab guidance

  • Your kernel modifications will likely include adding a system call, implementing the functionality of wait inside the kernel, and completing the do_fork function (a sketch of the wait logic follows).
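
Purely for illustration, the sketch below shows one way the kernel side of wait might decide whether a suitable child exists. The helper name do_wait, the way the caller is blocked (set to BLOCKED and rescheduled, to be woken up from the exit path), and how the child's pid is delivered back are all assumptions of this sketch, not part of the given code.

  // illustrative kernel-side wait logic (name and blocking strategy are assumptions)
  int do_wait( int pid ) {
    if ( pid != -1 && pid <= 0 ) return -1;  // invalid argument
    int has_candidate = 0;
    for ( int i = 0; i < NPROC; i++ ) {
      if ( procs[i].parent != current ) continue;              // not my child
      if ( pid > 0 && procs[i].pid != (uint64)pid ) continue;  // waiting for a specific child
      has_candidate = 1;
      if ( procs[i].status == ZOMBIE ) {
        procs[i].status = FREE;        // reclaim the child's process structure
        return (int)procs[i].pid;      // report which child exited
      }
    }
    if ( !has_candidate ) return -1;   // pid is not a child of the current process

    // a matching child exists but has not exited yet: block the caller and
    // reschedule; the exit path must later wake the parent and deliver the pid.
    current->status = BLOCKED;
    schedule();
    return 0;  // placeholder; the real value is filled in when the parent is woken
  }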

Note: after completing the lab, please write additional applications of your own to test your implementation.

Also, the later basic labs do not depend on the challenge labs, so you may decide for yourself whether to commit your work to your local repository (committing is of course a good habit, if only to preserve your own "work").

5.6 lab3_challenge2 Implementing semaphores (difficulty: ★★★☆☆)

Given application

  • user/app_semaphore.c
  1 /*
  2 * This app create two child process.
  3 * Use semaphores to control the order of
  4 * the main process and two child processes print info. 
  5 */
  6 #include "user/user_lib.h"
  7 #include "util/types.h"
  8 
  9 int main(void) {
 10     int main_sem, child_sem[2];
 11     main_sem = sem_new(1);
 12     for (int i = 0; i < 2; i++) child_sem[i] = sem_new(0);
 13     int pid = fork();
 14     if (pid == 0) {
 15         pid = fork();
 16         for (int i = 0; i < 10; i++) {
 17             sem_P(child_sem[pid == 0]);
 18             printu("Child%d print %d\n", pid == 0, i);
 19             if (pid != 0) sem_V(child_sem[1]); else sem_V(main_sem);
 20         }
 21     } else {
 22         for (int i = 0; i < 10; i++) {
 23             sem_P(main_sem);
 24             printu("Parent print %d\n", i);
 25             sem_V(child_sem[0]);
 26         }
 27     }
 28     exit(0);
 29     return 0;
 30 }

By incrementing and decrementing semaphores, the program above makes the main process and the two child processes print in turn, in the order: main process, first child, second child, main process, first child, second child, and so on. The expected output of the application is:

In m_start, hartid:0
HTIF is available!
(Emulated) memory size: 2048 MB
Enter supervisor mode...
PKE kernel start 0x0000000080000000, PKE kernel end: 0x0000000080009000, PKE kernel size: 0x0000000000009000 .
free physical memory address: [0x0000000080009000, 0x0000000087ffffff] 
kernel memory manager is initializing ...
KERN_BASE 0x0000000080000000
physical address of _etext is: 0x0000000080005000
kernel page table is on 
Switch to user mode...
in alloc_proc. user frame 0x0000000087fbc000, user stack 0x000000007ffff000, user kstack 0x0000000087fbb000 
User application is loading.
Application: obj/app_semaphore
CODE_SEGMENT added at mapped info offset:3
DATA_SEGMENT added at mapped info offset:4
Application program entry point (virtual address): 0x00000000000100b0
going to insert process 0 to ready queue.
going to schedule process 0 to run.
User call fork.
will fork a child from parent 0.
in alloc_proc. user frame 0x0000000087fae000, user stack 0x000000007ffff000, user kstack 0x0000000087fad000 
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 1 to ready queue.
Parent print 0
going to schedule process 1 to run.
User call fork.
will fork a child from parent 1.
in alloc_proc. user frame 0x0000000087fa2000, user stack 0x000000007ffff000, user kstack 0x0000000087fa1000 
do_fork map code segment at pa:0000000087fb2000 of parent to child at va:0000000000010000.
going to insert process 2 to ready queue.
Child0 print 0
going to schedule process 2 to run.
Child1 print 0
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 1
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 1
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 1
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 2
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 2
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 2
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 3
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 3
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 3
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 4
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 4
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 4
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 5
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 5
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 5
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 6
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 6
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 6
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 7
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 7
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 7
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 8
going to insert process 1 to ready queue.
going to schedule process 1 to run.
Child0 print 8
going to insert process 2 to ready queue.
going to schedule process 2 to run.
Child1 print 8
going to insert process 0 to ready queue.
going to schedule process 0 to run.
Parent print 9
going to insert process 1 to ready queue.
User exit with code:0.
going to schedule process 1 to run.
Child0 print 9
going to insert process 2 to ready queue.
User exit with code:0.
going to schedule process 2 to run.
Child1 print 9
User exit with code:0.
no more ready processes, system shutdown now.
System is shutting down with exit code 0.

Lab content

This is a challenge lab; its base code inherits and uses the code as completed after lab3_3:

  • Commit your lab3_3 answer first, then switch to lab3_challenge2_semaphore and merge the changes made in lab3_3:
// switch to lab3_challenge2_semaphore
$ git checkout lab3_challenge2_semaphore

// merge the answers of lab3_3 and earlier labs
$ git merge lab3_3_rrsched -m "continue to work on lab3_challenge2"

Note: unlike the basic labs, the base code of a challenge lab is left deliberately more incomplete and may not even build as-is. Also unlike the basic labs, we do not specifically mark which parts of the code need to be filled in and which do not, leaving more room for the reader's own design.

  • The specific requirement of this lab is: by modifying the PKE kernel and its system calls, provide semaphore functionality to user programs.
  • Note: the final test program may differ from the given user program, but it will only involve semaphore-related operations.
  • Hint: do not make the semaphore structures too large, or kernel_size will run into problems.

Lab guidance

  • Your kernel modifications will likely include the following (a sketch follows this list):
    • adding system calls, so that user operations on semaphores are handled in kernel mode;
    • implementing semaphore allocation, release, and P/V operations in the kernel, where a P operation that has to wait triggers process scheduling.
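
As a rough illustration only, the sketch below shows one possible kernel-side representation of semaphores and their P/V operations, cooperating with the ready queue shown earlier. The array size SEM_MAX, the structure layout, and the per-semaphore wait queue (linked through queue_next) are all assumptions of this sketch.

  // illustrative kernel-side semaphores (sizes, names and layout are assumptions)
  #define SEM_MAX 32

  typedef struct semaphore_t {
    int used;              // has this slot been handed out by sem_new?
    int value;             // the semaphore counter
    process *wait_queue;   // processes blocked on this semaphore (via queue_next)
  } semaphore;

  static semaphore sems[SEM_MAX];

  int do_sem_new( int initial_value ) {
    for ( int i = 0; i < SEM_MAX; i++ )
      if ( !sems[i].used ) {
        sems[i].used = 1; sems[i].value = initial_value; sems[i].wait_queue = NULL;
        return i;          // the id handed back to user space
      }
    return -1;
  }

  void do_sem_P( int sid ) {
    if ( sems[sid].value > 0 ) { sems[sid].value--; return; }
    // nothing available: block the caller and let another process run
    current->status = BLOCKED;
    current->queue_next = sems[sid].wait_queue;
    sems[sid].wait_queue = current;
    schedule();
  }

  void do_sem_V( int sid ) {
    if ( sems[sid].wait_queue ) {
      // hand the "resource" directly to one waiter instead of raising the counter
      process *p = sems[sid].wait_queue;
      sems[sid].wait_queue = p->queue_next;
      insert_to_ready_queue( p );
    } else {
      sems[sid].value++;
    }
  }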

Note: after completing the lab, please write additional applications of your own to test your implementation.

Also, the later basic labs do not depend on the challenge labs, so you may decide for yourself whether to commit your work to your local repository (committing is of course a good habit, if only to preserve your own "work").