
Using block device files in Linux 0.11: block-device I/O is very slow, far behind the speed of memory and the CPU. To reduce the number of accesses to block devices, the Linux file system provides an in-memory buffer cache.

When data from a block device is needed, the buffer cache is searched first; on a hit the block is returned immediately, otherwise the data is read from the device into a buffer block and then copied to the user buffer. The buffer block is not discarded right away but stays in memory for a while, so if the same file is read again it can be served directly from the cache. When no buffer block is available, the requesting process must sleep and wait to be explicitly woken up.

The buffer cache has its own management policy: blocks that have not been used for a long time can be handed to other processes, and dirty blocks must be written back to disk first. Write-back only happens under certain conditions, so before unplugging a device you should unmount it so that the dirty blocks get flushed; otherwise data may be lost and the file system may be corrupted.

This article explores the write path of block device files, top-down through the Linux 0.11 source code. For character devices, see the companion article on character device usage in Linux 0.11.

2. The block-device write function

In system calls such as sys_read and sys_write, the mode field of the inode identifies the concrete file type, and the corresponding device read/write function is called. Linux 0.11 has three kinds of block devices: the RAM disk, the hard disk, and the floppy disk. Their write function lives in fs/block_dev.c (p293, line 14). This article focuses on the write path rather than the read path: a write first reads data into the buffer cache, then modifies the buffer, and the buffer is eventually written back to disk at some later point. This path is the more complete and complex one; once it is understood, following the read function's source is no problem.

int block_write(int dev, long * pos, char * buf, int count)
{
    int block = *pos >> BLOCK_SIZE_BITS;
    int offset = *pos & (BLOCK_SIZE-1);
    int chars;
    int written = 0;
    struct buffer_head * bh;
    register char * p;

    while (count>0) {
        chars = BLOCK_SIZE - offset;
        if (chars > count)
            chars=count;
        if (chars == BLOCK_SIZE)
            bh = getblk(dev,block);
        else
            bh = breada(dev,block,block+1,block+2,-1);
        block++;
        if (!bh)
            return written?written:-EIO;
        p = offset + bh->b_data;
        offset = 0;
        *pos += chars;
        written += chars;
        count -= chars;
        while (chars-->0)
            *(p++) = get_fs_byte(buf++);
        bh->b_dirt = 1;
        brelse(bh);
    }
    return written;
}

First, BLOCK_SIZE_BITS and BLOCK_SIZE are defined in include/linux/fs.h (p394, line 49):

#define BLOCK_SIZE 1024
#define BLOCK_SIZE_BITS 10

Here the whole device is treated as one large file, and pos is the offset into that file; it pays no attention to the boot block or the superblock, it simply counts from the first block of the device. For a block device the basic unit of operation is the block, defined here as 1024 bytes, i.e. two sectors. pos is mapped to a block number block and an offset offset within that block; the block is read from disk into a buffer, the user data is copied over it (overwriting the covered bytes), and finally the buffer is released (its reference count is decremented). Read-ahead is used here: when the write does not cover a full block, breada reads the block plus the next two in advance, so the next iteration can fetch them directly from the cache via getblk.

3. The buffer cache

3.1 Getting a buffer block: getblk

getblk is defined in fs/buffer.c (p247, line 206):

/*
 * Ok, this is getblk, and it isn't very clear, again to hinder
 * race-conditions. Most of the code is seldom used, (ie repeating),
 * so it should be much more efficient than it looks.
 *
 * The algoritm is changed: hopefully better, and an elusive bug removed.
 */
#define BADNESS(bh) (((bh)->b_dirt<<1)+(bh)->b_lock)
struct buffer_head * getblk(int dev,int block)
{
    struct buffer_head * tmp, * bh;

repeat:
    if ((bh = get_hash_table(dev,block)))
        return bh;
    tmp = free_list;
    do {
        if (tmp->b_count)
            continue;
        if (!bh || BADNESS(tmp)<BADNESS(bh)) {
            bh = tmp;
            if (!BADNESS(tmp))
                break;
        }
/* and repeat until we find something good */
    } while ((tmp = tmp->b_next_free) != free_list);
    if (!bh) {
        sleep_on(&buffer_wait);
        goto repeat;
    }
    wait_on_buffer(bh);
    if (bh->b_count)
        goto repeat;
    while (bh->b_dirt) {
        sync_dev(bh->b_dev);
        wait_on_buffer(bh);
        if (bh->b_count)
            goto repeat;
    }
/* NOTE!! While we slept waiting for this block, somebody else might */
/* already have added "this" block to the cache. check it */
    if (find_buffer(dev,block))
        goto repeat;
/* OK, FINALLY we know that this buffer is the only one of it's kind, */
/* and that it's unused (b_count=0), unlocked (b_lock=0), and clean */
    bh->b_count=1;
    bh->b_dirt=0;
    bh->b_uptodate=0;
    remove_from_queues(bh);
    bh->b_dev=dev;
    bh->b_blocknr=block;
    insert_into_queues(bh);
    return bh;
}

The function first checks via get_hash_table whether the buffer block already exists; if so, it is returned directly. Otherwise it scans free_list. If every buffer block on the free list is in use (b_count > 0), the process sleeps on the buffer_wait queue and starts over after being woken. Otherwise it waits for the chosen block to be unlocked. Race conditions are handled carefully here: the conditions are re-checked after every sleep. If the block is unused but its dirty flag is set, sync_dev writes back all inodes and dirty blocks of that device (i.e. issues the write requests); wait_on_buffer can sleep here, because a block is locked while it is being written. After the write-back, the code checks once more whether the wanted block has meanwhile appeared in a hash queue; if so it starts over, otherwise it now owns a clean buffer block. The block is removed from its old queues and inserted into the new ones, at the head of its hash queue and at the tail of the free list, so that the now-cached block can be found quickly and survives in the cache the longest.

The hash function is the XOR of the device number and the logical block number; each hash queue is a doubly linked list, while the free list is a doubly linked circular list. The block finally returned either already contains the data, or is an unused block without data. The caller must check b_uptodate; if the block is not up to date, ll_rw_block must be called.

3.2 Functions called by getblk

The hash function is defined as follows; the hash table has NR_HASH = 307 buffer_head list heads and resolves collisions by chaining.

#define _hashfn(dev,block) (((unsigned)(dev^block))%NR_HASH)
#define hash(dev,block) hash_table[_hashfn(dev,block)]

find_buffer first uses the hash function to locate the right hash queue, then walks that queue comparing (dev, block) to see whether the buffer block exists. Note that the function is declared static, so it is not visible to other files.

static struct buffer_head * find_buffer(int dev, int block)
{
    struct buffer_head * tmp;

    for (tmp = hash(dev,block) ; tmp != NULL ; tmp = tmp->b_next)
        if (tmp->b_dev==dev && tmp->b_blocknr==block)
            return tmp;
    return NULL;
}

get_hash_table wraps find_buffer with race-condition handling: it first increments the reference count, then, if the block is locked, sleeps until it is unlocked. Afterwards it re-checks that the block's device and block numbers have not changed; if they have not, the block is returned. The re-check is needed because unlocking a block wakes up every process sleeping on it, and those processes may then operate on the block concurrently and repurpose it. Several processes are allowed to share one buffer block, as long as it is not locked.

/*
 * Why like this, I hear you say... The reason is race-conditions.
 * As we don't lock buffers (unless we are reading them, that is),
 * something might happen to it while we sleep (ie a read-error
 * will force it bad). This shouldn't really happen currently, but
 * the code is ready.
 */
struct buffer_head * get_hash_table(int dev, int block)
{
    struct buffer_head * bh;

    for (;;) {
        if (!(bh=find_buffer(dev,block)))
            return NULL;
        bh->b_count++;
        wait_on_buffer(bh);
        if (bh->b_dev == dev && bh->b_blocknr == block)
            return bh;
        bh->b_count--;
    }
}

wait_on_buffer puts the current process to sleep while a block's data is being read, or while it waits for a dirty block to be written back; when several processes request the same locked buffer block they form a sleep queue. This behaves much like a mutex whose granularity is one buffer block: a block is locked while it sits on a device's request list.

static inline void wait_on_buffer(struct buffer_head * bh)
{
    cli();
    while (bh->b_lock)
        sleep_on(&bh->b_wait);
    sti();
}

sync_dev writes back all dirty blocks and inodes of the given device, i.e. it issues the write requests.

int sync_dev(int dev)
{
    int i;
    struct buffer_head * bh;

    bh = start_buffer;
    for (i=0 ; i<NR_BUFFERS ; i++,bh++) {
        if (bh->b_dev != dev)
            continue;
        wait_on_buffer(bh);
        if (bh->b_dev == dev && bh->b_dirt)
            ll_rw_block(WRITE,bh);
    }
    sync_inodes();
    bh = start_buffer;
    for (i=0 ; i<NR_BUFFERS ; i++,bh++) {
        if (bh->b_dev != dev)
            continue;
        wait_on_buffer(bh);
        if (bh->b_dev == dev && bh->b_dirt)
            ll_rw_block(WRITE,bh);
    }
    return 0;
}

sync_inodes is in fs/inode.c (p258, line 59); it writes every dirty in-memory inode to its corresponding buffer block.

void sync_inodes(void)
{
    int i;
    struct m_inode * inode;

    inode = 0+inode_table;
    for(i=0 ; i<NR_INODE ; i++,inode++) {
        wait_on_inode(inode);
        if (inode->i_dirt && !inode->i_pipe)
            write_inode(inode);
    }
}

remove_from_queues removes bh from its hash queue and from the free list.

static inline void remove_from_queues(struct buffer_head * bh)
{
/* remove from hash-queue */
    if (bh->b_next)
        bh->b_next->b_prev = bh->b_prev;
    if (bh->b_prev)
        bh->b_prev->b_next = bh->b_next;
    if (hash(bh->b_dev,bh->b_blocknr) == bh)
        hash(bh->b_dev,bh->b_blocknr) = bh->b_next;
/* remove from free list */
    if (!(bh->b_prev_free) || !(bh->b_next_free))
        panic("Free block list corrupted");
    bh->b_prev_free->b_next_free = bh->b_next_free;
    bh->b_next_free->b_prev_free = bh->b_prev_free;
    if (free_list == bh)
        free_list = bh->b_next_free;
}

insert_into_queues inserts bh at the tail of the free list and at the head of its hash queue.

static inline void insert_into_queues(struct buffer_head * bh)
{
/* put at end of free list */
    bh->b_next_free = free_list;
    bh->b_prev_free = free_list->b_prev_free;
    free_list->b_prev_free->b_next_free = bh;
    free_list->b_prev_free = bh;
/* put the buffer in new hash-queue if it has a device */
    bh->b_prev = NULL;
    bh->b_next = NULL;
    if (!bh->b_dev)
        return;
    bh->b_next = hash(bh->b_dev,bh->b_blocknr);
    hash(bh->b_dev,bh->b_blocknr) = bh;
    bh->b_next->b_prev = bh;
}

3.3 The public interface breada

The function first gets the corresponding block and checks whether it is already up to date, i.e. readable; if it is, the block was already present in a hash queue. Otherwise ll_rw_block must issue a read request. (The bh inside the loop of the function below should be tmp.) Read requests are issued for the additional blocks, but the function only waits for the first block to be unlocked and its data to have arrived from disk, so breada guarantees only that the block (dev, first) has been read. This function is part of the interface exported by the buffer cache.

/*
 * Ok, breada can be used as bread, but additionally to mark other
 * blocks for reading as well. End the argument list with a negative
 * number.
 */
struct buffer_head * breada(int dev,int first, ...)
{
    va_list args;
    struct buffer_head * bh, *tmp;

    va_start(args,first);
    if (!(bh=getblk(dev,first)))
        panic("bread: getblk returned NULL\n");
    if (!bh->b_uptodate)
        ll_rw_block(READ,bh);
    while ((first=va_arg(args,int))>=0) {
        tmp=getblk(dev,first);
        if (tmp) {
            if (!tmp->b_uptodate)
                ll_rw_block(READA,bh);
            tmp->b_count--;
        }
    }
    va_end(args);
    wait_on_buffer(bh);
    if (bh->b_uptodate)
        return bh;
    brelse(bh);
    return (NULL);
}

3.4 Releasing a buffer block: brelse

After block_write has copied the user data into the buffer block, it sets the dirty flag and calls brelse(bh). brelse decrements the reference count and wakes up the processes on the buffer_wait queue, which are waiting for an unused buffer block.

void brelse(struct buffer_head * buf)
{
    if (!buf)
        return;
    wait_on_buffer(buf);
    if (!(buf->b_count--))
        panic("Trying to free free buffer");
    wake_up(&buffer_wait);
}

4. Low-level block-device operations

4.1 The public interface ll_rw_block

ll_rw_block is in kernel/blk_drv/ll_rw_blk.c (p153, line 145):

void ll_rw_block(int rw, struct buffer_head * bh)
{
    unsigned int major;

    if ((major=MAJOR(bh->b_dev)) >= NR_BLK_DEV ||
    !(blk_dev[major].request_fn)) {
        printk("Trying to read nonexistent block-device\n\r");
        return;
    }
    make_request(major,rw,bh);
}

Here rw indicates a read or write request, and bh carries the data (or receives it). The major device number is first checked for validity, and the device's request function must exist, i.e. there must be a driver. If both hold, the request is added to the device's request list.

4.2 Adding a request: make_request

The function first checks whether the request is a read-ahead or write-ahead; if so and the buffer is locked, it simply returns, because ahead operations are optional. Otherwise the ahead command is converted into a plain read or write. The buffer is then locked; it is unlocked in the interrupt handler once the data transfer completes. If it is a write but the buffer is not dirty, or a read but the buffer is already up to date, the function returns immediately.
Next it looks for a free request slot; note that the last third of the slots is reserved for read operations. If no slot is found, an ahead request returns immediately, while a normal one sleeps on the wait_for_request queue. Finally the request is filled in from the information in the bh header: the block number is converted to a sector number, two sectors are requested, and the request is added to the device's request list.

static void make_request(int major,int rw, struct buffer_head * bh)
{
    struct request * req;
    int rw_ahead;

/* WRITEA/READA is special case - it is not really needed, so if the */
/* buffer is locked, we just forget about it, else it's a normal read */
    if ((rw_ahead = (rw == READA || rw == WRITEA))) {
        if (bh->b_lock)
            return;
        if (rw == READA)
            rw = READ;
        else
            rw = WRITE;
    }
    if (rw!=READ && rw!=WRITE)
        panic("Bad block dev command, must be R/W/RA/WA");
    lock_buffer(bh);
    if ((rw == WRITE && !bh->b_dirt) || (rw == READ && bh->b_uptodate)) {
        unlock_buffer(bh);
        return;
    }
repeat:
/* we don't allow the write-requests to fill up the queue completely:
 * we want some room for reads: they take precedence. The last third
 * of the requests are only for reads.
 */
    if (rw == READ)
        req = request+NR_REQUEST;
    else
        req = request+((NR_REQUEST*2)/3);
/* find an empty request */
    while (--req >= request)
        if (req->dev<0)
            break;
/* if none found, sleep on new requests: check for rw_ahead */
    if (req < request) {
        if (rw_ahead) {
            unlock_buffer(bh);
            return;
        }
        sleep_on(&wait_for_request);
        goto repeat;
    }
/* fill up the request-info, and add it to the queue */
    req->dev = bh->b_dev;
    req->cmd = rw;
    req->errors=0;
    req->sector = bh->b_blocknr<<1;
    req->nr_sectors = 2;
    req->buffer = bh->b_data;
    req->waiting = NULL;
    req->bh = bh;
    req->next = NULL;
    add_request(major+blk_dev,req);
}

Now look at lock_buffer: it simply acquires the buffer block's lock, sleeping if another process already holds it.

static inline void lock_buffer(struct buffer_head * bh)
{
    cli();
    while (bh->b_lock)
        sleep_on(&bh->b_wait);
    bh->b_lock=1;
    sti();
}

4.3 The Linux elevator algorithm: add_request

If the device has no pending requests, the request function is called directly (do_hd_request for the hard disk). Otherwise the request list is traversed and req is inserted at the position chosen by the elevator algorithm.

(figure: elevator request ordering)

/*
 * add-request adds a request to the linked list.
 * It disables interrupts so that it can muck with the
 * request-lists in peace.
 */
static void add_request(struct blk_dev_struct * dev, struct request * req)
{
    struct request * tmp;

    req->next = NULL;
    cli();
    if (req->bh)
        req->bh->b_dirt = 0;
    if (!(tmp = dev->current_request)) {
        dev->current_request = req;
        sti();
        (dev->request_fn)();
        return;
    }
    for ( ; tmp->next ; tmp=tmp->next)
        if ((IN_ORDER(tmp,req) ||
            !IN_ORDER(tmp,tmp->next)) &&
            IN_ORDER(req,tmp->next))
            break;
    req->next=tmp->next;
    tmp->next=req;
    sti();
}

IN_ORDER is defined in kernel/blk_drv/blk.h (p134, line 35):

/*
 * This is used in the elevator algorithm: Note that
 * reads always go before writes. This is natural: reads
 * are much more time-critical than writes.
 */
#define IN_ORDER(s1,s2) \
((s1)->cmd<(s2)->cmd || ((s1)->cmd==(s2)->cmd && \
((s1)->dev < (s2)->dev || ((s1)->dev == (s2)->dev && \
(s1)->sector < (s2)->sector))))

The macro orders read requests before write requests; for the same command, the request with the lower device number comes first, i.e. lower partitions first; and for the same device (the same partition), the lower sector number comes first.


4.4 Related definitions

Now take a look at blk.h (in kernel/blk_drv/, p133):

#ifndef _BLK_H
#define _BLK_H

#define NR_BLK_DEV  7
/*
 * NR_REQUEST is the number of entries in the request-queue.
 * NOTE that writes may use only the low 2/3 of these: reads take precedence.
 *
 * 32 seems to be a reasonable number: enough to get some benefit from
 * the elevator-mechanism, but not so much as to lock a lot of buffers
 * when they are in the queue. 64 seems to be too many (easily long
 * pauses in reading when heavy writing/syncing is going on)
 */
#define NR_REQUEST  32

/*
 * Ok, this is an expanded form so that we can use the same request for
 * paging requests when that is implemented. In paging, 'bh' is NULL,
 * and 'waiting' is used to wait for read/write completion.
 */
struct request {
    int dev;        /* -1 if no request */
    int cmd;        /* READ or WRITE */
    int errors;
    unsigned long sector;
    unsigned long nr_sectors;
    char * buffer;
    struct task_struct * waiting;
    struct buffer_head * bh;
    struct request * next;
};

/*
 * This is used in the elevator algorithm: Note that
 * reads always go before writes. This is natural: reads
 * are much more time-critical than writes.
 */
#define IN_ORDER(s1,s2) \
((s1)->cmd<(s2)->cmd || ((s1)->cmd==(s2)->cmd && \
((s1)->dev < (s2)->dev || ((s1)->dev == (s2)->dev && \
(s1)->sector < (s2)->sector))))

struct blk_dev_struct {
    void (*request_fn)(void);
    struct request * current_request;
};

extern struct blk_dev_struct blk_dev[NR_BLK_DEV];
extern struct request request[NR_REQUEST];
extern struct task_struct * wait_for_request;

#ifdef MAJOR_NR

/*
 * Add entries as needed. Currently the only block devices
 * supported are hard-disks and floppies.
 */

#if (MAJOR_NR == 1)
/* ram disk */
#define DEVICE_NAME "ramdisk"
#define DEVICE_REQUEST do_rd_request
#define DEVICE_NR(device) ((device) & 7)
#define DEVICE_ON(device)
#define DEVICE_OFF(device)

#elif (MAJOR_NR == 2)
/* floppy */
#define DEVICE_NAME "floppy"
#define DEVICE_INTR do_floppy
#define DEVICE_REQUEST do_fd_request
#define DEVICE_NR(device) ((device) & 3)
#define DEVICE_ON(device) floppy_on(DEVICE_NR(device))
#define DEVICE_OFF(device) floppy_off(DEVICE_NR(device))

#elif (MAJOR_NR == 3)
/* harddisk */
#define DEVICE_NAME "harddisk"
#define DEVICE_INTR do_hd
#define DEVICE_REQUEST do_hd_request
#define DEVICE_NR(device) (MINOR(device)/5)
#define DEVICE_ON(device)
#define DEVICE_OFF(device)

#elif 1
/* unknown blk device */
#error "unknown blk device"

#endif

#define CURRENT (blk_dev[MAJOR_NR].current_request)
#define CURRENT_DEV DEVICE_NR(CURRENT->dev)

#ifdef DEVICE_INTR
void (*DEVICE_INTR)(void) = NULL;
#endif
static void (DEVICE_REQUEST)(void);

static inline void unlock_buffer(struct buffer_head * bh)
{
    if (!bh->b_lock)
        printk(DEVICE_NAME ": free buffer being unlocked\n");
    bh->b_lock=0;
    wake_up(&bh->b_wait);
}

static inline void end_request(int uptodate)
{
    DEVICE_OFF(CURRENT->dev);
    if (CURRENT->bh) {
        CURRENT->bh->b_uptodate = uptodate;
        unlock_buffer(CURRENT->bh);
    }
    if (!uptodate) {
        printk(DEVICE_NAME " I/O error\n\r");
        printk("dev %04x, block %d\n\r",CURRENT->dev,
            CURRENT->bh->b_blocknr);
    }
    wake_up(&CURRENT->waiting);
    wake_up(&wait_for_request);
    CURRENT->dev = -1;
    CURRENT = CURRENT->next;
}

#define INIT_REQUEST \
repeat: \
    if (!CURRENT) \
        return; \
    if (MAJOR(CURRENT->dev) != MAJOR_NR) \
        panic(DEVICE_NAME ": request list destroyed"); \
    if (CURRENT->bh) { \
        if (!CURRENT->bh->b_lock) \
            panic(DEVICE_NAME ": block not locked"); \
    }

#endif

#endif

This file defines the blk_dev_struct structure, which holds the device's request function (initialized in each device's init function; for the hard disk it is do_hd_request) and the head of its request list (NULL at the start, defined in ll_rw_blk.c); the blk_dev array has 7 entries. It defines the request table with 32 entries; dev = -1 means the slot is free. It also names the request functions of the three block devices. Clearly this file is meant to be included, and the macro MAJOR_NR, the major device number selecting which device is in use, must be defined before including it. CURRENT is the head of the request list, and CURRENT_DEV is the drive number (0 or 1). end_request is defined here too: after an interrupt finishes a request, it advances the list to the next entry, frees the finished request (dev = -1), and wakes up wait_for_request and friends to signal that a slot is available. Most importantly, it unlocks the buffer block and wakes up the processes waiting on it.

5. The hard-disk driver

In add_request, if the current head dev->current_request is empty, (dev->request_fn)() is called directly. This function is the key to starting the actual read or write; every device has its own request_fn. We take the hard disk as the example.

5.1 The public interface do_hd_request

do_hd_request is in kernel/blk_drv/hd.c (p145, line 294):

void do_hd_request(void)
{
    int i,r = 0;
    unsigned int block,dev;
    unsigned int sec,head,cyl;
    unsigned int nsect;

    INIT_REQUEST;
    dev = MINOR(CURRENT->dev);
    block = CURRENT->sector;
    if (dev >= 5*NR_HD || block+2 > hd[dev].nr_sects) {
        end_request(0);
        goto repeat;
    }
    block += hd[dev].start_sect;
    dev /= 5;
    __asm__("divl %4":"=a" (block),"=d" (sec):"0" (block),"1" (0),
        "r" (hd_info[dev].sect));
    __asm__("divl %4":"=a" (cyl),"=d" (head):"0" (block),"1" (0),
        "r" (hd_info[dev].head));
    sec++;
    nsect = CURRENT->nr_sectors;
    if (reset) {
        reset = 0;
        recalibrate = 1;
        reset_hd(CURRENT_DEV);
        return;
    }
    if (recalibrate) {
        recalibrate = 0;
        hd_out(dev,hd_info[CURRENT_DEV].sect,0,0,0,
            WIN_RESTORE,&recal_intr);
        return;
    }
    if (CURRENT->cmd == WRITE) {
        hd_out(dev,nsect,sec,head,cyl,WIN_WRITE,&write_intr);
        for(i=0 ; i<3000 && !(r=inb_p(HD_STATUS)&DRQ_STAT) ; i++)
            /* nothing */ ;
        if (!r) {
            bad_rw_intr();
            goto repeat;
        }
        port_write(HD_DATA,CURRENT->buffer,256);
    } else if (CURRENT->cmd == READ) {
        hd_out(dev,nsect,sec,head,cyl,WIN_READ,&read_intr);
    } else
        panic("unknown hd-command");
}

do_hd_request first checks whether the device's request list is empty, returning immediately if so. Otherwise it takes the minor device number of the head request, converts its partition-relative start sector into an absolute sector number (LBA), and then converts the absolute sector into sector, head, and cylinder numbers. For a write, it passes the drive (first or second disk), sector count, sector, head, cylinder, the write command, and the write-interrupt handler to hd_out, which programs the controller's registers; it then waits briefly for the controller to be ready and writes one sector of data to the disk. For a read, passing the parameters to hd_out is all that is needed.

5.2 Programming the disk registers: hd_out

This function writes the parameters into the disk controller's registers and sets the global interrupt handler do_hd, the function to be called on the next disk interrupt: for a read, do_hd = read_intr; for a write, do_hd = write_intr.

static void hd_out(unsigned int drive,unsigned int nsect,unsigned int sect,
        unsigned int head,unsigned int cyl,unsigned int cmd,
        void (*intr_addr)(void))
{
    register int port asm("dx");

    if (drive>1 || head>15)
        panic("Trying to write bad sector");
    if (!controller_ready())
        panic("HD controller not ready");
    do_hd = intr_addr;
    outb_p(hd_info[drive].ctl,HD_CMD);
    port=HD_DATA;
    outb_p(hd_info[drive].wpcom>>2,++port);
    outb_p(nsect,++port);
    outb_p(sect,++port);
    outb_p(cyl,++port);
    outb_p(cyl>>8,++port);
    outb_p(0xA0|(drive<<4)|head,++port);
    outb(cmd,++port);
}

5.3 The hard-disk interrupt handler

Why does a disk interrupt end up calling do_hd? The answer lies in how the interrupt handler is installed:

void hd_init(void)
{
    blk_dev[MAJOR_NR].request_fn = DEVICE_REQUEST;
    set_intr_gate(0x2E,&hd_interrupt);
    outb_p(inb_p(0x21)&0xfb,0x21);
    outb(inb_p(0xA1)&0xbf,0xA1);
}

Clearly, hd_interrupt is installed as the entry point for the hard-disk interrupt. It is in kernel/system_call.s (p89, line 221):

hd_interrupt:
    pushl %eax
    pushl %ecx
    pushl %edx
    push %ds
    push %es
    push %fs
    movl $0x10,%eax
    mov %ax,%ds
    mov %ax,%es
    movl $0x17,%eax
    mov %ax,%fs
    movb $0x20,%al
    outb %al,$0xA0      # EOI to interrupt controller #1
    jmp 1f              # give port chance to breathe
1:  jmp 1f
1:  xorl %edx,%edx
    xchgl do_hd,%edx
    testl %edx,%edx
    jne 1f
    movl $unexpected_hd_interrupt,%edx
1:  outb %al,$0x20
    call *%edx          # "interesting" way of handling intr.
    pop %fs
    pop %es
    pop %ds
    popl %edx
    popl %ecx
    popl %eax
    iret

This code mainly sends the end-of-interrupt command to the 8259A and then checks whether do_hd is set; if it is not NULL, do_hd is called (otherwise unexpected_hd_interrupt).
Now look at read_intr and write_intr:

static void read_intr(void)
{
    if (win_result()) {
        bad_rw_intr();
        do_hd_request();
        return;
    }
    port_read(HD_DATA,CURRENT->buffer,256);
    CURRENT->errors = 0;
    CURRENT->buffer += 512;
    CURRENT->sector++;
    if (--CURRENT->nr_sectors) {
        do_hd = &read_intr;
        return;
    }
    end_request(1);
    do_hd_request();
}

static void write_intr(void)
{
    if (win_result()) {
        bad_rw_intr();
        do_hd_request();
        return;
    }
    if (--CURRENT->nr_sectors) {
        CURRENT->sector++;
        CURRENT->buffer += 512;
        do_hd = &write_intr;
        port_write(HD_DATA,CURRENT->buffer,256);
        return;
    }
    end_request(1);
    do_hd_request();
}

Both functions decrement the remaining sector count, increment the request's start sector, and advance the buffer pointer by one sector (512 bytes). As long as sectors remain, the current request keeps being processed: a read moves data from the disk into the buffer with port_read, while a write moves the next sector from the buffer into the disk's internal buffer with port_write. No new command is sent to the registers, because the original command already asked for two sectors. Only when the current request is complete does end_request run and the next request begin, much like walking down a linked list.

6. Overall architecture

(figure: overall architecture diagram)
As the figure shows, the file system works against the buffer cache and copies data from there to the user data area, while the low-level driver reads disk data into the buffer cache. With the cache sitting between the two layers, I/O efficiency improves, but dirty blocks must still be written to disk, or the file system may be corrupted. Write-back happens in three ways: in getblk (via sync_dev), when the file system is unmounted, and through the sys_sync system call.
