Summary: | Ph.D. === National Sun Yat-sen University === Graduate Institute of Computer Science and Engineering === 93 === The IT community has increasingly come to view storage as a resource that should be shared among computer systems and managed independently of the systems it serves. In addition, the explosive growth of Web content has drawn increasing attention to two major challenges for network file systems: scalability and high availability. Therefore, improving system reliability and availability, achieving the expected reduction in operational expenses, and simplifying system management have become essential issues. A basic technique for improving the reliability of a file system is to mask the effects of failures through replication, and consistency control protocols are implemented to ensure consistency among the replicas.
In this dissertation, we leverage the concept of an intermediate file handle to hide the heterogeneity of the underlying file systems. However, a monolithic server system suffers from poor utilization because it neither checks dependences between writes nor manages out-of-order requests. Hence, building on the intermediate file handle, we propose an efficient data consistency control scheme that eliminates unnecessary waits for independent NFS writes and thereby improves the efficiency of the file server group. In addition, we propose a simple load-sharing mechanism for NFS clients to improve system throughput and the utilization of the replicas (a brief sketch of each idea follows this summary). Finally, experimental results demonstrate the efficiency of the proposed consistency control mechanism and load-sharing policy. Above all, ease of implementation is our main design consideration.
|
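A minimal sketch of the kind of dependence check the proposed consistency control scheme relies on, under the assumption that two NFS WRITE requests are independent exactly when they target different file handles or non-overlapping byte ranges; the names WriteRequest, conflicts, and can_dispatch are illustrative, not taken from the dissertation.

from dataclasses import dataclass

@dataclass
class WriteRequest:
    file_handle: bytes   # intermediate file handle presented to the clients
    offset: int          # starting byte offset of the write
    length: int          # number of bytes to write

def conflicts(a: WriteRequest, b: WriteRequest) -> bool:
    # Two writes depend on each other only when they touch the same file
    # handle and their byte ranges overlap.
    if a.file_handle != b.file_handle:
        return False
    return a.offset < b.offset + b.length and b.offset < a.offset + a.length

def can_dispatch(req: WriteRequest, in_flight: list) -> bool:
    # A write may be forwarded to the replica group immediately when it is
    # independent of every outstanding write; dependent writes wait so that
    # they are applied in the same order on every replica.
    return all(not conflicts(req, earlier) for earlier in in_flight)

Independent writes therefore never block one another, which is the source of the efficiency gain over a server that serializes all incoming writes.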
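The load-sharing mechanism is described only as simple; the round-robin selector below is one plausible reading, assuming reads may be served by any replica while writes still pass through the consistency control scheme. The names ReplicaSelector and pick_server, and the host names, are illustrative assumptions.

import itertools

class ReplicaSelector:
    # Spreads client read requests over the replicated NFS servers in
    # round-robin order so that no single replica becomes a hot spot.
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick_server(self):
        return next(self._cycle)

# Example: successive reads are directed to nfs1, nfs2, nfs3, nfs1, ...
selector = ReplicaSelector(["nfs1.example.edu", "nfs2.example.edu", "nfs3.example.edu"])
target = selector.pick_server()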