Mirror File System: A Multiple Server File System

John Wong, CTO
[email protected]
Twin Peaks Software Inc.


Multiple Server File System

  • Conventional File Systems – UFS, EXT3, and NFS

    • Manage and store files on a single server and its storage devices

  • Multiple Server File System

    • Manage and store files on multiple servers and their storage devices


Problems

  • A single resource is vulnerable

  • Redundancy provides a safety net

    • Disk level => RAID

    • Storage level => storage replication

    • TCP/IP level => SNDR

    • File system level => CFS, MFS

    • System level => clustering systems

    • Application level => database replication


Why MFS?

  • Many advantages over existing technologies


Unix/Linux File System

[Diagram: applications in user space issue system calls into UFS/EXT3 in kernel space; the disk driver below it reads and writes the data on local storage.]


Network File System

[Diagram: applications on the client access files through an NFS client mount; requests travel over the network to the server's NFSD, which stores the data through UFS/EXT3.]


UFS | NFS

[Diagram: applications use UFS/EXT3 for data on the local server, and an NFS client mount, served by the remote server's NFSD and UFS/EXT3, for data on the remote server (Data B).]


UFS + NFS

  • UFS manages data on the local server's storage devices

  • NFS manages data on a remote server's storage devices

  • Combining the two file systems manages data on both local and remote servers' storage devices


MFS = UFS + NFS

[Diagram: on the active MFS server, applications call MFS, which stacks on UFS/EXT3 for the local copy of the data and on NFS for the remote copy; the passive MFS server stores the mirrored data through its own UFS/EXT3.]


Building Block Approach

  • MFS is a kernel loadable module

  • MFS is loaded on top of UFS and NFS

  • Standard VFS interface

  • No change to UFS or NFS (the layering is sketched below)
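
To make the layering concrete, here is a minimal user-space sketch of the building-block idea (assumed, simplified names; this is not MFS source). The point is only that MFS exports the same VFS-style operations table as any other file system and delegates each call to the unmodified lower layers:

    /* Hypothetical stand-in for a vnode/VFS operations table. */
    #include <stdio.h>

    struct fsops {
        const char *name;
        int (*write)(const char *path, const char *buf);
    };

    /* The lower file systems are used as-is; no changes needed. */
    static int ufs_write(const char *p, const char *b) { (void)b; printf("UFS write %s\n", p); return 0; }
    static int nfs_write(const char *p, const char *b) { (void)b; printf("NFS write %s\n", p); return 0; }

    static struct fsops ufs = { "ufs", ufs_write };   /* local copy  */
    static struct fsops nfs = { "nfs", nfs_write };   /* remote copy */

    /* MFS implements the same interface and fans the call out to both. */
    static int mfs_write(const char *p, const char *b)
    {
        int r = ufs.write(p, b);
        int s = nfs.write(p, b);
        return r ? r : s;
    }

    int main(void)
    {
        return mfs_write("/udir1/udir2/file", "data");
    }

Because the stacking happens behind the standard interface, neither the applications above nor the file systems below need to change.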


File System Framework

[Diagram: file-operation system calls (open, read, write, close, creat, lseek, ioctl, link, mkdir, rmdir) and other system calls (mount, umount, sync, statfs) enter the kernel through the VFS and vnode interfaces, which dispatch to the installed file systems (UFS, NFS, VxFS, QFS, PCFS, HSFS) and on to their disk, network, or optical-drive storage.]

Source: Solaris Internals: Core Kernel Architecture, Jim Mauro and Richard McDougall, Prentice Hall


MFS Framework

[Diagram: the same framework with MFS inserted at the vnode/VFS interface; system calls reach MFS first, and MFS forwards each operation through the vnode/VFS interface to the underlying file systems (UFS, NFS, VxFS, QFS, PCFS, HSFS) and their storage.]


Transparency

  • Transparent to users and applications

    • No re-compilation or re-linking needed

  • Transparent to existing file structures

    • Same pathname access

  • Transparent to underlying file systems

    • UFS, NFS


Mount Mechanism

  • Conventional mount

    • One directory, one file system

  • MFS mount

    • One directory, two or more file systems


Mount Mechanism

  • # mount -F mfs host:/ndir1/ndir2 /udir1/udir2

    • First, the NFS file system is mounted on a UFS directory

    • Then MFS is mounted on top of both UFS and NFS

    • The existing UFS tree structure /udir1/udir2 becomes the local copy of MFS

    • The newly mounted host:/ndir1/ndir2 becomes the remote copy of MFS

    • Same mount options as NFS, except no '-o hard' option


MFS mfsck Command

  • # /usr/lib/fs/mfs/mfsck mfs_dir

    • After the MFS mount succeeds, the local copy may not be identical to the remote copy.

    • Use mfsck (the MFS fsck) to synchronize the two copies.

    • mfs_dir can be any directory under the MFS mount point.

    • Multiple mfsck commands can be invoked at the same time.


READ/WRITE Vnode Operations

  • All VFS/vnode operations are received first by MFS

  • READ-related operations (read, getattr, ...) go only to the local copy (UFS)

  • WRITE-related operations (write, setattr, ...) go to both the local (UFS) and remote (NFS) copies simultaneously, using threads (see the sketch below)
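
The dispatch rule can be sketched in user-space C with POSIX threads (hypothetical names; the real MFS does this inside the kernel on vnode operations):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    struct write_job {
        const char *copy;    /* "local(UFS)" or "remote(NFS)" */
        const char *path;
        const char *data;
    };

    /* In MFS this would invoke the lower file system's write operation. */
    static void *do_write(void *arg)
    {
        struct write_job *j = arg;
        printf("%s: wrote %zu bytes to %s\n", j->copy, strlen(j->data), j->path);
        return NULL;
    }

    /* WRITE-class ops: one thread per copy, so both update simultaneously. */
    static int mfs_write(const char *path, const char *data)
    {
        pthread_t t1, t2;
        struct write_job local  = { "local(UFS)",  path, data };
        struct write_job remote = { "remote(NFS)", path, data };
        pthread_create(&t1, NULL, do_write, &local);
        pthread_create(&t2, NULL, do_write, &remote);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

    /* READ-class ops (read, getattr, ...) touch only the local UFS copy. */
    static int mfs_read(const char *path)
    {
        printf("local(UFS): read %s\n", path);
        return 0;
    }

    int main(void)
    {
        mfs_write("/udir1/udir2/file", "hello");
        return mfs_read("/udir1/udir2/file");
    }

Reads stay fast because they never cross the network; only writes pay the mirroring cost.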


Mirroring Granularity

  • Directory level

    • Mirror any UFS directory instead of the entire UFS file system

    • Directory A mirrored to Server A

    • Directory B mirrored to Server B

  • Block-level update

    • Only changed blocks are mirrored


MFS msync Command

  • # /usr/lib/fs/mfs/msync mfs_root_dir

    • A daemon that synchronizes an MFS pair after the remote MFS partner fails.

    • Upon a write failure, MFS:

      - Logs the name of the file whose write operation failed

      - Starts a heartbeat thread to detect when the remote MFS server is back online

    • Once the remote server is back online, msync uses the log to copy the missing files to it (sketched below).
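
A conceptual sketch of that recovery flow (hypothetical helper names; the real msync daemon and its log format are not shown in the slides):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static FILE *faillog;    /* names of files whose remote write failed */

    /* Called by the write path when the remote copy cannot be updated. */
    static void log_failed_write(const char *path)
    {
        fprintf(faillog, "%s\n", path);
        fflush(faillog);
    }

    /* Stand-ins for a real heartbeat probe and file copy. */
    static int  remote_is_online(void)        { return 1; }
    static void copy_to_remote(const char *p) { printf("resync %s", p); }

    /* Heartbeat thread: wait for the partner, then replay the log. */
    static void *heartbeat(void *arg)
    {
        (void)arg;
        while (!remote_is_online())
            sleep(5);
        char path[4096];
        rewind(faillog);
        while (fgets(path, sizeof path, faillog))
            copy_to_remote(path);
        return NULL;
    }

    int main(void)
    {
        faillog = tmpfile();
        log_failed_write("/udir1/udir2/file");
        pthread_t t;
        pthread_create(&t, NULL, heartbeat, NULL);
        pthread_join(t, NULL);
        return 0;
    }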


Active/Active Configuration

[Diagram: two active MFS servers, each running applications over MFS with UFS for its local data and NFS to its partner; each server's data set (Data A, Data B) is mirrored to the other.]


MFS Locking Mechanism

  • MFS uses the UFS and NFS file record locks.

  • Locking is required for the active/active configuration.

  • Locking makes write-related vnode operations atomic (see the record-lock sketch below).

  • Locking is enabled by default.

  • Locking is not necessary in the active/passive configuration.
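
The record-lock idea can be illustrated with standard POSIX fcntl() locks, the kind of file record lock UFS and NFS provide (a simplified single-file sketch, not MFS kernel code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Lock the byte range, write it, then unlock: with both MFS servers
     * honoring the same lock, the write-class operation behaves atomically. */
    static int write_locked(int fd, off_t off, const void *buf, size_t len)
    {
        struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = off,    .l_len = (off_t)len };
        if (fcntl(fd, F_SETLKW, &lk) == -1)   /* block until the range is ours */
            return -1;
        ssize_t n = pwrite(fd, buf, len, off);
        lk.l_type = F_UNLCK;                  /* release the record lock */
        fcntl(fd, F_SETLK, &lk);
        return n == (ssize_t)len ? 0 : -1;
    }

    int main(void)
    {
        int fd = open("/tmp/mfs_lock_demo", O_RDWR | O_CREAT, 0644);
        if (fd == -1) { perror("open"); return 1; }
        int rc = write_locked(fd, 0, "hello", 5);
        close(fd);
        return rc;
    }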


Real-Time and Scheduled

  • Real-time

    • Replicates files in real time

  • Scheduled

    • Logs the file path, offset, and size of each write

    • Replicates only the changed portion of a file (see the change-record sketch below)
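
A sketch of what a scheduled-mode change record could look like (the actual log format is not documented in the slides; these fields follow the bullet above):

    #include <stdio.h>
    #include <sys/types.h>

    /* One record per write: enough to replay just the changed bytes. */
    struct change_rec {
        char   path[256];   /* file that was written        */
        off_t  offset;      /* where the write started      */
        size_t size;        /* how many bytes were changed  */
    };

    int main(void)
    {
        struct change_rec r = { "/udir1/udir2/file", 4096, 512 };
        printf("replicate %zu bytes at offset %lld of %s\n",
               r.size, (long long)r.offset, r.path);
        return 0;
    }

At the scheduled time, only the logged byte ranges need to be read from the local copy and written to the remote copy, which is far cheaper than re-sending whole files.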


Applications

  • Online File Backup

  • Server File Backup (active/passive)

  • Server/NAS Clustering (active/active)


MFS = NTFS + CIFS

[Diagram: on a Windows desktop/laptop, applications call MFS, which stacks on NTFS for the local copy and on CIFS to a remote server, where NTFS stores the mirrored data.]


Online File Backup: Real-Time or Scheduled

[Diagram: folders on a user desktop/laptop are mirrored by MFS over a LAN or WAN to folders on an ISP server.]


Server Replication

[Diagram: a primary server (applications, email) and a secondary server, each running Mirror File System and connected by a heartbeat; mirroring paths: /home and /var/spool/mail.]


Enterprise Clusters

[Diagram: multiple application servers, each running Mirror File System, mirror along a central mirroring path to a central server that also runs Mirror File System.]


Advantages

  • Building-block approach

    • Builds on existing UFS, EXT3, NFS, and CIFS infrastructures

  • No metadata is replicated

    • The superblock, cylinder groups, and file allocation maps are not replicated

  • Every file write operation is checked by the file system

    • File consistency and integrity

  • Live-file replication, not raw-data replication

    • The primary and backup copies are both live files


Advantages

  • Interoperability

    • The two nodes can be different systems

    • The storage systems can be different

  • Small granularity

    • Directory level, not the entire file system

  • One-to-many or many-to-one replication


Advantages

  • Fast replication

    • Replication runs in the kernel file system module

  • Immediate failover

    • No fsck or mount operation is needed

  • Geographically dispersed clustering

    • The two nodes can be separated by hundreds of miles

  • Easy to deploy and manage

    • Only one copy of MFS, running on the primary server, is needed for replication


Why MFS?

  • Better Data Protection

  • Better Disaster Recovery

  • Better RAS (reliability, availability, serviceability)

  • Better Scalability

  • Better Performance

  • Better Resource Utilization


Q & A

[Diagram: two MFS servers, each running applications, mirroring Data A and Data B to each other.]

