Elements of SAN Capacity Planning

Mark Friedman
VP, Storage Technology
[email protected]
(941) 261-8945

DataCore Software Corporation
  • Founded 1998 - storage networking software
  • 170+ employees, private - over $45M raised
    • Top venture firms - NEA, OneLiberty
    • Funds - VanWagoner, Bank of America, etc.
    • Intel business and technical collaboration agreement
  • Executive team
    • Proven storage expertise
    • Proven software company experience
    • Operating systems, high availability, caching, networking
    • Enterprise-level support and training
  • Worldwide: Ft. Lauderdale HQ, Silicon Valley, Canada, France, Germany, U.K., Japan
Overview
  • How do we take what we know about storage processor performance and apply it to emerging SAN technology?
  • What is a SAN?
  • Planning for SANs:
    • SAN performance characteristics
    • Backup and replication performance
Evolution Of Disk Storage Subsystems

[Figure: the evolution from spindles, to strings & farms, to write-through cached subsystems, to cached disk storage processors]

See: Dr. Alexandre Brandwajn, "A study of cached RAID 5 I/O," CMG Proceedings, 1994.
What Is A SAN?
  • Storage Area Networks are designed to exploit Fibre Channel plumbing
  • Approaches to simplified networked storage:
    • SAN appliances
    • SAN Metadata Controllers (“out of band”)
    • SAN storage managers (“in band”)
The Difference Between NAS and SAN

[Figure: the TCP/IP protocol stack - Application (HTTP, RPC), Host-to-Host (TCP, UDP), Internet Protocol (IP), Media Access (Ethernet, FDDI) - with packets passed down through the layers]

  • Storage Area Networks (SANs), designed to exploit Fibre Channel plumbing, require a new infrastructure.
  • Network Attached Storage (NAS) devices plug into the existing networking infrastructure.
    • Networked file access protocols (NFS, SMB, CIFS)
    • TCP/IP stack
The Difference Between NAS and SAN

[Figure: the Windows networking stack - user-mode application interfaces (RPC, DCOM, Winsock, NetBIOS, Named Pipes) above the kernel-mode Redirector and Server, NetBT, TDI, TCP/UDP/IP (with ARP, ICMP, IGMP, IP filtering, IP forwarding, and the packet scheduler), and the NDIS wrapper, NDIS miniport, and NIC device driver]
  • NAS devices plug into existing TCP/IP networking support.
  • Performance considerations:
    • 1500 byte Ethernet MTU
    • TCP requires acknowledgement of each packet, limiting performance.
The Difference Between NAS and SAN
  • Performance considerations, e.g.:
    • 1.5 KB Ethernet MTU
      • Requires processing ~80,000 host interrupts/sec @ 1 Gb/sec (see the sketch below)
      • or jumbo frames, which also require installing a new infrastructure
    • Which is why Fibre Channel was designed the way it is!

Source: Alteon Computers, 1999.
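The interrupt-rate figure is simple arithmetic worth checking. A minimal Python sketch, assuming standard frame sizes (the 9000-byte jumbo size is a typical value, not from the slide; real NICs reduce the rate further with interrupt coalescing):

```python
# Back-of-the-envelope check of the interrupt-rate claim above.
# The slide's ~80,000/sec figure allows for framing overhead.

def frames_per_second(link_gbps: float, frame_bytes: int) -> float:
    """Frames (roughly, host interrupts) per second at full line rate."""
    return link_gbps * 1e9 / (frame_bytes * 8)

print(f"1500-byte MTU:   {frames_per_second(1.0, 1500):,.0f} frames/sec")  # ~83,000
print(f"9000-byte jumbo: {frames_per_second(1.0, 9000):,.0f} frames/sec")  # ~14,000
```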

Competing Network File System Protocols
  • Universal data sharing is developing ad hoc on top of de facto industry standards designed for network access:
    • Sun NFS
    • HTTP, FTP
    • Microsoft CIFS (and DFS)
      • also known as SMB
      • CIFS-compatible is the largest and fastest-growing category of data
CIFS Data Flow

[Figure: an SMB request flows from a client application (e.g., MS Word) through the redirector, the client's system cache, and its network interface, across the network to the file server's network interface, Server service, and system cache]

Session-oriented: e.g., call-backs

What About Performance?

[Figure: NFS data flow over the TCP/IP stack - the NFS client's user process issues a Remote Procedure Call (RPC) through its TCP/IP driver, across the TCP/IP network to the NFSD daemon on the NFS server, which returns the response data]
What About Performance?

[Figure: the same SMB request path - client redirector to file server - overlaid on the TCP/IP protocol stack]

  • Network-attached storage yields a fraction of the performance of direct-attached drives when the block size does not match the frame size.
  • See ftp://ftp.research.microsoft.com/pub/tr/tr-2000-55.pdf
What about modeling?
  • Add a network delay component to interconnect two Central Server models and iterate.
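As a hedged illustration of that iteration, here is a minimal Python sketch: two single-queue central-server approximations joined by a fixed network delay, iterated against the interactive response-time law. All service times and the think time are invented placeholders, not measurements from these slides:

```python
# Two central-server models joined by a network delay; iterate the closed-model
# fixed point (throughput depends on response time, and vice versa).

CLIENT_SERVICE = 0.0002   # 200 us per I/O in the client stack (assumed)
NETWORK_DELAY  = 0.0001   # 100 us fixed network latency (assumed)
SERVER_SERVICE = 0.0005   # 500 us per I/O at the storage server (assumed)

def queue_response(service: float, throughput: float) -> float:
    """Open-server approximation: R = S / (1 - utilization)."""
    u = min(throughput * service, 0.99)   # cap to avoid division blow-up
    return service / (1.0 - u)

def solve(n_clients: int, think_time: float = 0.01, iters: int = 50) -> float:
    """Iterate the interactive response-time law: X = N / (Z + R)."""
    response = CLIENT_SERVICE + NETWORK_DELAY + SERVER_SERVICE  # no-queueing start
    for _ in range(iters):
        throughput = n_clients / (think_time + response)
        response = (queue_response(CLIENT_SERVICE, throughput)
                    + NETWORK_DELAY
                    + queue_response(SERVER_SERVICE, throughput))
    return response

for n in (1, 8, 16):
    print(f"{n:3d} clients -> response {solve(n) * 1e6:7.1f} us")
```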
The Holy Grail!

Storage Area Networks

  • Uses low-latency, high-performance Fibre Channel switching technology (plumbing)
  • 100 MB/sec full-duplex serial protocol over copper or fiber
  • Extended distance using fiber
  • Three topologies:
    • Point-to-Point
    • Arbitrated Loop: 127 addresses, but can be bridged
    • Fabric: 16M addresses (24-bit)
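For reference, the address-space claims follow from the Fibre Channel addressing formats (8-bit arbitrated-loop physical addresses vs. 24-bit fabric port IDs):

```python
# FC address-space arithmetic behind the topology bullets above.
loop_addresses   = 127       # usable AL_PA values on one arbitrated loop
fabric_addresses = 2 ** 24   # 24-bit port IDs in a switched fabric

print(f"arbitrated loop: {loop_addresses} addresses")
print(f"fabric:          {fabric_addresses:,} addresses (~16 million)")
```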
The Holy Grail!

Storage Area Networks

  • FC delivers SCSI commands, but Fibre Channel exploitation requires new infrastructure and driver support

Objectives:

    • Extended addressing of shared storage pools
    • Dynamic, hot-pluggable interfaces
    • Redundancy, replication & failover
    • Security administration
    • Storage resource virtualization
Distributed Storage & Centralized Administration

Traditional tethered vs untethered SAN storage

  • Untethered storage can (hopefully) be pooled for centralized administration
  • Disk space pooling (virtualization)
    • Currently, using LUN virtualization
    • In the future, implementing dynamic virtual:real address mapping (e.g., the IBM Storage Tank)
  • Centralized back-up
    • SAN LAN-free backup
Storage Area Networks

The Fibre Channel layer stack:

  • FC-4: Upper-level protocol mappings (SCSI, IPI-3, HIPPI, IP)
  • FC-3: Common services
  • FC-2: Framing protocol / flow control
  • FC-1: 8B/10B encode/decode
  • FC-0: 100 MB/s physical layer

  • FC is packet-oriented (designed for routing).
  • FC pushes many networking functions into the hardware layer, e.g.:
    • Packet fragmentation
    • Routing
Storage Area Networks
  • FC is designed to work with optical fiber and lasers consistent with Gigabit Ethernet hardware
    • 100 MB/sec interfaces
    • 200 MB/sec interfaces
  • This creates a new class of hardware that you must budget for: FC hubs and switches.
Storage Area Networks

Performance characteristics of FC switches:

  • Extremely low latency (~1 μsec), except when cascaded switches require frame routing
  • Deliver dedicated 100 MB/sec point-to-point virtual circuit bandwidth
  • Measured 80 MB/sec effective data transfer rates per 100 MB/sec port
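A small sketch of what those figures mean for block transfer time (the 80 MB/sec effective rate and ~1 μsec switch latency are the slide's numbers; the block sizes are arbitrary):

```python
# Time to move one block through the fabric at the quoted effective rate.
SWITCH_LATENCY_S = 1e-6   # ~1 us per switch hop (slide figure)
EFFECTIVE_RATE_B = 80e6   # 80 MB/s effective per port (slide figure)

def block_time_us(block_bytes: int, hops: int = 1) -> float:
    """Microseconds to move one block across `hops` cascaded switches."""
    return (hops * SWITCH_LATENCY_S + block_bytes / EFFECTIVE_RATE_B) * 1e6

for kb in (4, 16, 64):
    print(f"{kb:3d} KB block: {block_time_us(kb * 1024):7.1f} us")
# Even one hop's ~1 us latency is tiny next to ~205 us of wire time for a
# 16 KB block, so bandwidth, not switch latency, dominates.
```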
[Figure: the Fibre Channel layer stack again, FC-0 through FC-4]
Storage Area Networks

When will IP and SCSI co-exist on the same network fabric?

  • iSCSI
  • Nishan
  • Others?
Storage Area Networks
  • FC zoning is used to control access to resources (security)
  • Two approaches to SAN management:
    • Management functions must migrate to the switch, storage processor, or….
    • OS must be extended to support FC topologies.
Approaches to building SANs
  • Fibre Channel-based Storage Area Networks (SANs)
    • SAN appliances
    • SAN Metadata Controllers
    • SAN Storage Managers
  • Architecture (and performance) considerations
Approaches to building SANs
  • Where does the logical device:physical device mapping run?
    • Out-of-band: on the client
    • In-band: inside the SAN appliance, transparent to the client
  • Many industry analysts have focused on this relatively unimportant distinction.
SAN appliances

Conventional storage processors with

  • Fibre Channel interfaces
  • Fibre Channel support
    • FC Fabric
    • Zoning
    • LUN virtualization
SAN Appliance Performance

[Figure: storage processor block diagram - host interfaces and FC interfaces at the front end, multiple processors, cache memory, and an internal bus feeding back-end FC disks]

Same as before, except with faster Fibre Channel interfaces:

  • Commodity processors, internal buses, disks, front-end and back-end interfaces
  • Proprietary storage processor architecture considerations
SAN appliances

SAN and NAS convergence?

  • Adding Fibre Channel interfaces and Fibre Channel support to a NAS box
  • SAN-NAS hybrids when SAN appliances are connected via TCP/IP.

Current Issues:

  • Managing multiple boxes
  • Proprietary management platforms
SAN Metadata Controller

[Figure: (1) a SAN client requests access, (2) the metadata controller returns a token, (3) the client accesses the pooled storage resources directly over Fibre Channel]

  • SAN clients acquire an access token from the Metadata Controller (out-of-band)
  • SAN clients then access disks directly using a proprietary distributed file system
SAN Metadata Controller
  • Performance considerations:
    • MDC latency (low access rate assumed)
    • Additional latency to map client file system request to the distributed file system
  • Other administrative considerations:
    • Requirement for client-side software is a burden!
SAN Storage Manager

[Figure: SAN clients connect over Fibre Channel to storage domain servers, which front the pooled storage resources]

Requires all access to pooled disks to go through the SAN Storage Manager (in-band)!
SAN Storage Manager
  • The SAN Storage Manager adds latency to every I/O request
  • How much latency is involved?
  • Can this latency be reduced using traditional disk caching strategies?

[Figure: the same in-band topology - SAN clients reaching pooled storage resources through the storage domain servers]
Architecture of a Storage Domain Server

[Figure: the SANsymphony Storage Domain Server software stack - client I/O enters via initiator/target emulation and FC adaptor polling threads, passes through security, fault tolerance, and the data cache to the disk driver, then down through the native W2K I/O manager, Diskperf (measurement), optional fault tolerance, the SCSI miniport driver, and the Fibre Channel HBA driver]
  • Runs on an ordinary Win2K Intel server
  • The SDS intercepts SAN I/O requests, impersonating a SCSI disk
  • Leverages:
    • Native Device drivers
    • Disk management
    • Security
    • Native CIFS support
Sizing the SAN Storage Manager server
  • In-band latency is a function of Intel server front-end bandwidth:
    • Processor speed
    • Number of processors
    • PCI bus bandwidth
    • Number of HBAs
  • and the performance of the back-end disk configuration
SAN Storage Manager

Can SAN Storage Manager in-band latency be reduced using traditional disk caching strategies?

  • Read hits
  • Read misses
    • Disk I/O + (2 * data transfer)
  • Fast Writes to cache (with mirrored caches)
    • 2 * data transfer
    • Write performance ultimately determined by the disk configuration
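A minimal Python sketch of this latency model. The formulas follow the bullets above (miss = disk I/O + two transfers, fast write = two transfers into mirrored caches); the disk service time and FC rate are illustrative assumptions, not slide data:

```python
FC_BYTES_PER_SEC = 100e6   # nominal 100 MB/s Fibre Channel payload rate
DISK_MS = 8.0              # assumed average back-end disk I/O time

def transfer_ms(block_bytes: int) -> float:
    return block_bytes / FC_BYTES_PER_SEC * 1e3

def read_hit_ms(block_bytes: int) -> float:
    return transfer_ms(block_bytes)                  # one hop, out of cache

def read_miss_ms(block_bytes: int) -> float:
    return DISK_MS + 2 * transfer_ms(block_bytes)    # disk + disk->SDS->host

def fast_write_ms(block_bytes: int) -> float:
    return 2 * transfer_ms(block_bytes)              # host->SDS + cache mirror

blk = 16 * 1024
print(f"read hit:   {read_hit_ms(blk):5.2f} ms")     # ~0.16 ms
print(f"read miss:  {read_miss_ms(blk):5.2f} ms")    # ~8.33 ms
print(f"fast write: {fast_write_ms(blk):5.2f} ms")   # ~0.33 ms
```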
SAN Storage Manager

[Figure: FC trace of a 16 KB read hit - a SCSI Read command (length = 4000), 16 x 1024-byte data frames (~140 μsec), and a status frame (~27 μsec)]

Read hits (16 KB block):

  • Timings from an FC hardware monitor
  • 1 Gbit/s interfaces
  • No bus arbitration delays!
Read vs. Write Hits (16 KB block)

[Chart: Fibre Channel latency for 16 KB blocks, broken down into SCSI command, write setup, data frames, and SCSI status phases]
Decomposing SAN In-band Latency

[Chart: the same latency breakdown - SCSI command, write setup, data frames, SCSI status]

How is time being spent inside the server?

  • PCI bus?
  • Host Bus adaptor?
  • Device polling?
  • Software stack?
Benchmark Configuration

[Figure: 4 x 550 MHz Xeon processors on a shared memory bus, with one 64-bit/33 MHz and two 32-bit/33 MHz PCI buses]

  • 4-way 550 MHz PC
    • Maximum of three FC interface polling threads
  • 3 PCI buses (528 MB/s total)
  • 1, 4, or 8 QLogic 2200 HBAs
Decomposing SAN In-band Latency

How is time being spent inside the SDS?

  • PCI bus?
  • Host bus adaptor?
  • Device polling:
    • 1 CPU is capable of 375,000 unproductive polls/sec
    • 2.66 μsec per poll
  • Software stack:
    • 3 CPUs are capable of fielding 40,000 read I/Os per second from cache
    • 73 μsec per 512-byte I/O
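Those per-operation costs follow directly from the measured rates; a quick sanity check (the rates are the slide's measurements; 3 CPUs / 40,000 I/Os works out to ~75 μsec, close to the quoted 73):

```python
POLLS_PER_SEC_PER_CPU = 375_000   # unproductive polls/sec, one CPU (slide)
CACHE_READ_IOPS = 40_000          # 512-byte reads/sec fielded by 3 CPUs (slide)

poll_cost_us = 1e6 / POLLS_PER_SEC_PER_CPU
io_cpu_us = 3 * 1e6 / CACHE_READ_IOPS

print(f"cost per poll:          {poll_cost_us:4.2f} usec")  # ~2.66
print(f"CPU cost per 512B read: {io_cpu_us:4.1f} usec")     # ~75
```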
Decomposing SAN In-band Latency

[Chart: SANsymphony in-band latency for 16 KB blocks, broken down into SDS, FC interface, and data transfer components]
Impact Of New Technologies

Front-end bandwidth:

    • Different-speed processors
    • Different numbers of processors
    • Faster PCI bus
    • Faster HBAs

e.g., next-generation server:

    • 2 GHz processors (4x benchmark system)
    • 200 MB/sec FC interfaces (2x benchmark system)
    • 4x 800 MB/s PCI buses (6x benchmark system)
  • ...
Impact Of New Technologies

[Chart: projected throughput for today's configuration, with 2 GHz CPUs & new HBAs, and with 2 GHz CPUs, new HBAs, and 2 Gbit switching]
Sizing the SAN Storage Manager

Scalability:

  • Processor speed
  • Number of processors
  • PCI bus bandwidth (peak = bus width × clock; see the sketch below):
    • 32-bit/33 MHz: 132 MB/sec
    • 64-bit/33 MHz: 267 MB/sec
    • 64-bit/66 MHz: 528 MB/sec
    • 64-bit/100 MHz: 800 MB/sec (PCI-X)
  • InfiniBand technology???
  • Number of HBAs
    • 200 MB/sec FC interfaces feature faster internal processors
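The PCI figures in the list are just bus width times clock rate; a one-line check (peak numbers only, since sustained throughput is lower in practice, as the SDS measurements show):

```python
def pci_peak_mb(width_bits: int, clock_mhz: float) -> float:
    """Peak PCI bandwidth in MB/s: bytes per transfer x transfers per second."""
    return width_bits / 8 * clock_mhz

for width, clock in [(32, 33.33), (64, 33.33), (64, 66.67), (64, 100.0)]:
    print(f"{width}-bit/{clock:.0f} MHz: {pci_peak_mb(width, clock):5.0f} MB/s")
# -> 133, 267, 533, 800 (the slide's 132/528 use an even 33/66 MHz clock)
```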
Sizing the SAN Storage Manager

Entry level system:

  • Dual Processor, single PCI bus, 1 GB RAM

Mid-level departmental system:

  • Dual Processor, dual PCI bus, 2 GB RAM

Enterprise-class system:

  • Quad Processor, triple PCI bus, 4 GB RAM
SAN Storage Manager PC Scalability

[Chart: measured scalability of the entry-level, departmental SAN, and enterprise-class configurations]
SANsymphony Performance

Conclusions

  • FC switches provide virtually unlimited bandwidth with exceptionally low latency, so long as you do not cascade switches.
  • General-purpose Intel PCs are a great source of inexpensive MIPS.
  • In-band SAN management is not a CPU-bound process.
  • PCI bandwidth is the most significant bottleneck in the Intel architecture.
  • FC interface card speeds and feeds are also very significant.
SAN Storage Manager – Next Steps
  • Cacheability of Unix and NT workloads
    • Domino, MS Exchange
    • Oracle, SQL Server, Apache, IIS
  • Given mirrored writes, what is the effect of different physical disk configurations?
    • JBOD
    • RAID 0 disk striping
    • RAID 5 write penalty
  • Asynchronous disk mirroring over long distances
  • Backup and Replication (snapshot)