WSV302

iSCSI and Windows Server: Getting Best Performance, High Availability, and Better Virtualization

Greg Shields, MVP

Senior Partner and Principal Technologist

Concentrated Technology
www.ConcentratedTech.com

To Begin, A Poll…
  • What’s the best SAN for business today?
    • Fibre Channel?
    • iSCSI?
    • Fibre Channel over Ethernet?
    • Infiniband?
    • An-array-of-USB-sticks-all-linked-together?
  • Studies suggest the answer to this question doesn’t matter…
The Storage War is Over & Everybody Won
  • An EMC Survey from 2009 found that…
    • The SAN medium organizations select does not appear to be based on their virtualization platform.
    • While this study was virtualization-related, it does suggest one thing…
    • You’re probably stuck with what you’ve got.

Source: http://www.emc.com/collateral/analyst-reports/2009-forrester-storage-choices-virtual-server.pdf

iSCSI, the Protocol. iSCSI, the Cabling.
  • iSCSI’s Biggest Detractors
    • Potential for oversubscription
    • Lower performance for some workloads
    • TCP/IP security concerns
      • E.g., you just can’t hack a strand of light that easily…
  • iSCSI’s Biggest Benefits
    • Reduced administrative complexity
    • Existing in-house experience
    • (Potentially) lower cost
    • Existing cabling investment and infrastructure
Network Accelerations in Server 2008 & R2
  • TCP Chimney Offload
    • Transfers TCP/IP protocol processing from the CPU to the network adapter.
    • First available in Server 2008 RTM; R2 adds an automatic mode and new PerfMon counters.
    • Often an extra licensable feature in hardware, with accompanying cost.
  • Virtual Machine Queue
    • Distributes received frames into different queues based on the target VM, so different CPUs can process them.
    • Hardware packet filtering reduces the overhead of routing packets to VMs.
    • VMQ must be supported by the network hardware. Typically Intel NICs & processors only.
  • Receive Side Scaling
    • Distributes load from network adapters across multiple CPUs.
    • First available in Server 2008 RTM; R2 improves initialization and CPU selection at startup, adds registry keys for tuning performance, and new PerfMon counters.
    • Most server-class NICs include support.
  • NetDMA
    • Offloads the network subsystem memory copy operation to a dedicated DMA engine.
    • First available in Server 2008 RTM; R2 adds no new capabilities.

Acceleration features were available in Server 2003’s Scalable Networking Pack.

Server 2008 & R2 now include these in the OS.

However, ensure your NICs support them!
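These global acceleration settings can be inspected and toggled from an elevated command prompt; a minimal sketch (whether an offload actually engages still depends on NIC and driver support):

```shell
:: Show current global TCP settings, including Chimney Offload, RSS, and NetDMA state
netsh int tcp show global

:: Enable TCP Chimney Offload (on R2 you can also use chimney=automatic)
netsh int tcp set global chimney=enabled

:: Enable Receive Side Scaling
netsh int tcp set global rss=enabled

:: Enable NetDMA
netsh int tcp set global netdma=enabled
```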

Getting Better Performance & Availability
  • Big Mistake #1: Assuming NIC Teaming = iSCSI Teaming
    • NIC Teaming is common in production networks.
    • It leverages a proprietary driver from the NIC manufacturer.
    • However, iSCSI teaming requires MPIO or MCS.
    • These are protocol-driven, not driver-driven.
Getting Better Performance & Availability
  • MCS = Multiple Connections per Session
    • Operates at the iSCSI Initiator level.
    • Part of the iSCSI protocol itself.
    • Enables multiple, parallel connections to the target.
    • Does not require special multipathing technology from the manufacturer.
    • Does require storage device support.
Getting Better Performance & Availability
  • MCS = Multiple Connections per Session
    • Configured per-session and applies to all LUNs exposed to that session.
    • Individual sessions are given policies.
      • Fail Over Only
      • Round Robin
      • Round Robin with a subset of paths
      • Least Queue Depth
      • Weighted Paths
Demo: Multiple Connections per Session

Getting Better Performance & Availability
  • MPIO = Multipath Input/Output
    • Same functional result as MCS, but with a different approach.
      • Manufacturers create MPIO-enabled drivers.
      • Drivers include a Device Specific Module that orchestrates requests across paths.
      • A single DSM can support multiple transport protocols (such as Fibre Channel & iSCSI).
      • You must install and manage DSM drivers from your manufacturer.
      • Windows includes a native DSM, not always supported by storage.
Getting Better Performance & Availability
  • MPIO = Multipath Input/Output
    • MPIO policies are applied to individual LUNs. Each LUN gets its own policy.
      • Fail Over Only
      • Round Robin
      • Round Robin with a subset of paths
      • Least Queue Depth
      • Weighted Paths
      • Least Blocks
    • Not all storage supports every policy!
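On Server 2008 R2, the built-in mpclaim tool can tell the Microsoft DSM to claim iSCSI-attached disks; a hedged sketch (the MPIO feature must be installed first, and your array vendor's own DSM may be required instead of the native one):

```shell
:: Install the MPIO feature (Server 2008 R2)
dism /online /enable-feature /featurename:MultipathIo

:: Have the Microsoft DSM claim all iSCSI-attached disks (-r reboots if required)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: Afterward, list MPIO-managed disks and their load-balance policies
mpclaim -s -d
```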
Demo: Multipath I/O

Which Option to Choose?
  • Many storage devices do not support the use of MCS.
    • In these cases, your only option is to use MPIO.
  • Use MPIO if you need to support different load balancing policies on a per-LUN basis.
    • This is suggested because MCS can only define policies on a per-session basis.
    • MPIO can define policies on a per-LUN basis.
  • Hardware iSCSI HBAs tend to support MPIO over MCS.
    • Not that many of us use hardware iSCSI HBAs…
    • But if you are, you’ll probably be running MPIO.
  • MPIO is not available on Windows XP, Windows Vista, or Windows 7.
    • If you need to create iSCSI direct connections to virtual machines, you must use MCS.
  • MCS tends to have marginally better performance than MPIO.
    • However, it can require more processing power. Offloads reduce this impact.
    • This can have a negative impact in high-utilization environments.
    • For this reason, MPIO may be a better selection for these environments.
Better Hyper-V Virtualization
  • iSCSI for Hyper-V best practices suggest using network aggregation and segregation.
    • Aggregation of networks for increased throughput and failover.
    • Segregation of networks to prevent oversubscription.
Hyper-V Cluster, Minimal Redundancy

Note the separate management connection for segregation of security domains and/or Live Migration traffic.

Hyper-V Cluster, Maximum Redundancy

10Gig-E and VLANs significantly reduce physical complexity.

Hyper-V iSCSI Disk Options
  • Option #1: Fixed VHDs
    • Server 2008 RTM: ~96% of native
    • Server 2008 R2: Equal to Native
  • Option #2: Pass Through Disks
    • Server 2008 RTM: Equal to Native
    • Server 2008 R2: Equal to Native
  • Option #3: Dynamic VHDs
    • Server 2008 RTM: Not a great idea
    • Server 2008 R2: ~85%-94% of native
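On Server 2008 R2 a fixed VHD can be created from the command line with a diskpart script; a minimal sketch (the file path and 20 GB size are placeholders):

```shell
rem createvhd.txt -- run with: diskpart /s createvhd.txt
rem Creates a 20 GB fixed-size (pre-allocated) VHD
create vdisk file="D:\VMs\data.vhd" maximum=20480 type=fixed
```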
Which to Use?
  • VHDs are believed to be the most commonly used option.
    • Particularly in the case of system drives.
  • Choose pass-through disks not necessarily for performance, but for VM workload requirements:
    • Backup and recovery
    • Extremely large volumes
    • Support for storage management software
    • App-compat requirements for unfiltered SCSI
Hyper-V iSCSI Option #4
  • iSCSI Direct
    • Essentially, connect a VM directly to an iSCSI target.
    • Hyper-V host does not participate in connection.
    • VM LUN not visible to Hyper-V host.
    • VM LUNs can be hot added/removed without requiring reboot.
    • Transparent support for VSS hardware provider.
    • Enables guest clustering.
  • Potential concern…
    • There is virtually no degradation in performance.
    • However, some NIC accelerations are not pulled into the VM.
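From inside the guest, an iSCSI direct connection uses the standard software initiator just as on a physical server; a sketch using the built-in iscsicli tool (the portal IP and IQN are placeholders):

```shell
:: Register the target portal with the guest's iSCSI initiator
iscsicli QAddTargetPortal 192.168.10.50

:: Discover the targets exposed by that portal
iscsicli ListTargets

:: Log in to a discovered target
iscsicli QLoginTarget iqn.1991-05.com.microsoft:target1
```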
Demartek Test Lab – Hyper-V
  • Comparison of 10Gb iSCSI performance
    • Native server vs. Hyper-V guest, iSCSI direct
    • Same iSCSI target & LUNs (Windows iSCSI Storage Target)
    • Exchange Jetstress 2010: mailboxes=1500, size=750MB, Exchange IOPS=0.18, Threads=2
Demartek Test Lab – 10Gb iSCSI Performance
  • Perfmon trace of single-host Exchange Jetstress to fast Windows iSCSI storage target consuming 37% of 10Gb pipe
Demartek Test Lab – Jumbo Frames
  • Jumbo Frames allow larger packet sizes to be transmitted and received
  • Jumbo Frames testing has yielded variable results
    • All adapters, switches and storage targets must agree on size of jumbo frame
    • Some storage targets do not fully support jumbo frames or have not tuned their systems for jumbo frames – check with your supplier
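One quick way to verify that jumbo frames work end-to-end is a don't-fragment ping; a sketch (the target IP is a placeholder, and 8972 is a 9000-byte MTU minus 28 bytes of IP/ICMP headers):

```shell
:: Succeeds only if every hop to the iSCSI target passes ~9000-byte frames unfragmented
ping -f -l 8972 192.168.10.50
```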
Demartek Test Lab – 1Gb vs. 10Gb iSCSI
  • 10GbE adoption is increasing
    • Server Virtualization is a big driver
      • Not too difficult for one host to consume a single 1GbE pipe
      • Difficult for one host to consume a single 10GbE pipe
    • SSD adoption in storage targets increases performance of the storage and can put higher loads on the network
    • Big server vendors are beginning to offer 10GbE on server motherboards
Demartek Test Lab – iSCSI
  • Demartek Lab video of a ten-year-old girl deploying iSCSI on Windows 7: www.youtube.com/Demartek
  • Demartek iSCSI Zone: www.demartek.com/iSCSI
    • Includes more test results
    • The Demartek iSCSI Deployment Guide 2011 will be published this month
Final Thoughts
  • Server 2008 R2 adds significant performance improvements to iSCSI storage.
    • Hardware accelerations and MPIO improvements
    • Hyper-V enhancements
  • Configuring iSCSI is easy, if you…
    • Keep network aggregation and segregation in mind.
    • Avoid the most common mistakes.
    • Get on 10Gig-E as soon as you can!


Track Resources

Don’t forget to visit the Cloud Power area within the TLC (Blue Section) to see product demos and speak with experts about the Server & Cloud Platform solutions that help drive your business forward.

You can also find the latest information about our products at the following links:

  • Cloud Power - http://www.microsoft.com/cloud/
  • Private Cloud - http://www.microsoft.com/privatecloud/
  • Windows Server - http://www.microsoft.com/windowsserver/
  • Windows Azure - http://www.microsoft.com/windowsazure/
  • Microsoft System Center - http://www.microsoft.com/systemcenter/
  • Microsoft Forefront - http://www.microsoft.com/forefront/
Resources
  • Connect. Share. Discuss. - http://northamerica.msteched.com
  • Learning
    • Sessions On-Demand & Community - www.microsoft.com/teched
    • Microsoft Certification & Training Resources - www.microsoft.com/learning
  • Resources for IT Professionals - http://microsoft.com/technet
  • Resources for Developers - http://microsoft.com/msdn