
CFS Update SE Interlock 2006


Presentation Transcript


  1. CFS Update – SE Interlock 2006 – Oscar Wahlberg

  2. Agenda • Dedicated PM/TPM • What is new in 5.0 • Multiple Transaction Servers • Nested Mounts • Improvements to CVM and DMP • Use cases

  3. What’s new in 5.0 • New CFS architecture – Multiple Transaction Servers • Nested mounts • CVM and DMP changes that directly address customer pain-points • 32-node support • Infiniband support – first release with 5.0 MP1

  4. Current limitations in 4.x • File system transactions are performed primarily by one master node per file system • That node becomes the bottleneck in transaction-intensive workloads • The number of transactions performed locally has improved in both 4.0 and 4.1, but the master is still the bottleneck • Acceptable performance for some workloads, not for others – hard to qualify sales opportunities [Diagram: one CFS master node handles metadata and data; the remaining nodes handle data only]
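
In 4.x the per-file-system master can be inspected and moved by hand, which is exactly the manual balancing that MTS later removes. A minimal sketch using the stock CFS cluster administration command; the mount point is a made-up example:

    # Show which node is currently the transaction master (primary)
    # for this cluster-mounted file system
    fsclustadm -v showprimary /cfs1

    # Move the primary role to the local node – the per-file-system
    # balancing act 4.x deployments had to perform by hand
    fsclustadm -v setprimary /cfs1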

  5. Multiple Transaction Servers • All nodes in the cluster can perform transactions – no need to manually balance the load • Not directly visible to end users • Administrative commands can be executed from all nodes • Support for additional mount options: delaylog, tmplog • Initial performance tests indicate linear scalability • Scale-out capabilities have increased dramatically with MTS and 32-node support • Gotchas • MTS requires file system layout version 6 • QuickLog is no longer supported – use MVS [Diagram: with MTS, every node handles both metadata and data]
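
A hedged sketch of how the layout-6 requirement and the new mount options surface at the command line (Linux-style syntax; the disk group, volume, and mount point names are illustrative):

    # Check the current disk layout version – MTS needs version 6
    vxupgrade /cfs1

    # Upgrade the layout in place (run on the CFS primary; upgrades
    # step one version at a time on older layouts)
    vxupgrade -n 6 /cfs1

    # Cluster-mount with one of the newly supported logging modes
    mount -t vxfs -o cluster,delaylog /dev/vx/dsk/mydg/vol1 /cfs1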

  6. Customer Access Program - MTS • Exact dates are not decided yet - expect Q1 • Good candidates: • Current CFS customers that have, or have had, performance problems • Customers with good knowledge of SF and VCS • Contact PM directly – Oscar Wahlberg

  7. Nested Mounts • Requested by customers since v3.5 • Enables mounting one cluster file system inside another: /cfs1 (first file system), /cfs1/second (second file system) • Only available for VxFS/CFS
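
A small sketch of the nested layout above using the SFCFS cluster-mount commands; the disk group and volume names are assumptions:

    # First file system at /cfs1, cluster-mounted on all nodes
    cfsmntadm add mydg vol1 /cfs1 all=
    cfsmount /cfs1

    # Second file system nested inside the first
    cfsmntadm add mydg vol2 /cfs1/second all=
    cfsmount /cfs1/second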

  8. Improvements to CVM and DMP • CVM improvements • Change from a serialized to parallel protocol for cluster reconfiguration • Significantly reduces cluster startup and reconfiguration time • Fewer (no?) node timeouts/failed node starts • Enhancements to I/O fencing • DMP updates include • Polling and better utilization of HBA information • Completely redesigned way of handling failover/failback and intermittent errors
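
Several of these changes can be observed with the standard DMP and fencing utilities. A hedged sketch; the controller and enclosure names are placeholders, and the recoveryoption tunable is the 5.0-era mechanism for intermittent-error handling as I understand it:

    # Paths and their states as seen through one HBA controller
    vxdmpadm getsubpaths ctlr=c2

    # Enclosure/array view that drives failover/failback decisions
    vxdmpadm listenclosure all

    # Bound the time spent retrying an intermittently failing path
    # instead of retrying indefinitely
    vxdmpadm setattr enclosure enc0 recoveryoption=timebound iotimeout=60

    # Confirm I/O fencing mode and current cluster membership
    vxfenadm -d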

  9. Use Cases – When can CFS help? • SAP – High availability with SFCFS/HA • ~$1M in 2005 • Potential up-sell for all customers using VCS to cluster SAP • Solves NFS cross-mount problems and shortens failover time • Tibco EMS – Fast failover of message queues • Good traction – several large deals last year and more in the pipeline • With CFS, Tibco can fail over a message queue in <10 s • Improving throughput and availability of the message queue solves business problems • Parallel Applications – When to promote CFS • Almost any application that shares data among multiple nodes • If NFS can be used to share the data, CFS can be used – except in heterogeneous environments • Examples: FTP/Web/Streaming/E-Mail/Image and document storage • Parallel NFS file servers • With MTS we have a much stronger story – no need to manually load-balance • NFS is not CIFS – still no integration between the two • Fast Failover for Applications and Databases • For a large number of file systems, removing the deport/import stages of a failover can cut a non-trivial amount of time from the failover (see the sketch below)
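
To make the deport/import point concrete, here is a sketch of what a traditional failover must do per disk group versus the CFS case; disk group, volume, and mount point names are illustrative:

    # Traditional VCS failover: the target node must import the disk
    # group and mount every file system before the app can start
    vxdg import appdg
    mount -t vxfs /dev/vx/dsk/appdg/vol1 /data1
    # ...repeated for each of the (possibly many) file systems

    # With CFS the file systems are already cluster-mounted on the
    # standby node, so failover reduces to restarting the application
    mount | grep vxfs    # shared mounts already online everywhere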

  10. Thank You! Oscar Wahlberg oscar_wahlberg@symantec.com

  11. Competitive update • Head-to-head competition is seen most notably on Linux, from both PolyServe and Red Hat GFS • Dell has started to go after HPC customers together with EMC and IBRIX • QFS and GPFS are still on the market, but not making any waves • Weak spots • Heterogeneous OS support • Very large clusters (100s of nodes) • Rolling upgrades

  12. Infiniband • First release is targeted specifically at Storage Foundation Cluster File System • GA is with 5.0 MP1 for Linux • Based on the TopSpin API • RedHat x86_64 only
