
Introduction to Storage Appliance- NAS

Prepared by: Deo-Lama Bahadur. Modified by: Kok-Lim Liew.

Logistics
  • Introduction
  • Schedule (12 June 2009)
    • Start time – 09:00
    • Breaks – 11:15 / 15:30 (15 mins each)
    • Lunch – 12:30 – 14:30
    • Close – 17:00
  • Telephone and messages
  • Food and drinks
  • Restrooms
Course Objectives
  • Day 1: Introduction
    • Overview of NetApp
    • Overview of NetApp hardware
  • Day 2: Setup
    • Overview of Data ONTAP
    • Basic administration
    • Basic networking
  • At the end of this course, you will be able to:
    • Identify the key hardware components in a NetApp environment
    • Identify the key software components in a NetApp environment
Evolution of Computer Data Storage (2)

With the evolution of the HDD, storage solutions have evolved into three core enterprise storage architectures:

  • DAS (Direct Attached Storage)
  • NAS (Network Attached Storage)
  • SAN (Storage Area Network)
History of NAS
  • Network-attached storage was introduced with file sharing on Novell's NetWare server in 1983.
  • In the UNIX world, Sun Microsystems' 1984 release of NFS allowed network servers to share their storage space with networked clients.
  • Inspired by the success of file servers from Novell, IBM, and Sun, several firms developed dedicated file servers. Auspex Systems was one of the first to develop a dedicated NFS server for use in the UNIX market.
  • A group of Auspex engineers split away in the early 1990s to create the integrated NetApp filer, which supported both Windows' CIFS and UNIX's NFS, and had superior scalability and ease of deployment.
The Basic Information Lifecycle View

[Diagram: servers connect through a LAN or WAN to a storage network (IP or FC) with three storage tiers: primary (FAS servers), secondary/near-line (heterogeneous storage), and tertiary (tape and optical libraries).]

NetApp Storage Appliance
  • NetApp storage appliances are purpose-built for data storage. They combine a streamlined Data ONTAP operating system, hardware, and support tools that operate simply, quickly, and reliably.
NetApp Products (1)
  • Business Continuity
    • MetroCluster
    • SnapMirror
  • Archive and Compliance
    • SnapLock
  • Protocols
    • FC SAN
    • FCoE SAN
    • IP SAN (iSCSI)
    • NFS
    • CIFS
  • Storage Systems
    • FAS6000 Series
    • FAS3100 Series
    • FAS2000 Series
    • NearStore on FAS
    • NetApp VTL
    • V-Series
    • Information Server
  • Platform OS
  • Data ONTAP
    • Data ONTAP 7G
    • Data ONTAP GX
    • Features
      • Snapshot
      • SyncMirror
      • FlexVol
      • RAID-DP
      • FilerView
      • FlexShare
      • Deduplication
      • HA System Configuration
      • SnapValidator for Oracle
    • Additional Capabilities
      • FlexCache
      • FlexClone
      • MultiStore
NetApp Products (2)
  • Protection Software (Backup and Recovery):
    • Open Systems SnapVault
    • SnapRestore
    • SnapVault
  • Support
    • SupportEdge Premium
    • SupportEdge Secure for Government
    • SupportEdge Standard
    • e-Support
  • Security
    • Decru DataFort
  • Software
  • Management Software
    • Application Suite:
      • Single Mailbox Recovery
      • SnapManager for Exchange
      • SnapManager for Microsoft SharePoint Server
      • SnapManager for SAP
    • Database Suite:
      • SnapManager for Oracle
      • SnapManager for Microsoft SQL Server
    • Server Suite:
      • SnapManager for Virtual Infrastructure
      • Virtual File Manager
      • SnapDrive for UNIX
      • SnapDrive for Windows
    • Storage Suite:
      • Operations Manager
      • File Storage Resource Manager
      • Protection Manager
      • Provisioning Manager
FAS: One Scalable Architecture
  • FAS2020 – 24 TB, 40 drives
  • FAS2050 – 69 TB, 104 drives
  • FAS3020 – 84 TB, 168 drives
  • FAS3040 – 126 TB, 252 drives
  • FAS3050 – 168 TB, 336 drives
  • FAS3070 – 252 TB, 504 drives
  • FAS6040 – 840 TB, 840 drives
  • FAS6080 – 1176 TB, 1176 drives
  • Capacities reflect hardware raw capacity
  • Data capacity is typically less than the maximum hardware raw capacity
Storage System Basic Components
  • NFS / CIFS / FTP / iSCSI over 1/10 GbE
  • FCP over 2/4 Gbps FC
  • One or more dual-FCAL-attached disk shelves, with FC or ATA drives
  • One controller is shown; two are typical (active/active)
  • FCAL: Fibre Channel Arbitrated Loop
  • ATA: Advanced Technology Attachment
Hot Swappable components
  • The following components are hot-swappable and can be replaced with the power on:
    • Power Supply
    • Cooling Unit
    • Hot spare disks
    • LCD panel (model-dependent)
What an active/active configuration is
  • An active/active configuration is two storage systems (nodes) whose controllers are connected to each other either directly or through switches.
  • You can configure the active/active pair so that each node in the pair shares access to a common set of disks, subnets, and tape drives, or each node can own its own distinct set of storage.
  • The nodes are connected to each other through a cluster adapter or an NVRAM adapter.
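In Data ONTAP 7G, failover for an active/active pair is driven from the console by the cf command family; a minimal sketch (the prompt name is illustrative):

```
filer1> cf status      # report whether takeover is enabled and the partner is up
filer1> cf takeover    # take over the partner node's identity and storage
filer1> cf giveback    # return resources to the partner once it is healthy
```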
DS14mk4

  • Front: 14 x Fibre Channel disk drives
  • Back: shelf controllers (ESH) and power supplies with fans (shown in the DS14mk4 close-up)


FAS200 series controller(s): first to be embedded into the disk shelf itself
  • Embedded controller(s); such "shrunken" controllers are sometimes called "storage controller modules"
  • NFS / CIFS / FTP / iSCSI over 10/100 Mb or GbE
  • FCP over 2 Gbps FC
  • Starts with a DS14mk2 shelf, plus 0, 1, 2, or 3 FCAL-attached disk shelves with FC or ATA drives
FAS2050: Front View without Bezel

  • Disk LEDs and storage-controller LED
  • Up to 20 15k RPM SAS disks
  • HDD fillers occupy empty bays

FAS2050: Rear View

  • Power supplies
  • PCIe expansion slot
  • 4 Gb FC ports
  • Storage controller A (present in all FAS2050 configs)
  • Storage controller B (added for active/active configs)

FAS2050: Storage Controller Module

  • PCIe expansion slot (FC HBA shown)
  • Handle
  • Two 4 Gbps FC ports (autosensing for 1, 2, or 4 Gbps operation; target or initiator)
  • Remote-management 1 GbE port
  • Console port
  • Two 1 GbE copper ports

FRONT View of FAS6000 Series Controller

  • 6U chassis
  • LCD panel
  • Cooling fans
  • Even with system power off, an LED keeps blinking to identify a failed component

REAR View of FAS6000 Series Controller

  • NVRAM6
  • Power supplies (FRUs – field-replaceable units)
  • Additional slots: 5 PCIe and 3 PCI-X, e.g., for 4 Gbps FC connections
  • Two banks of 4 x 2 Gbps FC (fiber) ports
  • Three pairs of GbE copper NICs
  • Console port
  • RLM

V-Series: The Heterogeneous Solutions
  • “Opens closed doors”
  • Applies Data ONTAP’s power to 3rd-party SAN storage arrays to solve a range of business requirements


Supported SAN Arrays
  • IBM
  • HP
  • Hitachi
  • Fujitsu
  • EMC
V-Series maxima are comparable to the corresponding FAS system
  • FAS6070 – 504 TB, 1008 drives; V6070 – 504 TB, 1008 LUNs
  • FAS6030 – 420 TB, 840 drives; V6030 – 420 TB, 840 LUNs
  • FAS3070 – 252 TB, 504 drives; V3070 – 252 TB, 504 LUNs
  • FAS3050 – 168 TB, 336 drives; V3050 – 168 TB, 336 LUNs
  • FAS3040 – 126 TB, 252 drives; V3040 – 126 TB, 252 LUNs
  • FAS3020 – 84 TB, 168 drives; V3020 – 84 TB, 168 LUNs
  • FAS2050 – 69 TB, 104 drives; FAS2020 – 24 TB, 40 drives (V-Series versions of the FAS20x0 are not offered)
  • GF270 – 16 TB, 56 LUNs
  • FAS limits the number of disks; V-Series limits the number of LUNs from the underlying array
  • System max capacities range from 16 to 504 TB
  • LUN sizes: 1 to 750 GB, but not to exceed the system max capacity
What is "near-line?" The information lifecycle view

[Diagram: the information lifecycle view again – servers connect through a LAN or WAN to a storage network (IP or FC) with primary (FAS servers), secondary/near-line (heterogeneous storage), and tertiary (tape and optical libraries) tiers.]

NearStore® R200 Configurations
  • Capacities
    • 500 GB 7200 RPM ATA drives
    • 2 shelves in the minimum system
    • 336 drives max
    • 168 TB max with 500 GB drives
  • Base System
    • Single controller (the limit)
    • SAN, NAS* and HTTP protocols
    • RAID-DP™
    • 3-year hardware warranty

* SAN implies the FCP and iSCSI protocols; NAS implies the NFS and CIFS protocols

  • Popular Software Options
  • SnapRestore®
  • SnapMirror®
  • SnapVault®
  • SnapLock®
  • LockVault™
  • MultiStore®
  • Virtual File Manager™
  • Hardware Options
    • Gigabit Ethernet
    • 2Gb FC SAN attach (target)
    • SCSI & FC tape adapters (initiator)
    • AT-FCX shelves
Why Serial-Attached SCSI, and why now?
  • High performance – equivalent to FC drives
  • Wider adoption of SAS Standard
    • SAS has matured – deployed in servers since 2004
    • SAS expected to be in the majority by 2012
    • Much higher disk density than FC for scalability requirements
    • Native support for SATA without need for bridging
Are SAS and FC similar?

SAS disks are the same as FC disks except for the drive interface:

  • Same disk technologies
  • Same rotational speeds
  • Same reliability
Disk details on NOW (NetApp on the Web)
  • Maps drives to suppliers
  • Lists (most) supported shelves

NetApp Data ONTAP

Network Appliance's Data ONTAP provides a comprehensive software architecture for its storage appliances to ensure that storage management is simplified and business continuance is maximised.

  • This architecture contains 3 main elements that work together to provide speed and reliability:
    • A real-time mechanism for process execution
    • The WAFL file system, together with NVRAM and Snapshots
    • The RAID manager
NetApp Data ONTAP Internals

This graphic shows how the pieces of the NetApp software architecture fit together.

WAFL (Write Anywhere File Layout)
  • Data going to disk has two parts: the data itself and information about the data, commonly referred to as metadata
  • Most file systems must write data and metadata to specific locations, but WAFL can write metadata and data to the first available location, which increases performance
NVRAM operation (1)

[Diagram: a client connects to the storage system over GbE; the storage system connects to its disk shelves over dual-attached FC.]
NVRAM operation (2)

  • The incoming operation is copied into battery-backed NVRAM and acknowledged; it is now safe
  • It is also in the controller's main memory, from which further processing will take place
  • The client is free to "forget about it" – it's done!
  • The path is purely electronic, memory-to-memory

(NIC = network interface card)
NVRAM operation (3)

  • Consequences of the operation consume main memory
  • As many as 10 seconds elapse, during which many other ops arrive (not shown)
  • Consequences of this and many other ops are written to disk
  • NVRAM is then zeroed
NVRAM benefits
  • Performance: lowers latency, raises throughput
  • Availability: speeds reboots; connects redundant controllers via the IB CFO connectors on NVRAM5/NVRAM6 cards in active-active controller configs
  • NVRAM is not necessarily a discrete, removable PCI card; on models like the FAS200 and FAS2000 it is on-board
  • Battery life after shutdown: "clean" – weeks; "dirty" – 3 to 7 days, partial to full charge
Data ONTAP Fundamentals: Basic Administration

Accessing the Console

A terminal or terminal server can be

connected to the storage appliance

console port via a standard RS232

connection. For example, a DB9 to

DB9 serial cable (null modem), with

the following settings for the serial

communications port

Bits per second: 9600

Data Bits: 8

Parity: None

Stop Bits: 1

Flow control: None

You can also access the storage appliance via RLM (Remote LAB Module).

Which ever method is used, console

access can be password protected.

Console Command Set

To view all available commands, enter help or ? at the console prompt. To display a brief description of a command, enter help [command].
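A brief console session using these commands might look like the following sketch (the prompt name is illustrative):

```
toaster> ?                 # list all available commands
toaster> help sysconfig    # brief description of the sysconfig command
```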


Remote LAN Module (FAS2000, FAS3000, and FAS6000)
  • Remote and local diagnostics
  • Remote power cycle
  • Down-appliance notification
  • Remote failure diagnostics
  • Remotely initiate core dumps
  • Capture console logs
  • Access to hardware event logs
  • Remote CLI/console access is via SSH over a dedicated Ethernet port
  • Shown as a separate card, but embedded on-board in some models
  • Shown connected to the customer LAN; could be a separate management LAN
NetApp FilerView

FilerView is an administration tool available on every NetApp storage appliance. It allows IT administrators to use a web browser to access a consistent, easy-to-use graphical interface for every administration task.

Administrators can set up and control any NetApp storage appliance remotely without disruption to business-critical operations. While file systems remain available to users, they can:

  • Monitor status
  • Satisfy requests for additional storage capacity
  • Make changes to the file system
  • Make changes to the file system configurations

FilerView runs in local client web browsers and communicates with the storage appliance mostly over the HTTP and SNMP protocols. It also establishes a real telnet session to the storage appliance upon request via the "Use Command Line" function.

NetApp FilerView

Accessing FilerView remotely requires one of the following browsers, and the browser must be Java-enabled:

  • Netscape 4.51 or later
  • Mozilla or Firefox 1.0 or later
  • Internet Explorer 4.x or later

To access the command line via FilerView:

  • Point web browser to http://filername/na_admin
  • Click FilerView
  • Login
  • Click Filer
  • Click Use Command Line
FilerView Interface

The FilerView interface allows only one telnet session at a time. If you try to open a telnet session, in either FilerView or the command-line interface directly, and you or someone else already has one open, you will receive the following message:

Too many users logged in! Please try again later. Connection closed

When you leave the Use Command Line window in FilerView, the telnet session is closed and other administrators can connect to the storage appliance using telnet client software.

Data ONTAP Fundamentals: Basic Storage Appliance Configurations

Basic Configurations

Many console commands provide filer system configuration information. These commands can be used to:

  • Check your system configuration
  • Monitor system status
  • Verify correct system configuration

Following are some commands and their functions:

Sysconfig

Example output from the sysconfig -v and sysconfig -r commands.
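The common sysconfig invocations can be sketched as follows (the prompt name is illustrative; these flags exist in Data ONTAP 7G):

```
toaster> sysconfig        # summary of the hardware configuration
toaster> sysconfig -v     # verbose: adapters, firmware, and shelf details
toaster> sysconfig -r     # RAID layout: aggregates, volumes, spares, failed disks
toaster> sysconfig -a     # all available configuration detail
```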

Editing System Configurations

The system configuration is managed via options commands and configuration files.

Options commands:

  • Can be entered on the console
  • Are automatically added to the storage appliance's registry
  • Are persistent across reboots
  • Do not require editing of configuration files

Configuration files such as /etc/rc, /etc/hosts.equiv, /etc/dgateways, and /etc/hosts must be edited to make non-options configurations permanent.

Comparison of options commands and configuration-file methods:

  • Execute an options command – syntax: options [option-name [value]]
  • Execute a vol options command – syntax: vol options <vol-name> <option-name> <option-value>
  • Edit configuration files – syntax depends on the file being edited; edit the file from a client machine
Editing Boot Configurations

The storage appliance's boot configuration file contains commands that are run automatically whenever the storage appliance is booted. The configuration file is named rc and is located in the /etc directory of the appliance's root volume. The default root volume is /vol/vol0.

The /etc/rc file contains:

  • Network interface configuration information
  • Commands to automatically export NFS mounts
  • Other commands to run at appliance startup

Do not use Notepad to edit the /etc/rc file. Use vi, emacs, or WordPad.

Steps and actions:

  • Make a backup copy of the /etc/rc file.
  • Edit the /etc/rc file using a text editor.
  • Save the edited file.
  • Reboot the storage appliance to test the new configuration. Note: to ensure the configuration changes are persistent, edit the /etc/rc file so the commands are persistent across reboots.
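A minimal /etc/rc might look like the following illustrative fragment (the hostname, addresses, and gateway are made up):

```
hostname toaster
ifconfig e0 192.168.1.10 netmask 255.255.255.0
route add default 192.168.1.1 1
routed on
exportfs -a
```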
Administrative Access

Managing administrative console user IDs

An administrative user is a named account that exists on the filer. Having multiple administrative accounts means that each administrative user has a unique login name and password, which increases security.

Administrative console users have the same privileges as root console users. Syslog (/etc/messages) records console logins by username, time of access, and node name/address.

  • useradmin useradd login_name – creates a new administrative user and password
  • useradmin userdel login_name – deletes an administrative user
  • useradmin userlist – lists administrative users
  • passwd – changes the console password for the currently logged-in administrative user
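A short session with these commands might look like this (the user name is hypothetical):

```
toaster> useradmin useradd jsmith    # prompts for a password for the new user
toaster> useradmin userlist          # confirm the account was created
toaster> useradmin userdel jsmith    # remove the account
```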

Adminhost

The term adminhost describes an NFS or CIFS client machine that has the ability to view and modify configuration files stored in the /etc directory of the filer's root volume. The filer grants root permission to the administration host after the setup procedure is complete.

NFS requirements:

  • The host name must be entered in the /etc/hosts.equiv file. The setup procedure automatically populates this file.
  • The host is allowed to mount the filer root directory with root permissions and edit configuration files, and can enter filer commands using a remote shell program such as rsh.

CIFS requirements:

  • The user must be a member of the "Domain Administrators" or "Administrators" Windows 2000 groups.
  • The user is given privileges to edit configuration files by accessing the \\filer\C$ share.
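From a UNIX adminhost listed in /etc/hosts.equiv, the filer can then be administered remotely; a sketch (the host name and mount point are illustrative):

```
# run a filer console command from the adminhost via remote shell
rsh toaster sysconfig -v
# mount the filer root volume to edit files under /etc
mount toaster:/vol/vol0 /mnt/toaster
```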
AutoSupport

AutoSupport is a service provided by NetApp systems that monitors the functions of a storage appliance. The AutoSupport daemon triggers automatic email messages to members of Network Appliance Technical Support, alerting them to potential storage appliance problems.

If necessary, technical support contacts the administrator via email and provides troubleshooting information for resolution. Specific storage appliance conditional events can be configured as traps that will trigger AutoSupport sequences.

AutoSupport is enabled by the command options autosupport.enable [on|off]. We encourage all customers to enable AutoSupport: the AutoSupport mechanism can be proactive, and we are better able to assist you when you call.
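A typical AutoSupport setup from the console might look like this sketch (the option names are real Data ONTAP options; the mailhost and address are made up):

```
toaster> options autosupport.enable on
toaster> options autosupport.mailhost smtp.example.com
toaster> options autosupport.to storage-admin@example.com
```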

Data ONTAP Fundamentals: Network Administration

Hostname Resolution

Maintaining host information

Data ONTAP can resolve host information on a storage appliance using:

  • The /etc/hosts file on the root volume of the storage appliance
  • An NIS server
  • A DNS server

By default, the storage appliance first tries to resolve host names locally by searching the /etc/hosts file (as directed by /etc/nsswitch.conf), then NIS, then DNS if needed. To specify a different search order, modify the /etc/nsswitch.conf file.

Note: DNS and NIS can be configured using the setup command during installation of a storage appliance. As a result, many of the commands and files in this section are executed automatically. Entering NIS or DNS commands is usually done if:

  • NIS or DNS was not configured during setup
  • You need to make a change to a configuration
/etc/hosts File

Resolving hostnames with the /etc/hosts file

Since the /etc/hosts file is checked first, it is important to keep it current, because changes take effect immediately. The file can be edited using a standard editing program and should include a blank line at the end. The format of /etc/hosts entries is:

IP_address hostname alias(es)

Example /etc/hosts file:

#Auto-generated by setup Tue Jul 8 16:27:32 GMT 2005
127.0.0.1 localhost
192.168.1.10 toaster_black toaster
192.168.1.11 toaster_red
192.168.1.12 toaster_blue
192.168.1.13 toaster_yellow
192.168.1.14 toaster_green
#0.0.0.0 NetApp2-e6

Hostname Resolution via DNS

Using FilerView to configure DNS/NIS services: the Domain Name Service matches domain names to IP addresses and enables you to centrally maintain host information, so you do not have to update the /etc/hosts file every time you add a new host to the network.

Configuring DNS

If DNS has not been configured and you want to use DNS for hostname resolution, follow these steps:

  • Create and edit the /etc/resolv.conf file in the root volume of the storage appliance, if it does not exist. The number of lines depends on the number of DNS servers configured, each specifying a nameserver host in the format nameserver ip_address.
  • Specify the domain name of the storage appliance using the dns.domainname option in the /etc/rc file.
  • Enable DNS using the dns.enable option in the /etc/rc file.
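The resulting configuration might look like the following illustrative fragments (server addresses and domain name are made up):

```
# /etc/resolv.conf - one line per DNS server
nameserver 192.168.1.53
nameserver 192.168.2.53

# appended to /etc/rc so the settings persist across reboots
options dns.domainname example.com
options dns.enable on
```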
Specify the Search Order

The /etc/nsswitch.conf file lists the order in which a storage appliance searches for resolution. To resolve hostnames, for example, a storage appliance uses the search order listed for hosts; in this example, it first searches the /etc/hosts file, then NIS, and then DNS.

Each line in the /etc/nsswitch.conf file uses the format:

map: service(s)

You can modify the file at any time to change the default search order for hostname resolution. Once the storage appliance resolves the hostname, the search ends.
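For the default order described above, the hosts line would read (a sketch of the file format):

```
# /etc/nsswitch.conf - check /etc/hosts first, then NIS, then DNS
hosts: files nis dns
```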

Routing

Even though a storage appliance may have multiple network interfaces, it does not function as a router for other network hosts. It does, however, route its own packets.

To display the default and explicit routes used for routing its own packets, check the current routing table with the netstat -r command. The netstat command displays network-related data structures.

Default Route

For Data ONTAP 6.0 and later, you can set the default route during initial setup, or later by modifying the /etc/rc file (to make the command persistent across reboots), or by using router discovery or RIP.

For Data ONTAP versions before 6.0, if explicit routes are not listed in the routing table, the storage appliance uses the default route specified in the /etc/dgateways file.

Use rdfile /etc/rc to view the content of the file, or wrfile /etc/rc to edit it.
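Setting and verifying a default route from the console might look like this sketch (the gateway address is made up; the trailing 1 is the route metric):

```
toaster> route add default 192.168.1.1 1    # add a default route with metric 1
toaster> netstat -rn                        # display the routing table numerically
```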

Default Gateways

routed is a daemon invoked at boot time to manage the network routing tables. The daemon processes incoming packets and periodically checks the routing table entries.

To display the status of the default gateway list, use the status option, which displays the following information:

  • Whether RIP snooping is active
  • The current list of default gateways
  • The metrics of default gateways 1-15
  • The state of the gateways (ALIVE or DEAD)
  • The last time each gateway was heard from
Configuring and Managing Network Interfaces

Naming single interfaces

Storage appliances support four network types for NAS:

  • Ethernet 10/100Base-T
  • Gigabit Ethernet
  • FDDI
  • ATM

Each port on an interface card is named using a combination of interface type and slot number. The Ethernet port on the system board, for example, is e0.
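Configuring such an interface follows familiar ifconfig syntax at the filer console; a sketch (the address is made up):

```
toaster> ifconfig e0 192.168.1.10 netmask 255.255.255.0    # assign an address to e0
toaster> ifconfig -a                                       # show all interfaces and their state
```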

Configuring and Managing Network Interfaces

Naming quad-port interfaces

Some Ethernet interface cards support four ports and are referred to as quad-port interfaces. A storage appliance refers to each port on the card by a letter: the four ports are lettered a through d from top to bottom, so a quad-port card in slot 4 provides ports e4a through e4d.

Virtual Interfaces

What is a VIF?

A VIF (virtual interface) is a group of Ethernet interfaces working together as a logical unit. You can group up to four Ethernet interfaces into a single logical interface.

The advantages of VIFs over single network interfaces are threefold:

  • Higher throughput because multiple interfaces work as one interface
  • Fault tolerance; if one interface in a VIF goes down, the remaining interfaces maintain connection to the network
  • Protection against a switch port becoming a single point of failure
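In Data ONTAP 7G, both kinds of VIF are created with the vif create command; a sketch (the VIF names and member interfaces are illustrative):

```
toaster> vif create single SingleTrunk1 e0 e1             # one active link, one standby
toaster> vif create multi MultiTrunk1 -b ip e0 e1 e2 e3   # all links active, IP-based load balancing
toaster> vif status                                       # show VIF state and member links
```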
Single-Mode Trunk

VIFs are also known as trunks, virtual aggregations, link aggregations, or EtherChannel virtual interfaces.

Trunks can be single-mode or multi-mode. In a single-mode trunk, one interface is active while the other interface is on standby. A failure signals the inactive interface to take over and maintain the connection to the switch.

In the graphic, the active link on e0 fails. The e1 interface then takes over and becomes the active link to the switch.

Multi-Mode Trunk

In a multi-mode trunk, all interfaces are active, providing greater speed when multiple hosts access the storage appliance. The switch determines how the load is balanced among the interfaces and must therefore support manually configurable trunking.

In the figure to the right, four active interfaces comprise the multi-mode trunk MultiTrunk1. If any three interfaces fail, the storage appliance still remains connected to the network.

Prerequisites to initial configuration
  • Requirements for the administration host
  • Active/active configuration requirements
  • Requirements for Windows domains
  • Requirements for Active Directory authentication
  • Time services requirements
  • Switch configuration requirements for VIFs
What is Operations Manager?
  • A web client/server application that discovers, monitors, and manages NetApp storage from a single management console for maximum availability, reduced TCO, and business policy compliance
Operations Manager Key Features (illustrated in subsequent slides)
  • Centralized monitoring
  • Configuration management
  • Capacity management
  • Detailed health and performance monitoring
  • Security and access control
  • MultiStore monitoring
  • Custom reporting
  • Chargeback
  • Event management
Centralized Monitoring
  • Auto-discovers, then monitors
  • Alerting and notification
  • FilerView® launch
  • Run commands on groups of systems simultaneously
Operations Manager: Configuration Management
  • Establish “golden” template for NetApp systems
  • Monitor changes against the template and receive alerts
  • Push changes and revert to template
Capacity management
  • Capacity utilization reports for the enterprise, groups, and individual objects
  • Detailed data and Snapshot™ usage reports
  • Volume/qtree growth trending
Detailed Health and Performance Monitoring
  • Appliance CPU usage
  • Protocol utilization: NFS, CIFS, FCP, iSCSI ops/sec
  • LUN IO statistics
  • Network and interface traffic
  • Any counter exposed by Data ONTAP®
Security and Access Control
  • True role-based access control architecture
  • Use LDAP or Active Directory for authentication lookup
  • Limits user access to particular devices or groups of devices as well as kinds of actions that can be taken
Operations Manager: MultiStore Monitoring
  • Monitor at a finer granularity than the physical device – visibility into “vFilers”
  • Assign admins with rights to only specific “vFilers”
  • Edit quotas on MultiStore-hosted volumes and qtrees
Custom Reporting
  • All reports can be exported to .csv, .xml, flat text, and Excel
  • Custom queries can be created via CLI or GUI and reports run in the GUI or exported