Advanced Network Storage Systems
Storage networking is a dedicated, high-speed network established to directly connect storage peripherals. It allows direct communication between the storage devices and the client machines on the network, and it provides an efficient alternative to the historical model of server-attached storage. The emergence of storage networking marks a new trend in today's networking technology. The two major components of storage networking are SAN and NAS.
SAN stands for storage area network. This is a high-performance network deployed between servers and storage devices. A SAN typically uses Fibre Channel technology and provides various advanced capabilities beyond those of a LAN or WAN.
NAS stands for network-attached storage. It is a storage element connected directly to the network to provide file access to servers. NAS is generally used on TCP/IP networks. NAS devices are commonly used to consolidate file storage and control access to it.
1.1 The Storage Explosion
There is no doubt that the demand for storage has been increasing exponentially. Business communications have evolved from simple text to elaborate, graphics-heavy documents and presentations.
iCommerce, eCommerce, and other ever-increasing eBusiness solutions that require corporate data centers to provide information in various formats have all contributed to the explosion in demand for storage capacity. Enterprise class businesses are deploying huge RAID devices that are Terabytes (TB) in size.
1.2 Requirements for Nonstop Data Availability
Even though the cost per megabyte of storage is declining at a rate of 35% to 40% per year, business requirements for fault tolerance, high availability, disaster recovery, and online backup have significantly increased the cost of managing all that storage. As stored data has become more critical and irreplaceable, uninterrupted availability of data and fast, complete recovery are now absolute requirements.
Leading industry analysts have estimated that the ratio of management cost to acquisition cost for storage has increased over time to 8:1. This means that unless an alternative to the traditional direct-attached storage (DAS) model is implemented, an additional eight dollars will be spent on ongoing maintenance and management for every dollar spent purchasing storage systems. Such quickly mounting costs make it impractical for a business to grow, yet growth is the essential goal of business. The solution is networked storage, of which there are two different but not incompatible breeds: storage area networks (SANs) and network-attached storage (NAS). Each will be described in detail, with discussion of how they have converged and grown highly intelligent to provide enterprises with the end-to-end storage solutions they need.
2.1 Storage and networking - in the beginning:
Since the first computers were developed, there has been an increasing demand to make them faster, cheaper, and more applicable to everyday life. Early mainframe computers have evolved from large, centralized systems into more nimble, enterprise-class servers, brought on by cheaper and more efficient computing technologies. In turn, networking technologies have shaped the evolution of computing platforms. The maturation of these two technologies, together with the increasing hunger for computer processing power and associated data, has driven the need for faster, more accessible data storage techniques and the advent of storage networking.
2.1.1 Historical View:
Historically, storage devices have resided inside or behind servers and were tightly coupled with those servers' operating systems. Connections between servers and storage devices were typically made using the Small Computer Systems Interface (SCSI) protocol. SCSI is a storage interface that has become the dominant standard for storage devices such as disk and tape drives inside and outside of high-performance servers. Because of its high data-transfer rates, reliability, and low latency, SCSI is an ideal protocol for connecting various storage devices to a server. In the past 10 to 15 years, changes in the way business is conducted and the explosion of Internet-based business practices have created additional challenges that affect the way information is accessed and stored. The resulting increased need for storage capacity has placed heavy pressure on IT organizations to keep up with the demand for storage while expanding capacity in a cost-effective and manageable fashion.
2.2 Introduction to computer storage:
Computer storage holds the lifeblood of today's Internet-based economy: information. Information and the knowledge derived from it are the core elements by which our society increases its productivity and grows at unprecedented rates. Along with the explosion of the Internet and the creation of our information-based economy, digital data needs to be stored and made available to those systems needing access to them. Computer storage comprises various technologies and media that enable information storage, access, and protection.
2.2.1 Types of storage:
The most common types of information storage use magnetic media to save and retrieve embedded data. Magnetic media comes in two primary formats: fixed and removable.
a) Fixed media:
Fixed media is commonly associated with disk storage technology, where the read/write heads and the spinning media (disk platters) are held together in one unit.
JBOD stands for Just a Bunch of Disks and represents the simplest and least expensive raw storage option. The individual disks are arranged in a simple cabinet and are available to the server as a group of independently accessible disks. They have little or no buffering (cache memory) and no intelligent controller to enable striping (spreading data across disks) for improved performance or data protection. JBOD drives share power and a physical cabinet with one another. JBOD has limited growth capacity and usually scales to much less than 1 terabyte (TB) per cabinet. Since the drives have no inherent intelligence, parity checking, or data striping, there is no protection in the event of a drive failure. Tape storage is a common method of preserving information against such a failure.
In contrast to JBOD, RAID storage stands for Redundant Array of Independent (or Inexpensive) Disks and is controlled by an intelligent controller (usually with large amounts of solid-state memory) that provides parity checking and enables data striping (spreading data across multiple drives) for improved protection and performance. In addition to parity checking and striping, critical business data can be protected by mirroring specific data volumes within an array cabinet or across multiple arrays. Disk mirroring provides an identical copy of the data in the event of a failure on the production data volume. This fault-tolerant grouping of disks can present itself as a single disk volume to the servers attached to it. RAID storage commonly scales from a few drives per cabinet up to 512 drives per cabinet. That translates to tens of terabytes per cabinet, depending on the manufacturer, the density and size of the drives used, and the levels of RAID protection desired.
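The parity protection described above rests on simple XOR arithmetic: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The following Python sketch illustrates the idea only; the block contents and three-drive layout are made-up assumptions, not any vendor's RAID implementation.

```python
# Illustrative sketch of RAID-style XOR parity (in the spirit of RAID 4/5),
# assuming three data "drives" and one parity "drive".
def parity_block(data_blocks):
    """XOR all equal-length data blocks together to produce the parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Rebuild a single lost block by XOR-ing the survivors with the parity."""
    return parity_block(surviving_blocks + [parity])

# Example: stripe three blocks, "lose" the second drive, rebuild its block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity_block([d0, d1, d2])
assert reconstruct([d0, d2], p) == d1
```

Mirroring, by contrast, simply keeps a full second copy; parity trades that capacity cost for the XOR computation shown here.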
b) Removable media:
Removable media is commonly associated with tape technology, where the read/write heads are contained in a transport (or drive) and the cartridge (or tape reels in older systems) holds the tape that is passed sequentially across the heads to copy or retrieve data. The inexpensive tape media can also be removed and stored onsite or at offsite locations for improved data protection.
2.2.2 Benefits of magnetic media:
The primary benefits of magnetic media are its flexibility (fixed or removable), relatively low cost, good performance, and the ease with which it can be reused. The media can be rewritten with different digital information, safely stored for many years, and accessed in the future. One drawback of magnetic media is that it is a physical system with moving parts that can wear and fail. Also, natural disasters and system failures can destroy data or make it unavailable to the applications needing access to it. This inherent risk places a large responsibility on the storage system manager to implement ways of protecting valuable information assets. The traditional method of protecting data uses various forms of replication and parity error protection: the more redundant the data, the more likely you are to be able to retrieve it before it becomes lost or corrupted. Each media format has its strengths and weaknesses in the areas of price/performance, scalability, and reliability.
2.3 Storage networking:
Storage networking is a technology that emerged from work done in the high-performance computing environment called the "High Performance Data System" by the IEEE. Storage networking was originally developed to provide a high-speed network for data transfer.
With storage networking, a dedicated high-speed network is established to directly connect storage peripherals. This allows files and data to be transferred directly between storage devices and client machines, bypassing the traditional server bottlenecks and network control. Increased flexibility and performance are achieved by separating control from data. Storage management processes can operate in the background and in a continuous mode. Because they do not conflict with network traffic, network performance is maintained and even enhanced. Network traffic and load-balancing applications operate by taking the network path of least resistance or of lowest load.
2.3.2 Implementation of storage network:
The implementation of a storage network transcends traditional Ethernet or token-ring networks. Storage devices are directly connected via controllers or intelligent drive interfaces instead of a network interface card. This direct connection offloads from the host CPU the overhead of managing all read and write activity. The concept is not new. Some examples of existing implementations are:
Server clusters - where the servers are connected in a storage network consisting of a SCSI or FDDI loop to allow sharing of centralized storage resources.
Server mirroring, such as Novell's SFT-III, which fully duplexes servers and storage. The storage systems are mirrored and maintained in sync over a SCSI or FDDI storage network.
2.4 Storage networking fundamentals:
SCSI is the most common methodology for connecting storage devices to servers. SCSI emerged in 1979 as an 8-bit parallel bus interface with support for one or two disks. The protocol has evolved over the years, widening its scope as the foundation for other storage-related technologies. Today, serial SCSI is a layered, well-architected suite of protocols for requesting services from storage devices. SCSI driver software, physical interconnections, command implementation, and storage management provide the framework for SCSI interoperability and scalability. There is support for multiple device types, queuing, multitasking, improved performance, caching, cabling and termination, automatic device ID configuration, and dual-port operation.
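At the heart of the SCSI command framework is the command descriptor block (CDB). As an illustration, the Python sketch below builds a READ(10) CDB following the standard layout (operation code 0x28, a 4-byte big-endian logical block address, a 2-byte transfer length). A real initiator would hand such a CDB to a host bus adapter or an OS pass-through interface rather than construct it by hand; the LBA and block count here are arbitrary example values.

```python
# Illustrative construction of a SCSI READ(10) command descriptor block.
def read10_cdb(lba, num_blocks):
    """Build a 10-byte READ(10) CDB: opcode, LBA, and transfer length."""
    cdb = bytearray(10)
    cdb[0] = 0x28                              # READ(10) operation code
    cdb[2:6] = lba.to_bytes(4, "big")          # logical block address
    cdb[7:9] = num_blocks.to_bytes(2, "big")   # transfer length in blocks
    return bytes(cdb)                          # bytes 1, 6, 9 left zero here

cdb = read10_cdb(lba=2048, num_blocks=8)
assert cdb[0] == 0x28
assert int.from_bytes(cdb[2:6], "big") == 2048
```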
Transmission Control Protocol and Internet Protocol (TCP/IP) make up the communication protocol suite. These protocols were used to develop the Internet infrastructure that is so prevalent today. The suite's roots lie largely in the U.S. government's need to interconnect various defense departmental computer networks. Developed in the early 1970s, the suite has been expanded upon and continues to grow as new technologies evolve.
TCP/IP has played a major role in the success of the Internet. Besides being able to scale to very large network environments, TCP/IP enables subscribers to share information safely and reliably with other users. These characteristics provide a truly open network framework that supports millions of individuals in homes, schools, governments, small businesses, and corporations in the remote corners of the world. With its support for a wide variety of network technologies, TCP/IP is fully capable of providing the underlying foundation for global storage networks.
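The reliable byte-stream service TCP provides can be demonstrated with the standard socket API. The sketch below runs a throwaway echo server on the loopback interface and exchanges one message with it; the message contents and single-shot exchange are illustrative only, standing in for a storage client and server.

```python
# Minimal TCP loopback exchange: a reliable, ordered byte stream
# between a "client" and a throwaway echo "server".
import socket
import threading

def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo back what arrived

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"storage request")
reply = client.recv(1024)
client.close()
t.join()
server.close()
assert reply == b"storage request"
```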
2.4.3 Fibre channel:
Most storage area networks (SANs) are based predominantly on a Fibre Channel (FC) architecture. FC was developed to address the speed, capacity, and reliability requirements associated with communication between storage and server devices. It offered a solution to IT professionals who required improved reliability, higher performance, and increased scalability when building a storage infrastructure. Fibre Channel SANs provide servers with access to storage using storage protocols, and server-to-server communications using IP network protocols.
2.5 Architecture basics:
The basic components of the storage networking architecture are DAS, SAN, and NAS.
2.5.1 Direct Attached Storage:
Direct-attached storage (DAS) was developed in the mainframe environment, where networking was quite simple. In the 1980s, computing shifted from large, centralized systems to a more flexible, networked client/server distributed model.
With increases in compute power, memory, storage density, and network bandwidth, more and more data was stored on PCs and workstations. Distributed computing and storage growth have proliferated and are driving the high demand for storage. Today, all storage access methods require CPU involvement in every I/O request. With DAS, storage devices are very tightly coupled with the host computer's operating system and are typically managed using a parallel bus-based architecture such as SCSI. Storage sharing is limited because of the storage's affinity (direct association) to the server. Additionally, these systems can pose maintenance-intensive burdens, as they require constant tuning to optimize CPU cycles between disk access and application processing.
2.5.2 Storage area network:
A storage area network (SAN) is a dedicated, high-performance network infrastructure deployed between servers and storage resources (Figure 5). The storage area infrastructure is a separate, dedicated network entity optimized for the efficient movement of large amounts of raw block data. In effect, a SAN is an extended link between server and storage that enables the extension of the SCSI protocol over longer distances.
SANs are typically built using the SCSI and Fibre Channel (SCSI-FCP) protocols. Fibre Channel is well suited to this application because it can transfer large blocks of data (as is possible with SCSI) while at the same time being able to transfer these blocks over longer distances (unlike SCSI). The SAN market has historically addressed high-end, enterprise-class storage applications where performance, redundancy, and availability are paramount. Through a SAN, large numbers of users can simultaneously access data stored on various storage subsystems. This architecture allows more flexibility in scaling bandwidth, computational power (CPU), and storage capacity. SAN storage devices commonly include disk subsystems, tape libraries, and optical disk libraries.
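The raw block access a SAN presents can be sketched as a device that reads and writes fixed-size blocks by number, with no notion of files. The in-memory `BlockDevice` class, the 512-byte block size, and the LUN framing below are illustrative assumptions, not a real SCSI or Fibre Channel target.

```python
# Sketch of block-level (SAN-style) access: the host addresses
# fixed-size blocks by logical block address, never by file name.
BLOCK_SIZE = 512

class BlockDevice:
    """An in-memory stand-in for a LUN exposed over a SAN."""
    def __init__(self, num_blocks):
        self._data = bytearray(num_blocks * BLOCK_SIZE)

    def write_block(self, lba, payload):
        assert len(payload) == BLOCK_SIZE      # whole blocks only
        off = lba * BLOCK_SIZE
        self._data[off:off + BLOCK_SIZE] = payload

    def read_block(self, lba):
        off = lba * BLOCK_SIZE
        return bytes(self._data[off:off + BLOCK_SIZE])

lun = BlockDevice(num_blocks=128)
block = b"X" * BLOCK_SIZE
lun.write_block(7, block)
assert lun.read_block(7) == block
```

Any file-system structure is imposed entirely by the host; the "device" only moves numbered blocks.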
2.5.3 Network attached storage:
A NAS device (appliance), usually an integrated processor plus disk storage, is attached to a TCP/IP-based network (LAN or WAN) and accessed using specialized file-access/file-sharing protocols. File requests received by a NAS device are translated by its internal processor into device requests. The most popular file-access protocols are CIFS (Common Internet File System, with origins on the Microsoft Windows® platform) and NFS (Network File System, with origins on UNIX® platforms).
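In contrast to a SAN's block interface, a NAS device services file-level requests: clients name files by path and the appliance performs the underlying device I/O on their behalf. The toy `FileService` class below is an illustrative sketch of that file-level interface only, not an implementation of CIFS or NFS; the export path is made up.

```python
# Sketch of file-level (NAS-style) access: clients address data by
# path, and the appliance hides all block-level detail behind it.
class FileService:
    """An in-memory stand-in for a NAS filer's file interface."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data       # appliance handles device I/O

    def read(self, path):
        return self._files[path]

nas = FileService()
nas.write("/exports/reports/q1.txt", b"quarterly data")
assert nas.read("/exports/reports/q1.txt") == b"quarterly data"
```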
Storage Area Network (SAN)
SAN is an abbreviation of storage area network.
A network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file-access services.
3.1.1 Components of a SAN:
When the term SAN is used in connection with Fibre Channel technology, use of a qualified phrase such as "Fibre Channel SAN" is encouraged. According to this definition, an Ethernet-based network whose primary purpose is to provide access to storage elements would also be considered a SAN. SANs are sometimes also used for system interconnection in clusters.
A SAN (storage area network) connects a group of servers (or hosts) to their shared storage devices (such as disks, disk arrays and tape drives) through an interconnection fabric consisting of hubs, switches and links.
3.1.2 Emergence of SAN:
The emergence of storage area networks (SANs) has created the need for new storage management tools and capabilities. While SANs provide many benefits such as lower cost of ownership and increased configuration flexibility, SANs are more complex than traditional storage environments. This inherent complexity associated with storage area networks creates new storage management challenges.
3.1.3 Technology used by SAN:
The prominent technology for implementing storage area networks is Fibre Channel. Fibre Channel technology offers a variety of topologies and capabilities for interconnecting storage devices, subsystems, and server systems. These varying topologies and capabilities allow storage area networks to be designed and implemented in configurations ranging from simple to complex. Due to the potential complexity and diverse configurations of the Fibre Channel SAN environment, new management services, policies, and capabilities need to be identified and addressed.
3.2 The fibre channel SAN environment:
Historically in storage environments, physical interfaces to storage consisted of parallel SCSI channels supporting a small number of SCSI devices. Fibre Channel provides a means to implement robust storage area networks that may consist of hundreds of devices. Fibre Channel storage area networks support high-bandwidth storage traffic on the order of 100 MB/s, and enhancements to the Fibre Channel standard will support even higher bandwidth in the near future.
Depending on the implementation, several different components can be used to build a Fibre Channel storage area network. The Fibre Channel SAN consists of components such as storage subsystems, storage devices, and server systems that are attached to a Fibre Channel network using Fibre Channel adapters. Fibre Channel networks in turn may be composed of many different types of interconnect entities. Examples of interconnect entities are switches, hubs, and bridges. Different types of interconnect entities allow Fibre Channel networks to be built at varying scale. In smaller SAN environments, Fibre Channel arbitrated loop topologies employ hub and bridge products. As SANs increase in size and complexity to address flexibility and availability, Fibre Channel switches may be introduced. Each of the components that compose a Fibre Channel SAN must provide an individual management capability and participate in an often complex management environment. Due to the varying scale of SAN implementations described above, it is useful to view a SAN from both a physical and a logical standpoint. The physical view allows the physical components of a SAN to be identified and the associated physical topology between them to be understood. Similarly, the logical view allows the relationships and associations between SAN entities to be identified and understood.
A SAN environment typically consists of four major classes of components. These four classes are:
• End-user platforms such as desktops and/or thin clients;
• Server systems;
• Storage devices and storage subsystems;
• Interconnect entities.
Typically, network facilities based on traditional LAN and WAN technology provide connectivity between end-user platforms and server system components. However, in some cases, end-user platforms may be attached to the Fibre Channel network and may access storage devices directly. Server system components in a SAN environment can exist independently or as a cluster. As processing requirements continue to increase, computing clusters are becoming more prevalent. A cluster is defined as a group of independent computers managed as a single system for higher availability, easier manageability, and greater scalability. Server system components are interconnected using specialized cluster interconnects or open clustering technologies such as the Fibre Channel - Virtual Interface mapping. Storage subsystems are connected to server systems, to end-user platforms, and to each other using the facilities of a Fibre Channel network. The Fibre Channel network is made up of various interconnect entities that may include switches, hubs, and bridges. The figure below depicts a typical physical Fibre Channel SAN environment.
A SAN environment consists of SAN components and resources, as well as their relationships, dependencies and other associations. Relationships, dependencies, and associations between SAN components are not necessarily constrained by physical connectivity.
For example, a SAN relationship may be established between a client and a group of storage devices that are not physically co-located.
Logical relationships play a key role in the management of SAN environments. Some key relationships in the SAN environment are identified below:
• Storage subsystems and interconnect entities;
• Between storage subsystems;
• Server systems and storage subsystems (including adapters);
• Server systems and end-user components;
• Storage and end-user components;
• Between server systems.
Network Attached Storage (NAS)
NAS is an abbreviation of network-attached storage.
A term used to refer to storage elements that connect to a network and provide file-access services to computer systems. A NAS storage element consists of an engine, which implements the file services, and one or more devices, on which data is stored. NAS elements may be attached to any type of network. When attached to SANs, NAS elements may be considered members of the SAS (SAN-attached storage) class of storage elements.
A class of systems that provide file services to host computers. A host system that uses network-attached storage uses a file system device driver to access data using file access protocols such as NFS or CIFS. NAS systems interpret these commands and perform the internal file and device I/O operations necessary to execute them.
Network-attached storage (NAS) is a concept of shared storage on a network. It communicates using Network File System (NFS) for UNIX® environments, Common Internet File System (CIFS) for Microsoft Windows environments, FTP, HTTP, and other networking protocols. NAS brings platform independence and increased performance to a network, as if it were an attached appliance. A NAS device is typically a dedicated, high-performance, high-speed, single-purpose machine or component. NAS devices are optimized to stand alone and serve specific storage needs with their own operating systems and integrated hardware and software. Think of them as plug-and-play appliances whose purpose is to serve your storage requirements. The systems are simplified to address specific needs as quickly as possible, in real time. NAS devices are well suited to networks that have a mix of clients, servers, and operations, and may handle such tasks as Web cache and proxy, firewall, audio/video streaming, tape backup, and data storage with file serving. These highly optimized servers enable file and data sharing among different types of clients. The benefits of NAS relative to storage area networks (SANs) are also discussed below.
NAS devices known as filers focus all of their processing power solely on file service and file storage. As integrated storage devices, filers are optimized for use as dedicated file servers. They are attached directly to a network, usually to a LAN, to provide file-level access to data. Filers help you keep administrative costs down because they are easy to set up and manage, and they are platform-independent. NAS filers can be located anywhere on a network, so you have the freedom to place them close to where their storage services are needed. One of the chief benefits of filers is that they relieve your more expensive general-purpose servers of many file management operations. General-purpose servers often get bogged down with CPU-intensive activities, and thus can't handle file management tasks as efficiently as filers. NAS filers not only improve file-serving performance but also leave your general-purpose servers with more bandwidth to handle critical business operations.
4.2 Managing network attached storage:
The fundamental goal of our network-attached storage research is to enable scalable storage systems while minimizing the file-manager bottleneck. One solution is to use homogeneous clusters of trusted clients that issue unchecked commands to shared storage. However, few environments can tolerate such weak integrity and security guarantees. Even if only for accident prevention, file protections and data/metadata boundaries should be checked by a small number of administrator-controlled file-manager machines. To provide this more appropriate degree of integrity and security, we identify two basic architectures for direct network-attached storage.
The first, NetSCSI, makes minimal changes to the hardware and software of SCSI disks, while allowing NetSCSI disks to send data directly to clients, similar to the support for third-party transfers already supported by SCSI. Drives' efficient data-transfer engines ensure that each drive's sustained bandwidth is available to clients. Further, by eliminating file management from the data path, manager workload per active client decreases. Cryptographic hashes and encryption, verified by the NetSCSI disks, can provide for integrity and privacy. The principal limitation of NetSCSI is that the file manager is still involved in each storage access; it translates namespaces and sets up the third-party transfer on each request.
Figure 14: Network-attached secure disk
The second architecture, Network-Attached Secure Disks (NASD), relaxes the constraint of minimal change from the existing SCSI interface. The NASD architecture provides a command interface that reduces the number of client-storage interactions that must be relayed through the file manager, thus avoiding a file manager bottleneck without integrating file system policy into the disk. In NASD, data-intensive operations (e.g., reads and writes) go straight to the disk, while less-common policy making operations (e.g., namespace and access control manipulations) go to the file manager.
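The NASD split described above can be sketched as a simple request router: data-intensive operations go straight to the drive, while policy and namespace operations are relayed to the file manager. The operation names and sets below are illustrative stand-ins, not the actual NASD command set.

```python
# Illustrative routing of storage operations in a NASD-style split.
DRIVE_OPS = {"read", "write"}                     # data path: bypass the manager
MANAGER_OPS = {"lookup", "create", "set_acl"}     # policy/namespace decisions

def route(op):
    """Decide which component services a given operation."""
    if op in DRIVE_OPS:
        return "drive"
    if op in MANAGER_OPS:
        return "file_manager"
    raise ValueError(f"unknown operation: {op}")

assert route("read") == "drive"
assert route("set_acl") == "file_manager"
```

The point of the split is that the common, bandwidth-heavy path never touches the file manager, which removes it as a bottleneck without moving file-system policy into the disk.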
4.3 Network support for NAS:
The success of the NASD architecture depends critically on its networking environment. Clearly, support for high-bandwidth, large data transfers is essential. Unfortunately, traditional client-server communication paths do not support efficient network transport. For example, measurements of our NASD prototype drive (running DCE/RPC over UDP/IP) show that non-cached read or write requests can easily be serviced by modest hardware. However, requests that hit in the drive cache incur order-of-magnitude increases in service time due to the NASD drive and client both spending up to 97% of their time in the network stack [Gibson98]. This problem with traditional protocol stacks forces network-attached storage to explore alternative techniques for delivering scalable bandwidth to client applications. Several other network issues are also important to consider in a NASD environment. These include:
File system traffic patterns:
Network file access entails significant small-message traffic: attribute manipulation, command and status, small file access, and metadata access. In our NASD prototype, modest-size messages (between 100 and 300 bytes) account for over 75% of the total messaging in the storage system. Network protocols that impose significant connection overhead and long code paths will be a primary determinant of cached storage response time and overall storage system scalability.
Disk drives are price-conscious, resource-constrained devices. Current drives contain only 1 Mbyte of RAM but are capable of streaming at 25 Mbytes/sec and bursting at 100 Mbytes/sec. This efficiency is achieved with hardware-based network support and streamlined protocols. However, network trends are increasing the resource requirements and complexity of drives. For example, Fibre Channel's rich set of service classes requires significant support that is unnecessary for many storage applications; a much smaller subset can be deployed to meet storage's essential needs [HP99].
Cluster SAN, LAN and WAN:
High-performance clusters will be based on commodity system area networks (SANs), which support protocols optimized for high bandwidth and low latency. Such SANs are a natural fit for the needs of scalable storage in a cluster environment. LAN-based workgroups, however, typically access distributed file systems using Internet-based protocols (e.g., RPC/UDP/IP), forcing them to suffer significant protocol-processing overhead. Incorporating client LANs into a cluster SAN can overcome this problem. Specifically, using the same media and link layers for both cluster and workgroup storage will increase SANs' commodity advantages, enable thin protocols, and improve support for small messages. When necessary, remote access to cooperating workgroups' storage can use optimized servers or gateway protocol converters.
Storage requires reliable delivery of data for most applications. Ideally, the network should provide reliability between client and storage. However, in most cluster SANs, errors are rare enough that the cost of complex hardware-based error handling is outweighed by the flexibility of efficiently exposing infrequent errors to higher-level software or firmware (e.g., Fibre Channel Class 3). Essential to efficient software error handling is hardware support to quickly identify errors (e.g., hardware checksum) and support for network problems that endpoint software cannot solve alone (e.g., switch buffer overflow). This provides both the efficiency and flexibility applications need for handling errors.
Large storage systems are designed to detect and handle drive failures (e.g., RAID). Client-based RAID over network-attached storage requires that the system expose both drive and network failures. Efficient recovery requires rapid remapping of communication channels to alternative storage devices.
On-site and off-site redundancy is widely used for ensuring data integrity and availability. Current solutions either force clients to transmit copies of data to each storage replica or require one replica to transmit the data to other replicas. This creates significant bandwidth requirements in the client memory system, the client network interface, and/or the storage node responsible for replication. Network support for multicast, either in a NIC that automatically replicates data or in the network fabric, can significantly reduce bandwidth requirements.
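The bandwidth saving multicast offers can be shown with back-of-the-envelope arithmetic. The 100 MB write size and three-replica count below are assumed example figures, not measurements.

```python
# Client transmit bandwidth needed to replicate one write, comparing
# client-side unicast replication against NIC/fabric multicast.
write_mb = 100   # size of the write being replicated (assumed)
replicas = 3     # number of storage replicas (assumed)

unicast_client_tx = write_mb * replicas   # client sends every copy itself
multicast_client_tx = write_mb            # NIC or fabric fans the data out

assert unicast_client_tx == 300
assert multicast_client_tx == 100
```

With multicast, the client's memory system and network interface carry the data once regardless of the replica count; without it, their load grows linearly with the number of replicas.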
SAN vs. NAS
5.1 Storage area network vs. network-attached storage:
Some people confuse NAS with storage area networks (SANs); after all, NAS is SAN spelled backwards. The technologies also share a number of common attributes. Both provide optimal consolidation, centralized data storage, and efficient file access. Both allow you to share storage among a number of hosts, support multiple different operating systems at the same time, and separate storage from the application server. In addition, both can provide high data availability and can ensure integrity with redundant components and redundant arrays of independent disks (RAID).
Others may view NAS as competitive with SAN, when in fact the two can work quite well in tandem. Their differences: NAS and SAN represent two different storage technologies, and they attach to your network in very different places. NAS is a defined product that sits between your application server and your file system (see Figure 1). SAN is a defined architecture that sits between your file system and your underlying physical storage (see Figure 2). A SAN is its own network, connecting all storage and all servers. For these reasons, each lends itself to supporting the storage needs of a different area of your business.
5.2 NAS – Think network user:
NAS is network-centric. Typically used for client storage consolidation on a LAN, NAS is a preferred storage solution for enabling clients to access files quickly and directly, eliminating the bottlenecks users often encounter when accessing files from a general-purpose server. NAS provides security and performs all file and storage services through standard network protocols, using TCP/IP for data transfer, Ethernet and Gigabit Ethernet for media access, and CIFS, HTTP, and NFS for remote file service. In addition, NAS can serve both UNIX and Microsoft Windows users seamlessly, sharing the same data between the different architectures. For client users, NAS is the technology of choice for providing storage with unencumbered access to files.
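From a client's point of view, this protocol stack is invisible: once an NFS or CIFS share is mounted, ordinary file I/O goes over the network. A minimal sketch, assuming a hypothetical mount point (here defaulting to `/tmp` so the snippet runs anywhere):

```python
import os

# Hypothetical NAS mount point; on a real client this would be an
# NFS or CIFS share, e.g. mounted with `mount -t nfs filer:/vol/home /mnt/nas`.
NAS_MOUNT = os.environ.get("NAS_MOUNT", "/tmp")

path = os.path.join(NAS_MOUNT, "report.txt")
with open(path, "w") as f:          # ordinary file I/O; the NFS/CIFS
    f.write("quarterly numbers")    # client handles the network protocol
with open(path) as f:
    assert f.read() == "quarterly numbers"
os.remove(path)
```

This transparency is exactly why NAS can serve UNIX and Windows clients against the same data: each speaks its native file protocol to the filer.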
Although NAS trades some performance for manageability and simplicity, it is by no means a slow technology. Gigabit Ethernet allows NAS to scale to high performance and low latency, making it possible to support a myriad of clients through a single interface. Many NAS devices support multiple interfaces and can serve multiple networks at the same time. As networks evolve, gain speed, and achieve node-to-node latency approaching that of locally attached storage, NAS will become a real option even for applications that demand high performance.
5.3 SAN – Think back-end / computer room storage:
A SAN is data-centric: a network dedicated to the storage of data. Unlike NAS, a SAN is separate from the traditional LAN or messaging network, so it avoids the standard network traffic that often inhibits performance. Fibre Channel-based SANs further enhance performance and decrease latency by combining the advantages of I/O channels with a distinct, dedicated network.
SANs employ gateways, switches, and routers to facilitate data movement between heterogeneous server and storage environments. This allows you to bring both network connectivity and the potential for semi-remote storage (up to 10 km distances are feasible) to your storage management efforts. SAN architecture is optimal for transferring storage blocks. Inside the computer room, a SAN is often the preferred choice for addressing issues of bandwidth and data accessibility as well as for handling consolidations.
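"Optimal for transferring storage blocks" means the client addresses raw blocks at byte offsets rather than files. A minimal sketch using an ordinary temp file as a stand-in for a SAN LUN (a real client would open a device such as `/dev/sdb`, which the Fibre Channel HBA presents as a local block device):

```python
import os
import tempfile

BLOCK = 512  # classic SCSI logical block size

# Temp file standing in for a LUN exposed over the SAN fabric.
fd, lun = tempfile.mkstemp()
os.pwrite(fd, b"\x00" * BLOCK * 8, 0)        # zero an 8-block volume
os.pwrite(fd, b"X" * BLOCK, 3 * BLOCK)       # write logical block 3

data = os.pread(fd, BLOCK, 3 * BLOCK)        # read block 3 back
assert data == b"X" * BLOCK

os.close(fd)
os.remove(lun)
```

The file system, if any, lives on the server side; the SAN itself only moves addressed blocks, which is what makes it well suited to bulk transfers and consolidation.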
Due to their fundamentally different technologies and purposes, you need not choose between NAS and SAN; either or both can be used to address your storage needs. In fact, the lines between the two may blur over time, according to Evaluator Group, Inc. analysts. For example, you may eventually back up your NAS devices with your SAN, or attach NAS devices directly to the SAN to allow immediate, bottleneck-free access to storage.
5.4 The difference table:
6. Applications of SAN and NAS
6.1 Applications of SAN:
6.1.1 Data mining on PC clusters
Disk-to-disk copying is far more efficient between PCs connected through a SAN than between PCs connected only through a LAN, because the data moves over the dedicated storage network rather than through each host's network file-sharing stack.
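The copy itself is just a streaming transfer of large chunks. A minimal sketch, using ordinary temp files as stand-ins for two SAN volumes:

```python
import os
import tempfile

def disk_to_disk_copy(src: str, dst: str, chunk: int = 1 << 20) -> None:
    """Stream data in 1 MiB chunks; over a SAN both endpoints are block
    devices on the dedicated fabric, so no LAN file protocol is involved."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        while buf := s.read(chunk):
            d.write(buf)

# Demo: temp files standing in for source and destination volumes.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(3 * 1024 * 1024))
src.close()
dst = src.name + ".copy"

disk_to_disk_copy(src.name, dst)
with open(src.name, "rb") as a, open(dst, "rb") as b:
    assert a.read() == b.read()

os.remove(src.name)
os.remove(dst)
```

On a LAN, every one of those chunks would instead traverse the file server and the shared messaging network, which is the bottleneck the SAN removes.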
6.1.2 Distributed file systems
A distributed file system built on a SAN has many advantages over historical server-attached storage: data is transferred directly between clients and storage devices, eliminating the file-server bottleneck.
6.2 Applications of NAS:
6.2.1 Integrity and security management
7. Conclusion
The emergence of storage networking provides very high-performance networks. The major components of storage networking are SAN and NAS. These components provide:
- Reliable data transfer
- Centralized management of storage
- Simplified addition of file-sharing capacity
References
1. Garth A. Gibson and Rodney Van Meter, "Network Attached Storage Architecture".
2. David F. Nagle, Gregory R. Ganger, Jeff Butler, Garth Goodson, et al., "Network Support for Network-Attached Storage", 18-20 August 1999.
3. Masato Oguchi and Masaru Kitsuregawa, "Data Mining on PC Cluster Connected with Storage Area Network".
4. "Distributed File Systems for Storage Area Networks".
5. Auspex Systems, "NAS-SAN Convergence Today", 2002.
Contents
1. Introduction
2. Storage networking
2.1 Storage and network – at the beginning
2.2 Introduction to computer storage
2.3 Storage networking
2.4 Storage networking fundamentals
2.5 Architecture basics
3. SAN
3.2 The fibre channel SAN environment
4. NAS
4.2 Managing network attached storage
4.3 Network support for NAS
5. SAN vs NAS
5.1 Storage area network vs network-attached storage
5.2 NAS – Think network user
5.3 SAN – Think back-end / computer room
5.4 The difference table
6. Applications of SAN and NAS
6.1 Applications of SAN
6.2 Applications of NAS
7. Conclusion