Amazon Web Services (AWS) has announced that Amazon Elastic File System (Amazon EFS), a new, fully managed service that makes it easy to set up and scale file storage in the AWS Cloud, is now generally available to all customers.
With a few clicks in the AWS Management Console, customers can use Amazon EFS to create file systems that are accessible to multiple Amazon Elastic Compute Cloud (Amazon EC2) instances via the Network File System (NFS) protocol. Amazon EFS scales automatically, without requiring customers to provision storage or throughput, enabling file systems to grow seamlessly to petabyte scale while supporting thousands of concurrent client connections with consistent performance.
Amazon EFS is designed to support a broad range of file workloads – from big data analytics, media processing, and genomics analysis that are massively parallelized and require high levels of throughput, to latency-sensitive use cases such as content management, home directory storage, and web serving. Amazon EFS is highly available and durable, redundantly storing each file system object across multiple Availability Zones. There is no minimum fee or setup cost, and Amazon EFS customers pay only for the storage they use.
Today, companies of all sizes are moving their critical workloads to the AWS Cloud. Many of these workloads depend on network-attached storage. Traditionally, it has been costly and time consuming to operate shared file systems because file growth is unpredictable, procurement times are long, and monitoring and patch management are administrative burdens. Now, with Amazon EFS, customers can create and use shared file systems that are simple, scalable, and reliable.
Easy setup
Amazon EFS is easy to set up and use, and doesn’t require customers to provision or manage file system software or storage hardware. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system semantics, allowing customers to seamlessly integrate Amazon EFS with their existing applications and tools. Amazon EFS is designed to provide the throughput, Input/Output Operations per Second (IOPS), and low latency that file workloads require. Every file system can burst to at least 100 MB per second, and file systems larger than 1 TB can burst to higher throughput as file system capacity grows.
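As a rough sketch of what mounting looks like in practice, the commands below show an EFS file system being mounted on a Linux EC2 instance over NFSv4.1. The file system ID (`fs-12345678`), Region, and mount point are placeholders, not values from this announcement; substitute the DNS name shown for your file system in the EFS console.

```shell
# Install an NFS client if one is not already present
# (Amazon Linux shown; the package name varies by distribution).
sudo yum install -y nfs-utils

# Create a local directory to serve as the mount point.
sudo mkdir -p /mnt/efs

# Mount the file system over NFSv4.1. The hostname is a placeholder:
# replace it with your file system's DNS name from the EFS console.
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

Once mounted, the file system behaves like any local POSIX directory, which is what lets existing applications and tools use it without modification.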
“As customers continue to move more and more of their IT infrastructure to AWS, they’ve asked for a shared file storage service with the elasticity, simplicity, scalability, and on-demand pricing they enjoy with our existing object (Amazon S3), block (Amazon EBS), and archive (Amazon Glacier) storage services,” said Peter DeSantis, Vice President, Compute Services, AWS.
“Initially, our customers most passionately asking for a file system were trying to solve for throughput-heavy use cases like data analytics applications, large-scale processing workloads, and many forms of content and web serving. Customers were excited about Amazon EFS’s performance for those workloads, and pretty soon they were asking if we could expand Amazon EFS to work excellently for more latency-sensitive and metadata-heavy workloads like highly dynamic web applications. That’s what we’ve been working on for the last few months and we’re excited to release it to customers today.”
Customers can launch Amazon EFS using the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. Amazon EFS is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions and will expand to additional Regions in the coming months.
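As an illustration of the CLI path mentioned above, the following is a minimal sketch of creating a file system and making it mountable from a VPC subnet. The creation token, file system ID, subnet ID, and security group ID are all placeholder values, and the commands assume the AWS CLI is configured with appropriate credentials and a default Region.

```shell
# Create a new file system; the creation token makes the request idempotent,
# so retrying it will not create a duplicate file system.
aws efs create-file-system --creation-token my-first-efs

# Create a mount target so EC2 instances in the subnet can mount the
# file system over NFS. All IDs below are placeholders.
aws efs create-mount-target \
    --file-system-id fs-12345678 \
    --subnet-id subnet-abcd1234 \
    --security-groups sg-abcd1234
```

A mount target is created per Availability Zone; instances then mount the file system using its DNS name, as with any NFS server.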