Target audience: Tech Buyer | Publication date: August 2022 | Document type: IDC Perspective | Document number: US49490622

Storage Considerations When Using Accelerated Compute in Enterprise Artificial Intelligence Environments

By:  Eric Burgener



This IDC Perspective discusses how artificial intelligence (AI) workloads in the enterprise introduce new storage requirements. As AI-driven workloads become more widely deployed in the enterprise, standard access methods like NFS and SMB will begin to impose performance limitations for some stages of the AI data pipeline. Enterprises are already starting to look at parallel scale-out file system platforms (which had traditionally been sold primarily in HPC markets) and their intelligent, highly parallel, POSIX-compliant clients to meet these performance requirements. Increasingly, four different types of enterprise storage vendors (clustered scale-up file, distributed scale-out file, parallel scale-out file, and object-based storage) will be competing to handle customers' unstructured data storage requirements, and IT managers need to understand the pros and cons of each approach before making a storage purchase decision. Accelerated compute (i.e., GPUs) is also often required for AI workloads, and achieving efficient integration with storage to optimize compute resource utilization introduces additional considerations into the storage purchase decision.

"Artificial intelligence workloads are penetrating the enterprise and driving new requirements for performance, availability, and scalability in the underlying storage infrastructure," said Eric Burgener, research vice president, Infrastructure Systems, Platforms, and Technologies Group, IDC. "To effectively drive these big data analytics–oriented workloads, enterprises will need new storage architectures that offer massive parallelism and increased infrastructure efficiencies, and they are increasingly looking outside of traditional unstructured data storage architectures to get them."
