8:30-10:50 AM
AIML-301-1: Using AI/ML for Flash Performance Scaling, Part 1 (AI/Machine Learning Track)
Paper Title: Exploring the Impact of System Storage on AI/ML Workloads via MLPerf

Paper Abstract: With machine learning and deep learning becoming more prevalent in the datacenter, optimizing the system infrastructure has never been more important. This session aims to answer questions about how the system memory and storage of AI servers need to be architected to provide optimal training performance. SATA vs. NVMe SSDs? High vs. low memory density? How does the architecture impact the data science pipeline? The data presented uses the recently launched MLPerf benchmark suite to compare different memory and storage configurations in a standard Nvidia GPU AI server.
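As a taste of the methodology, the sketch below shows one way to sample aggregate disk throughput while a training benchmark (such as an MLPerf reference workload) runs on the same machine. It is not from the session; the script, its function name, and its parameters are illustrative assumptions, using Python's psutil package.

    import time
    import psutil

    def sample_disk_throughput(duration_s=60.0, interval_s=1.0):
        """Sample aggregate disk read/write throughput (MB/s).

        Run alongside a training job to see how well a given storage
        configuration keeps the GPUs fed with training data.
        """
        samples = []
        prev = psutil.disk_io_counters()
        steps = int(duration_s / interval_s)
        for _ in range(steps):
            time.sleep(interval_s)
            cur = psutil.disk_io_counters()
            # Convert byte deltas over the interval into MB/s rates.
            read_mb_s = (cur.read_bytes - prev.read_bytes) / interval_s / 1e6
            write_mb_s = (cur.write_bytes - prev.write_bytes) / interval_s / 1e6
            samples.append((read_mb_s, write_mb_s))
            prev = cur
        return samples

    if __name__ == "__main__":
        # Take ten one-second samples while the benchmark runs elsewhere.
        for read_mb, write_mb in sample_disk_throughput(duration_s=10.0):
            print(f"read {read_mb:8.1f} MB/s  write {write_mb:8.1f} MB/s")

Comparing such traces across configurations (say, SATA vs. NVMe) makes storage-bound phases of the training pipeline easy to spot.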

Paper Author: Wes Vaske, Principal Storage Solutions Engineer, Micron Technology

Author Bio: Wes Vaske is a Principal Solutions Engineer on the Storage Solutions Engineering team at Micron Technology in Austin, TX. He analyzes application performance for various workloads on enterprise systems, such as databases and software-defined storage solutions. His current focus is analyzing the performance of data science systems, primarily model training and inference systems. Before Micron, Wes was an Oracle Systems Engineer at Dell in the Global Solutions Engineering group, analyzing the performance and design of Oracle Database systems.