2:10-5:00 PM
AIML-302-1: Using AI/ML for Flash Performance Scaling, Part 2 (AI/Machine Learning Track)
Paper Title: Flash Memory, Storage and Data Challenges for Production Machine Learning

Paper Abstract: Machine Learning is one of the most promising uses of massive data. As industries race to monetize the insights hidden in their data, new challenges emerge for storage. The performance needs of AI workloads have been known for some time, and Flash, for example, has been successfully applied to mitigate some of these challenges. As AI usage grows and becomes more dynamic and distributed (such as at the edge), these performance requirements continue to expand and are now coupled with other needs such as power efficiency. Additionally, as AI moves to production, other concerns are emerging, such as regulatory requirements and business needs to demonstrate AI trustworthiness while managing risk. These requirements generate new data challenges, from security to provenance and governance. This talk will describe recent trends and focus areas in AI (such as productization, trust, and distributed execution) and how they create challenges and opportunities for storage and data management systems. The talk will also cover how storage systems are used in production AI workflows and how innovations in storage and data management can impact and improve the production AI lifecycle.

Paper Author: Nisha Talagala, CEO, Pyxeda AI

Author Bio: Nisha Talagala is the CEO and founder of Pyxeda AI. Previously, Nisha co-founded ParallelM, which pioneered the MLOps practice of managing machine learning in production. Nisha is a recognized leader in the operational machine learning space, having also driven the USENIX Operational ML Conference, the first industry/academic conference on production AI/ML. Nisha was previously a Fellow at SanDisk and Fellow/Lead Architect at Fusion-io, where she drove innovation in non-volatile memory technologies and applications. Nisha has more than 20 years of expertise in software development, distributed systems, technical strategy, and product leadership. She worked as the technology lead for server flash at Intel, where she led server platform non-volatile memory technology development, storage-memory convergence, and partnerships. Prior to Intel, Nisha was the CTO of Gear6, where she designed and built clustered computing caches for high-performance I/O environments. Nisha earned her PhD at UC Berkeley, where she did research on clusters and distributed systems. Nisha holds 63 patents in distributed systems and software, is a frequent speaker at industry and academic events, and is a contributing writer to several online publications.