Program at a Glance

07:30 AM to 07:00 PM
Open REG: Registration
Main Lobby (Santa Clara Convention Center, First Floor)
Track: General Events
General Event Description:
Registration for the FMS Conference, August 5-7, 2025.
08:00 AM to 08:30 AM
Open BRK: Tuesday Continental Breakfast
Foyer - Main Lobby (Santa Clara Convention Center, First Floor)
Track: General Events
General Event Description:
Description Not Available
08:30 AM to 09:35 AM
PRO AIML-101-1: AI Techniques
Ballroom A (Santa Clara Convention Center, First Floor)
Track: AI and ML Applications
Organizer + Chairperson:
David McIntyre, Director Product Planning, Samsung Electronics
David McIntyre focuses on computational storage acceleration solutions development and strategic business enablement for cloud to edge applications including AI inference/video analytics, database processing and blockchain networks. He has held senior management positions with IBM, Samsung, Xilinx, Intel (formerly Altera) and at Silicon Valley startups. He has consulted for institutional investors including Fidelity, Goldman Sachs and UBS. David is a frequent presenter and chairperson at the Flash Memory Summit and other technical conferences including SNIA.
Presenters:
Aabha Mishra, Senior Engineer, Sandisk
Presentation Title:
Speeding Up Nearest Neighbor Search: SSD-Resident Hardware Accelerator for RAG
Presentation Abstract:
Approximate Nearest Neighbor Search (ANNS) is vital for AI and RAG systems. State-of-the-art graph-based indexes offer the best performance but require large amounts of RAM, which is unsustainable as datasets grow. Recent innovations like DiskANN and LM-DiskANN shift vector and index storage to SSDs, cutting memory costs while maintaining efficiency. We have identified distance calculations as the bottleneck in these systems, consuming up to 63% of index build time and 37% of search time. We propose a novel SSD-resident hardware accelerator to offload a portion of these computations from the CPU, ensuring no CPU idle time while significantly improving latency. Learn how to design a hardware accelerator that runs 10x faster than the CPU, enabling a minimum of 90% offload and reducing search times by ~33%. Our SSD-integrated hardware accelerator design will boost ANNS efficiency, lower CPU load, and allow expensive CPU resources to be diverted to other non-repetitive processes, driving ultra cost-effective large-scale AI applications.
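A rough, Amdahl-style check of the offload arithmetic (the 37%, 90%, and 10x figures come from the abstract; the simple serial-fraction model in the Python sketch below is an illustrative assumption, not the authors' analysis):

    # Amdahl-style estimate of search-time reduction when distance
    # calculations are offloaded to an SSD-resident accelerator.
    dist_fraction = 0.37   # share of search time spent in distance calculations
    offload_share = 0.90   # minimum fraction of those calculations offloaded
    speedup = 10.0         # accelerator throughput relative to the CPU

    remaining = ((1.0 - dist_fraction)
                 + dist_fraction * (1.0 - offload_share)
                 + dist_fraction * offload_share / speedup)

    print(f"estimated search time: {remaining:.2f}x of baseline "
          f"({(1.0 - remaining) * 100:.0f}% reduction)")
    # -> about 0.70x of baseline, i.e. roughly a 30% reduction, in line with
    #    the ~33% quoted once offloaded work overlaps with remaining CPU work.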
Author Bio:
Aabha Mishra is a Senior Engineer at SanDisk, working at the intersection of machine learning, storage systems, and hardware acceleration. She holds a bachelor’s degree in Computer Science, Economics, and Entrepreneurship from the University of Wisconsin–Madison, class of 2023. Her experience spans software engineering at Zendesk, consulting in health tech, and working on AgriTech and automation projects in West Africa. Aabha is currently focused on AI inference optimization, firmware development for SanDisk's cutting-edge products, and optimization of disk-based vector indexing systems for scalable, efficient performance.
Rohit Mittal, Head and Senior Director of AI Products and Technologies, Auradine
Rohit Mittal is the Head and Senior Director of AI Products and Technologies at Auradine, where he leads the development of cutting-edge AI infrastructure solutions. Prior to Auradine, he spent five years at Google Cloud driving systems and silicon architecture for large-scale AI and ML infrastructure. Earlier in his career, he co-founded BioBit Cloud Inc. at Stanford University, where he served as CPTO, building cloud-native platforms for bioinformatics. Rohit brings a unique blend of startup agility and hyperscale systems expertise to the future of AI computing.
Vishwas Saxena, Senior Technologist, Firmware Engineering, Sandisk
Presentation Title:
Speeding Up Nearest Neighbor Search: SSD-Resident Hardware Accelerator for RAG
Presentation Abstract:
Approximate Nearest Neighbor Search (ANNS) is vital for AI and RAG systems. State-of-the-art graph-based indexes offer the best performance but require large amounts of RAM, which is unsustainable as datasets grow. Recent innovations like DiskANN and LM-DiskANN shift vector and index storage to SSDs, cutting memory costs while maintaining efficiency. We have identified distance calculations as the bottleneck in these systems, consuming up to 63% of index build time and 37% of search time. We propose a novel SSD-resident hardware accelerator to offload a portion of these computations from the CPU, ensuring no CPU idle time while significantly improving latency. Learn how to design a hardware accelerator that runs 10x faster than the CPU, enabling a minimum of 90% offload and reducing search times by ~33%. Our SSD-integrated hardware accelerator design will boost ANNS efficiency, lower CPU load, and allow expensive CPU resources to be diverted to other non-repetitive processes, driving ultra cost-effective large-scale AI applications.
Author Bio:
Vishwas Saxena is a Senior Technologist in Firmware Engineering at Sandisk/Western Digital, where he has spearheaded innovative products across Machine Learning, Security, Blockchain, Networking, and Wireless technologies. His key contributions include the WD Crypto HW Wallet, Encrypted Content Search, Wireless Storage Drives, Edge Analytics-based Video Surveillance Systems, and Semantic Image Retrieval. With over 24 years of industry experience, he holds 30+ patents and trade secrets along with 12 publications. Vishwas earned his Master’s in Machine Learning & AI from Liverpool John Moores University (2021) and a bachelor’s in Computer Engineering from Netaji Subhas Institute of Technology (2000).
Prasad Venkatachar, AI Solutions Director, VAST DATA
Presentation Title:
Large Language Model Quantization and Optimization
Presentation Abstract:
Large Language Models (LLMs) have revolutionized the field of natural language processing, enabling applications such as sentiment analysis, classification, and language translation. However, growing model sizes and memory requirements make it increasingly complex and challenging for solution architects to design and configure systems that meet enterprise needs. This session surveys various model quantization and optimization techniques.
Author Bio:
Prasad Venkatachar is the Director of AI Solutions Engineering at VAST Data, focused on building AI solutions in collaboration with AI partners such as NVIDIA. He is an IEEE Senior Member and BCS Fellow, serves on the Conference Advisory Board for the Future of Memory and Storage and on the Google Databases Partner Advisory board, and has served as a Lenovo Technology Innovation panel member and a Microsoft Data and AI Partner Advisory member. As a subject matter expert in the data and AI field, he has served Fortune 500 enterprise customers, delivering business value outcomes for data center and cloud deployments. He holds multiple AI/GenAI certifications from Google, NVIDIA, and deep learning programs, along with cloud (AWS/Azure/GCP/IBM) and database (Oracle/DB2/Azure Data) certifications. He is a regular speaker at industry conferences including Microsoft Ignite, Oracle OpenWorld, and Gartner conferences, as well as developer conferences such as PASS Summit, Oracle user groups, Percona Live, SNIA SDC, and the Future of Memory & Storage. Prior to VAST Data he worked at Pliops, Lenovo, and Hewlett Packard Enterprise.
Nilesh Shah, VP Business Development, ZeroPoint Technologies
Presentation Title:
Optimizing Foundational Models: Hardware-Accelerated Memory Compression
Presentation Abstract:
Efficient memory usage is essential as foundational models scale in complexity. Our hardware-accelerated memory compression algorithm enhances HBM and LPDDR in AI accelerators, compressing and decompressing AI workloads within just a few clock cycles. Achieving a 1.5X compression ratio on transformer models like LLAMA3, the algorithm ensures no loss in model accuracy. It employs lossless compression at cache line granularity and can be integrated within any memory controller, or positioned near SRAM or DMA engines for optimal performance. This capability allows it to effectively handle models that have already undergone lossy compression and to improve the functionality of Key-Value caches, embeddings, and VectorDB operations. Our ongoing development of hybrid lossy algorithms continues to push the boundaries of memory efficiency. This technological advancement is crucial for the scalable deployment of foundational AI models, transforming memory and storage infrastructure to meet advanced computational demands.
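As a toy illustration of lossless compression at cache-line granularity, the sketch below compresses 64-byte lines independently and reports the achieved ratio. It uses Python's zlib purely as a stand-in; ZeroPoint's hardware algorithm is different and operates within a few clock cycles:

    import zlib

    LINE = 64  # bytes per cache line

    def compression_ratio(buf: bytes) -> float:
        """Compress each 64-byte line independently and return the input/output ratio."""
        total_in = total_out = 0
        for off in range(0, len(buf), LINE):
            line = buf[off:off + LINE]
            out = zlib.compress(line)
            total_in += len(line)
            total_out += min(len(out), len(line))  # never store an expanded line
        return total_in / total_out

    # Structured data (e.g. weights with many repeated bytes) compresses well;
    # random or already lossily-compressed data may not.
    structured = bytes([0, 0, 0, 1] * 4096)
    print(f"ratio on structured data: {compression_ratio(structured):.2f}x")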
Author Bio:
Nilesh Shah is VP of Business Development at ZeroPoint Technologies. He participates in and contributes regularly to standards bodies such as SNIA, OCP, JEDEC, RISC-V, and the CXL Consortium. He is regularly invited to speak at conferences, has led multiple panels, and is featured in analyst and press interviews focused on AI and memory technologies. Previously, Nilesh led strategic planning at Intel Corporation's Non-Volatile Memory Solutions Group, where he was responsible for product planning and launch of the data center SSD products and pathfinding innovations. Nilesh advises GPU and memory chiplet startups.
Adam Manzanares, Director of Software Strategy & Development, Samsung
Presentation Title:
RAG pipeline optimization leveraging HC SSD and CXL memory
Presentation Abstract:
In this session we will detail the performance and potential TCO benefits of leveraging high-capacity (HC) SSDs combined with the memory expansion capabilities of CXL in a RAG pipeline. The storage demands of AI are driving higher SSD capacity points which necessitates SSD controller evolution that can be optimized by software ecosystem changes. Specifically, the SSD controller indirection unit (IU) is being increased, and we will showcase how IU aware software impacts RAG performance. In addition, CXL is a fundamental technology that enables memory expansion including the use of memory, storage, and fabric capabilities. This presentation will also cover scenarios where we combine high-capacity SSDs along with CXL devices in the RAG pipeline. We will demonstrate that hardware/software co-design is the key to unlock the potential of the evolving memory and storage hardware in the context of AI workloads.
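For readers unfamiliar with the indirection unit (IU), the sketch below estimates the extra physical write traffic caused by writes that are not aligned to a larger IU; the write and IU sizes are illustrative assumptions, not figures from this presentation:

    # Rough estimate of read-modify-write overhead when a host write is not a
    # multiple of the SSD's indirection unit (IU). Sizes are assumptions.

    def write_amplification(write_bytes: int, iu_bytes: int) -> float:
        """Bytes physically written per byte the host asked to write."""
        ius_touched = -(-write_bytes // iu_bytes)   # ceiling division
        return (ius_touched * iu_bytes) / write_bytes

    for iu_kib in (4, 16, 64):
        factor = write_amplification(write_bytes=8 * 1024, iu_bytes=iu_kib * 1024)
        print(f"8 KiB write with a {iu_kib} KiB IU: {factor:.1f}x physical writes")

    # IU-aware software batches and aligns writes (for example, RAG index
    # segments) to whole IUs so this factor stays close to 1.0 on
    # high-capacity SSDs with large IUs.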
Author Bio:
Adam Manzanares is a director of software engineering and strategy at Samsung Electronics focused on ecosystem enablement for emerging devices. He has worked in the storage/memory industry for over a decade and currently focuses on data placement, QLC, and CXL ecosystems' enablement.
Presentation Session Description:
This session brings together cutting-edge innovations in AI infrastructure, focusing on optimizing memory and storage solutions to meet the demands of increasingly complex AI workloads. A common theme across the presentations is the need for efficient resource utilization to support the scalability of AI applications, particularly in systems like Approximate Nearest Neighbor Search (ANNS) and Large Language Models (LLMs). Presenters will explore the benefits of shifting data storage to SSDs and leveraging hardware accelerators to reduce CPU load, thus enhancing system efficiency and cost-effectiveness. The integration of high-capacity SSDs and CXL memory expansion is highlighted as a pivotal development, enabling the optimization of Retrieval Augmented Generation (RAG) pipelines and driving total cost of ownership (TCO) benefits. Additionally, advancements in memory compression algorithms are showcased, offering transformative approaches for handling large-scale AI models without compromising accuracy. This session underscores the critical role of hardware-software co-design in unlocking the full potential of AI infrastructure, providing attendees with insights into state-of-the-art techniques for optimizing AI system performance and scalability.
PRO AUTO-101-1: Software Defined Vehicles
Ballroom B (Santa Clara Convention Center, First Floor)
Track: Automotive Applications
Organizer:
Bill Gervasi, Principal Memory Solutions Architect, Monolithic Power Systems
Mr. Gervasi has nearly 5 decades of experience in high-speed memory subsystem definition, design, and product development. He piloted the definition of Double Data Rate SDRAM from its earliest inception, authoring the first standard specification, and created the Automotive SSD standard. With MPS, Bill is driving some of the memory and storage system management mechanisms for a post-quantum world. He received the JEDEC Technical Excellence Award, their highest honor, in 2020.
Presenters:
Junjian Zhao, Sr. Manager, Technical Marketing, Monolithic Power Systems
Presentation Title:
Emerging Trends in Automotive Fabrics and Data Security
Presentation Abstract:
The line between data centers and automobiles continues to blur. This talk explores the trends in automotive fabrics tying together a wild array of sensors, displays, processors, memory, and storage. Another data center trend that may actually appear first in cars is the need for post-quantum security algorithms, preventing malicious intruders from steering our cars off bridges.
Author Bio:
Mr. Zhao joined MPS in 2016. He is in charge of IC definition, development, and customer support for their automotive product line, focusing on power management solutions for automotive ADAS and lighting. Mr. Zhao holds a Ph.D. in Electrical Engineering from the University of Wisconsin-Madison.
Elaine Lee, Product Marketing Manager, Silicon Motion
Presentation Title:
Trending for product safety features in the Automotive field
Presentation Abstract:
Session #1: Important quality systems in the automotive field and how to integrate them. A quality system isn’t just a process; it’s the engine driving every automotive maker. ISO 9001 is an important baseline in every company, because product quality must be assured with every delivery. In the automotive field, however, basic quality requirements alone are not enough to satisfy customers and market trends. New management systems, such as ISO 26262, ISO 21434, VDA 6.3, and ASPICE, are springing up in the market. Implementing these standards within existing processes, and optimizing how they interact so they become the company’s management process, can be the way to win business.
Session #2: How to make products safer. Autonomous cars rely entirely on the cooperation between the software and hardware in the vehicle, and enormous damage can occur when a product is designed unsafely. Designing a safety-compliant product is the most complete way to fulfill automotive safety requirements, but it takes time and effort. Comprehensive safety mechanisms must therefore be designed for hardware currently in effect, so that failures do not endanger personal safety.
Author Bio:
Elaine Lee is an Automotive Project PM at Silicon Motion. She holds a master’s degree from the Institute of Management of Technology at National Yang Ming Chiao Tung University in Taiwan. She has strong knowledge and experience in several automotive activities, such as IATF 16949 and VDA 6.3 for automotive quality processes, as well as ISO 26262, ISO 21434, and ASPICE. Elaine also holds PMP certification and several automotive certifications, including ASPICE Provisional Assessor, Safety Manager, Cybersecurity Manager, and IATF 16949 internal auditor.
Kevin Hsu, Applications Engineering Senior Manager, KIOXIA
Presentation Title:
How UFS Storage is Evolving for On-Device AI and Autonomous Vehicles
Presentation Abstract:
As AI becomes increasingly integrated into everyday applications, the need for efficient edge inferencing is more critical than ever. Today, most AI systems process inferencing data in the cloud, introducing latency when transmitting results back to connected devices—particularly in mobile and automotive environments. By localizing AI/ML models at the edge, devices can deliver faster, more detailed responses, enhancing real-time applications such as image recognition and voice translation. Additionally, on-device AI improves privacy and security by keeping sensitive personal and corporate data local, reducing exposure to potential cyber threats. To meet the growing demands of edge AI, Universal Flash Storage (UFS) must evolve. Future UFS solutions will require faster interfaces, newer features, and optimized power efficiency to support the next generation of intelligent devices.
Author Bio:
Kevin is a Senior Manager in the Managed Flash Applications Engineering group at KIOXIA America Inc. He has worked in the memory industry for over 25 years and held various roles in engineering, marketing, and sales. He is the technical support lead of the Managed Flash product line for all of North America and has worked with key customers in the mobile, networking and automotive space. He holds a BSEE degree from UCLA.
Presentation Session Description:
This session delves into the transformative landscape of the automotive industry, emphasizing the critical role of advanced memory and storage solutions in supporting next-generation applications such as autonomous driving, AI-powered systems, and high-endurance data management. Central to the discussions is the evolution of chipset and storage technologies that address the unique challenges of modern automotive environments, characterized by elevated operating temperatures and increasing computational demands. The integration of high-bandwidth non-volatile memory and NAND Flash-based storage is explored, highlighting their significance in managing vast amounts of real-time sensor data and ensuring system reliability and safety over the long lifespan of vehicles. Additionally, the session underscores the importance of edge AI and machine learning in automotive applications, advocating for the localization of AI models to enhance real-time processing, reduce latency, and protect data privacy. To accommodate the burgeoning demands of these intelligent systems, advancements in Universal Flash Storage (UFS) are deemed essential, necessitating faster interfaces and optimized power efficiency. Collectively, these themes illustrate the imperative for memory semiconductor companies to innovate and adapt, ensuring that automotive technologies continue to deliver robust and reliable performance in an increasingly complex digital age.
Open BMKT-101-1: Market Analyst Panel
GAMR-1 & 2 (Great America Meeting Rooms, SCCC 2nd Floor)
Track: Business Strategies & Memory Markets
Organizer + Chairperson:
Jean Bozman, President, Cloud Architects Advisors
Jean S. Bozman is President of Cloud Architects Advisors, a market research and consulting firm focused on hardware and software for enterprise and hybrid multi-cloud computing. She analyzes the markets for servers, storage, and software related to datacenters and cloud infrastructure. A highly-respected IT professional, she has spent many years covering the worldwide markets for operating environments, servers, and server workloads. She was a Research VP at IDC, where she focused on the worldwide markets for servers and server operating systems. She is a frequent conference participant as a speaker, chairperson, and organizer at such events as Flash Memory Summit, OpenStack, and Container World. She is often quoted in a variety of publications including BusinessWeek, Investor’s Business Daily, the Los Angeles Times, CNET, Bloomberg, and Reuters. Ms. Bozman has also been VP/Principal Analyst at Hurwitz and Associates and Sr Product Marketing Manager at Sandisk. She earned a master’s degree from Stanford.
Panel Members:
Jeff Janukowicz, Research Vice President, IDC
Jeff Janukowicz is a Research Vice President at IDC, where he provides insight and analysis on the SSD market for the client PC, enterprise data center, and cloud market segments. In this role, Jeff provides expert opinion, in-depth market research, and strategic analysis on the dynamics, trends, and opportunities facing the industry. His research includes market forecasts, market share reports, and technology trend analysis for clients, investors, suppliers, and manufacturers.
Russ Fellows, VP, Futurum Group
Russ brings over 25 years of diverse experience in the IT industry to his role at The Futurum Group. As a partner at Evaluator Group, he built the highly successful lab practice, including IOmark benchmarking. Prior to Evaluator Group he worked as a Technology Evangelist and Storage Marketing Manager at Sun Microsystems. He was previously a technologist at Solbourne Computers in their test department and later moved to Fujitsu Computer Products, where he started as an engineer and later transitioned into IT administration and management.
Jim Handy, General Director, Objective Analysis
Jim Handy of Objective Analysis is a 35-year semiconductor industry executive and a leading industry analyst. Following marketing and design positions at Intel, National Semiconductor, and Infineon he became highly respected as an analyst for his technical depth, accurate forecasts, industry presence, and numerous market reports, articles, white papers, and quotes. He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.
Avril Wu, SVP, TrendForce
TrendForce Research Vice President Avril Wu has over a decade of experience in various aspects of memory. Before TrendForce, Avril worked at an established memory company covering the same sector. Initially focused on the DRAM market, she extended her expertise to NAND Flash in 2019 and to the broader semiconductor sector in 2023, and she now covers the entire memory sector.
Simone Bertolazzi, Principal Analyst, Yole Group
Simone Bertolazzi, PhD, is a Principal Technology & Market analyst, Memory, at Yole Intelligence, part of Yole Group. As member of Yole’s memory team, he contributes on a day-to-day basis to the analysis of memory markets and technologies, their related materials, device architecture and fabrication processes. Simone obtained a PhD in physics in 2015 from École Polytechnique Fédérale de Lausanne (Switzerland) and a double M. A. Sc. degree from Polytechnique de Montréal (Canada) and Politecnico di Milano (Italy).
Panel Session Description:
The fast-moving markets for memory and storage cause rapid changes in the business environments surrounding them, making it difficult for customers and consumers to see the “patterns in the data.” Industry analysts around the world are tracking these market dynamics with data and reports that describe this era of rapid change. In this session, analysts from North America, Europe (EMEA), and Asia/Pacific will show the data and trends that are shaping the marketplace and affecting the pricing and availability of memory and storage products worldwide. The session will conclude with a brief question-and-answer (Q&A) period following the analyst presentations.
PRO COMP-101-1: Computational Storage Discussions and Explorations
Ballroom E (Santa Clara Convention Center, First Floor)
Track: Computational Storage
Organizer:
Scott Shadley, Director of Leadership Narrative and Evangelist, Solidigm
Scott Shadley has spent over 25 years in the semiconductor and storage space, with time in production, engineering, R&D, and customer-focused roles including marketing and strategy. His current focus is on driving adoption of new storage technology as Director of Leadership Narrative and Evangelist at Solidigm. He has been a key figure in promoting SNIA as a Board member and in leading its computational storage efforts as co-chair of the SNIA Technical Working Group. He participates in several industry efforts, including Open Compute and NVM Express, and is seen as a subject matter expert in SSD technology and semiconductor design. He has spoken, and continues to speak, on these subjects at events such as the Open Compute Summit, Flash Memory Summit, and SDC, as well as in press interviews, blogs, and webinars. While at NGD Systems, Scott developed and managed the Computational Storage products and ecosystem. Scott previously managed the Product Marketing team at Micron, was the Business Line Manager for the SATA SSD portfolio, and was the Principal Technologist for the SSD and emerging memory portfolio. He launched four successful innovative SSDs for Micron and two for STEC, all of which were billion-dollar programs. Scott earned a BSEE in Device Physics from Boise State University and an MBA in Marketing from the University of Phoenix.
Presenters:
Dhruv Garg, R&D Engineering, Staff Engineer, Synopsys
Presentation Title:
Offloading Host to NVMe Subsystems: Computational Storage
Presentation Abstract:
In today’s fast-moving world, accessing memory/storage and processing data at high speed is the need of the hour for technologies such as Artificial Intelligence, Machine Learning, and Cloud Computing. To cater to these needs, the market has adopted NVMe at a wider scale. Recent advancements in NVMe, such as the Subsystem Local Memory (SLM) and Computational Storage command sets, offload data processing from the host, enabling software to operate at even higher speeds and providing high-bandwidth, low-latency systems. These command sets enable the NVMe controller to process data locally while the NVMe host receives the result of the computation. From a functional verification perspective, the addition of these command sets has increased the complexity of systems that must be verified within a shorter time to market. These complexities are addressed by our VIP, which provides a complete solution for verifying Computational Storage designs along with the Subsystem Local Memory command set, with prominent features such as a vast built-in sequence library, an exhaustive assertion plan, performance analysis metrics, and various test scenarios in our Testsuite product.
Author Bio:
Dhruv Garg is a Staff Engineer in the Verification IP team at Synopsys, specializing in the development of NVMe and CXL protocols for the past 5 years. He received his Bachelor of Technology degree in Electronics and Communication from Jaypee Institute of Information Technology, Noida in 2018 and his Master of Technology degree in VLSI from the National Institute of Technology (NIT), Jalandhar in 2020.
Jungki Noh, Director, SK hynix
Presentation Title:
Vertical query optimization using Data-aware CSD for Data analytics
Presentation Abstract:
As data volumes grow, the cost of moving large datasets increasingly limits analytics performance. This talk introduces the Data-Aware CSD, a new type of NVMe storage device that performs computation in place to minimize data movement. Unlike block-based systems, Data-Aware CSDs store data as variable-size objects and natively handle formats like Parquet, JSON, and CSV. This inherent capability allows Data-Aware CSDs to integrate seamlessly into existing analytical pipelines, enabling interoperable, distributed data processing across diverse computing environments. Whether deployed under parallel file systems or object storage, they provide a path to vertically optimized, scalable analytics, allowing workloads to be flexibly distributed across storage devices, storage servers, and clients for optimal performance. This talk will present the high-level design of this new device and its role as a building block in a multi-layer computational storage architecture.
Author Bio:
Jungki Noh serves as the Team Leader for Solution SW at SK hynix, where he leads a team dedicated to the research and development of next-generation storage systems and innovative storage and memory device solutions.
Wei Lin, CTO, Phison Electronics
Presentation Title:
Computational Storage Drive for LLM
Presentation Abstract:
This presentation focuses on how users can achieve model fine-tuning with lower costs and improve inference efficiency with the same budget. Traditional model training requires sufficient HBM memory capacity to proceed. To reduce the dependency on HBM, we can partition the model and move the update process to the CPU for computation. While this method can reduce the demand for HBM memory capacity, it frequently requires writing model parameters to the storage device and reading them back into HBM or DRAM, which significantly reduces the lifespan of the storage device and is limited by the bandwidth of the PCIe interface. To address these issues, we have developed a Computational Storage Device called aiDAPTIVCache2.0, which can offload part of the computation to the CSD, thereby improving the model fine-tuning time by 40%. Additionally, Cache Memory can help edge devices improve performance during inference. By partitioning the model and allowing DRAM to store more KV Cache, and by swapping KV Cache to Cache Memory, the TTFT (Time to First Token) can be reduced by 14 times, and the token length can be increased by 8 times.
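A minimal sketch of the KV-cache tiering idea described above, with hypothetical names and an LRU policy chosen for illustration (a conceptual outline, not the aiDAPTIVCache 2.0 interface):

    # Conceptual KV-cache tiering between DRAM and a larger, slower cache tier
    # (e.g. a computational storage device). Capacities, names, and the LRU
    # eviction policy are illustrative assumptions.
    from collections import OrderedDict

    class TieredKVCache:
        def __init__(self, dram_slots: int):
            self.dram = OrderedDict()   # hot KV blocks, most recently used last
            self.cache_tier = {}        # colder KV blocks swapped out of DRAM
            self.dram_slots = dram_slots

        def put(self, block_id: str, kv_block: bytes) -> None:
            self.dram[block_id] = kv_block
            self.dram.move_to_end(block_id)
            while len(self.dram) > self.dram_slots:
                victim, data = self.dram.popitem(last=False)  # evict LRU block
                self.cache_tier[victim] = data                # swap out, don't drop

        def get(self, block_id: str) -> bytes:
            if block_id in self.dram:
                self.dram.move_to_end(block_id)
                return self.dram[block_id]
            data = self.cache_tier.pop(block_id)              # promote on reuse
            self.put(block_id, data)
            return data

    # Swapping cold KV blocks out instead of recomputing them is what allows
    # longer contexts and a lower time-to-first-token on memory-limited devices.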
Author Bio:
Wei Lin is the head of Phison's NAND team and AI team. He holds a Ph.D. from National Chiao Tung University.
Jason Molgaard, Principal Storage Solutions Architect, Solidigm
Presentation Title:
Securing Computational Storage for the AI Era
Presentation Abstract:
As the demands of AI burden the storage infrastructure, Computational Storage provides an offload solution to alleviate some of the burden. The benefits of Computational Storage as an offload have been discussed in other presentations. However, security is often ignored in Computational Storage but is paramount to the protection of user data that is subject to the in-drive computation. SNIA has documented security recommendations in the Computational Storage Architecture and Programming Model, but how do you take the recommendations and convert them to an actual implementation? This presentation by Solidigm will discuss the practical implementation of security in Computational Storage and the steps required to progress from the security recommendations to an actual secure implementation.
Author Bio:
Jason Molgaard is an experienced storage controller RTL designer and architect having worked for various storage device companies architecting and designing HDD and SSD storage controllers. As a Principal Storage Solutions Architect on the Solidigm Pathfinding and Advanced Development Team, Jason focuses on future storage controller architectures and technologies, including Computational Storage and CXL. Jason is co-chair of the SNIA Computational Storage TWG and the SNIA Technical Council. Jason helps drive the Computational Storage standard at both SNIA and NVMe. Jason holds a Master of Science degree in Electrical Engineering.
Viacheslav Dubeyko, Linux kernel developer, IBM
Presentation Title:
Can file systems survive in data-centric world?
Presentation Abstract:
The volume of data being processed is growing exponentially. AI/ML algorithms, financial transactions, social networks, and cloud computing represent modern trends that are latency-sensitive, performance-sensitive, and data-hungry. File systems are a crucial and fundamental technology that forms the foundation of the data storage stack. However, the pressure of the data-centric and data-intensive nature of modern applications has revealed significant overhead that file systems introduce into the data storage stack. Moreover, the massive array of hardware accelerators, kernel-bypass technologies, disaggregated architectures, and ultra-fast storage devices creates the “illusion” or “impression” that file systems could be a redundant layer of the data storage stack. Can file systems survive in a data-centric world?
Author Bio:
I was born in a small and pleasant Russian town in 1973. My first passion was physics, and I graduated in 1997 with a specialization in X-ray spectroscopy. I then earned a Ph.D. degree (X-ray spectroscopy) after finishing my postgraduate studies in 2002. But I always had a passion for programming and algorithm design, and I started my career as a C++ developer in 2004, spending around 6 years in production development. My research career started in 2010, and I have served as a researcher at several companies (Samsung Electronics, Huawei, HGST, Western Digital). I am involved in Linux kernel open-source activity and have contributed to the HFS+ and NILFS2 file system drivers. I also designed and implemented SSDFS, a flash-friendly open-source file system. My research interests include file systems and data storage design, neuromorphic computing, data-centric and memory-centric computing, cognitive computing, and quantum computing. I have several papers and around 50 patents.
Presentation Session Description:
This session explores the transformative impact of Computational Storage Devices (CSDs) and NVMe advancements on modern data processing and storage architectures, highlighting innovations that harmonize speed, efficiency, and security. Presentations underscore the role of NVMe's Subsystem Local Memory and Computational Storage command sets in offloading data processing to achieve high bandwidth and low latency, vital for AI/ML and Cloud Computing. The introduction of Data-Aware CSDs, capable of in-place computation and seamless integration with analytical pipelines, offers a scalable, distributed processing solution, enhancing data analytics performance. Meanwhile, aiDAPTIVCache2.0 showcases a significant leap in model fine-tuning efficiency by reducing dependency on traditional memory hierarchies, thereby prolonging storage device lifespan. Security, a paramount concern often overlooked, is addressed with practical steps for secure implementation in Computational Storage. Finally, the session delves into the evolving role of file systems amidst burgeoning data demands, questioning their future relevance in a landscape dominated by ultra-fast storage and disaggregated architectures. Together, these discussions illuminate a pathway towards more efficient, secure, and scalable data infrastructures.
PRO DCTR-101-1: Hyperscale Applications 1
Ballroom F (Santa Clara Convention Center, First Floor)
Track: Data Center Storage and Memory
Chairperson:
Steven Wells, Retired, Self
Steven Wells is a 40+ year veteran with most of that time focused on flash memory component and SSD design. He holds 65+ patents covering flash memory and security. He is currently retired and enjoying contributing time and energy to the industry he's spent his entire adult life developing.
Organizer:
Jonathan Hinkle, Senior Director - Azure Memory and Storage Pathfinding, Microsoft
Jonathan Hinkle is Senior Director - Azure Memory and Storage Pathfinding at Microsoft. He was previously in Micron's Storage Business Unit, where he investigated new technology and products, both internally and with customers and partners. Before that, he was Executive Director and Distinguished Researcher of System Architecture at Lenovo, where he led their research in data center computing architecture. Jonathan is an industry-leading technical expert in memory, storage devices, and data center systems architecture with over 24 years of experience. In the JEDEC standards organization, Jonathan serves on the Board of Directors as Vice-Chair of Marketing and chairs the CXL Memory Task Group, standardizing CXL-attached memory devices. He also invented and drove the first development of the EDSFF 1U Short (E1.S) NVMe drive, the VLP DIMM, and NVDIMM Persistent Memory. He has generated more than 34 granted or pending patents and earned BS and MS degrees in Computer Engineering from North Carolina State University.
Presenters:
Ratnesh Muchhal, Product Manager, Solidigm
Presentation Title:
EDSFF – how it enables & addresses emerging AI workloads/use cases
Presentation Abstract:
The AI storage landscape is shifting in both server-attached and network-attached architectures, facing density scaling challenges in the latter and performance scaling constraints in the former due to thermal and cooling limitations. EDSFF, particularly E1.S, addresses these challenges with high-density NVMe storage, improved thermal efficiency, and scalable performance, enabling better PCIe bandwidth utilization and lower latency for AI training and inference. This session will explore EDSFF for compute, highlighting air-cooled and liquid-cooled solutions, and discuss the optimal form factor for future density growth.
Author Bio:
"Ratnesh is a Senior Product Marketing Manager with over 20yrs of industry experience. He has over decade of Storage experience and maanges NVMe TLC Enterprise planning & product marketing. His primary interests lies in AI, customer workloads and how storage can meet today's & future AI & customer workload requirements. Leads through product design, execution, customer qualifications, and product end-of-life processes. Possesses deep technical expertise in cloud storage architectures, data center system design, and customer collaboration. Skilled at analyzing market trends, competitive intelligence, and understanding customer requirements for next-gen products."
Alok Ranjan, Software Engineering Manager, Dropbox Inc
Presentation Title:
Abstracting the Cloud: The Evolution of Dropbox’s Object Store
Presentation Abstract:
In today’s cloud-driven world, storage solutions must balance cost, performance, and security while adapting to evolving user needs. This talk explores Dropbox’s evolution from legacy systems such as Amazon S3 and HDFS to the development of Object Store—an innovative internal abstraction layer that intelligently routes data between backends like Magic Pocket and S3. By employing strategies such as batched writes, efficient object chunking, and robust layered encryption with crypto-shredding for instant deletions, Object Store tackles operational challenges and delivers substantial cost savings and improved performance. Moreover, the session will detail how Dropbox is evolving Object Store to support the growing demands of the AI era. With AI-powered features enabling natural language search, content summarization, and intelligent file preview analysis, traditional file storage is being transformed into a dynamic knowledge management system. Join us to explore the technical innovations behind this transformation and gain insights into the future of intelligent, AI-integrated cloud storage.
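A minimal sketch of the crypto-shredding pattern mentioned above, using a per-object data key so that deleting the key makes the stored ciphertext unrecoverable immediately. The Fernet recipe from the Python cryptography package stands in for a real envelope-encryption scheme; this is not Dropbox's actual implementation:

    # Crypto-shredding sketch: encrypt each object under its own data key and
    # implement deletion by destroying the key (in practice the key would be
    # wrapped by a KMS master key rather than held in a plain dict).
    from cryptography.fernet import Fernet

    key_store = {}  # object_id -> data key

    def put_object(object_id: str, plaintext: bytes) -> bytes:
        key = Fernet.generate_key()
        key_store[object_id] = key
        return Fernet(key).encrypt(plaintext)   # ciphertext goes to a storage backend

    def shred(object_id: str) -> None:
        del key_store[object_id]                # "instant deletion" of the object

    ciphertext = put_object("doc-1", b"meeting notes")
    shred("doc-1")
    # The ciphertext may still sit on a backend awaiting garbage collection,
    # but without its key it can no longer be decrypted.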
Author Bio:
I’m Alok Ranjan, an Engineering Manager at Dropbox overseeing the Storage Platform team in the infrastructure org. My team focuses on providing interfaces for file and block storage, along with encryption, compression, and verification of user data. With a master’s degree from Carnegie Mellon University, I began my career at Cisco Systems, working on networking technologies. I've led projects at VMware, Big Switch Networks, and Dropbox, focusing on improving network monitoring and system operations.
Vinit Dhatrak, Lead Engineer, Docusign
Presentation Title:
Revolutionizing Storage @ Docusign: From Petabytes to Intelligence
Presentation Abstract:
In an era driven by exponential data growth and the need for AI insights, cloud storage solutions must evolve to meet the dynamic demands of modern enterprises. This talk explores a strategic cloud adoption initiative focused on modernizing legacy blob storage at DocuSign. We'll go beyond simply "lifting and shifting" data to Azure, detailing how we built a cloud-native solution designed for scalability, security, and AI-powered insights. Discover how this modernized storage infrastructure empowered our Intelligent Agreement Management (IAM) platform, enabling new AI-driven capabilities. This talk offers practical strategies and lessons learned, emphasizing a holistic, cloud-native approach to unlock the full potential of your data in Azure. Attendees will gain insights into optimizing resource utilization, achieving cost efficiency, and ensuring scalability in cloud migrations, drawing from a successful case implementing an intelligence-enabled cloud ecosystem. We will explore the architecture and implementation challenges faced while leading the Blob Storage team at DocuSign. This initiative did not just facilitate a seamless transition but also pioneered a new category in cloud
Author Bio:
Vinit Dhatrak is a Lead Software Engineer at DocuSign (formerly at Google) with a passion for cloud, AI, and data. He is a seasoned software engineer with a demonstrated history of building on-premise and cloud-native distributed systems at scale, currently contributing to DocuSign's Storage team. With expertise encompassing cloud storage, distributed systems, and virtualization technologies such as Kubernetes, Docker, and the Linux kernel, Vinit stands out as a thought leader in the tech industry. Throughout his career, Vinit has held pivotal roles at notable companies like Google, Box, Commvault, and Marvell, where he played an instrumental role in developing highly scalable and distributed cloud storage solutions. His proficiency in object-oriented design and systems programming, coupled with his capability to scale infrastructures to handle concurrent requests and planet-scale storage, positions him as a true expert in his field.
Presentation Session Description:
As the landscape of AI and cloud storage rapidly evolves, organizations are compelled to innovate to meet increasing demands for data efficiency, scalability, and performance. This session delves into emerging solutions across various platforms, unified by a focus on optimizing resource utilization and harnessing AI capabilities. From the advanced thermal and density efficiencies offered by EDSFF in AI storage to Dropbox's transformative Object Store, emphasizing cost-effective, secure, and AI-integrated data management, the presentations highlight a common theme of integrating AI-driven features to enhance storage infrastructures. DocuSign's strategic shift to a cloud-native solution further exemplifies this trend, showcasing a comprehensive approach to modernizing storage systems while unlocking AI-powered insights. Attendees will gain a comprehensive understanding of how these innovations are reshaping the storage landscape, paving the way for intelligent, scalable, and secure data ecosystems tailored for the future.
PRO DSEC-101-1: Hardware Trust Building Blocks for Secure Systems
Ballroom C (Santa Clara Convention Center, First Floor)
Track: Data Security/Ransomware Protection
Organizer:
Rohan Puri, Staff Engineer, Samsung Semiconductor
"Rohan Puri serves as a Staff Engineer at Samsung Semiconductor Inc, bringing over 14 years of expertise in systems software development with a focus on file systems, storage technologies, and distributed systems. His technical leadership spans prestigious organizations including Veritas Technologies, Oracle, and various storage technology companies, where he has optimized file system performance, enhanced storage reliability, and designed advanced distributed storage solutions. Currently serving on the Conference Advisory Board for FMS'25 and as Co-industry Chair for MSST'24, he's also an active reviewer for ACM Transactions on Storage Journal and sits on Artifact Evaluation Committees for FAST'25 and OSDI'25. Rohan holds a Master's degree in Computer Science & Engineering from Pennsylvania State University and a Bachelor's in Information Technology from the University of Pune, India.
Presenters:
Po Chun Wang, VIP Designer, Siemens Digital Industries Software
Presentation Title:
CXL Security Stack Verification and its Challenges
Presentation Abstract:
CXL is a cache-coherent, low-latency, high-bandwidth interconnect. It is becoming challenging to secure confidential and sensitive data from physical attacks on the link, such as interposers that may snoop, modify, inject, or replay data. Data security can be achieved by implementing the following protocol stack:
1. Security Protocol and Data Model (SPDM): device attestation
2. IDE Key Management: key exchange via CXL.io
3. Integrity and Data Encryption (IDE): confidentiality, integrity, and replay protection at the FLIT (Flow Control Unit) level
4. TSP: confidential computing and memory encryption
We will review the importance of each layer in the security stack and the verification strategies required for all types of architecture: CXL.io, CXL.mem, and CXL.cache. We will also look at the significant challenges faced while verifying this feature and the various corner-case scenarios that are mandatory for verification closure to catch DUT bugs. Finally, we will cover how the “access control” feature can prevent a non-trusted device from accessing unauthorized data, and how memory encryption can prevent one host from reading or writing resources belonging to another host.
Author Bio:
Experienced VIP Designer with three years of expertise, specializing in CXL TL/DL layer and CXL Cache/Memory Protocol Interface implementation.
Jaime Coreano, Vice President of Sales , X-PHY Inc.
Presentation Title:
Building a Community Root of Trust with Hardware-Driven Memory Security
Presentation Abstract:
The February 2025 breach of a major cybersecurity company, where hackers chained OS flaws to compromise thousands of systems, laid bare software security’s fragility. This wasn’t a fluke but a stark warning. Drawing from this wake-up call, this presentation makes the case for a new era of trust built on a hardware-anchored, community-driven model, with memory storage as the bedrock. Imagine built-in security mechanisms at the hardware level ensuring system integrity from boot to runtime, paired with AI-embedded memory like next-gen SSDs that actively guard data, detecting threats like ransomware in real time. Backed by a Community Root of Trust, a collaborative ecosystem of vendors, researchers, and users shrinks exploit windows and closes the gaps where solo patches fail. Aligned with FMS25’s innovation focus, this presentation will map a future where memory doesn’t just store, it secures. From hardware-enforced security measures to AI-storage standards, it’s a call to bake resilience into silicon.
Author Bio:
Mr. Jaime Coreano is Flexxon’s Vice-President of Sales and has represented the company since 2015. He plays an instrumental role in establishing Flexxon’s presence in the US by leveraging strategic opportunities, growing networks, expanding the company’s customer base, and consequently building positive long-term customer relations. He possesses a wealth of experience in tech and IT sales, having been in the industry for over three decades. He has worked closely with a wide range of customers, from industrial technology to gaming and military applications, and plays an essential role in bridging the unique needs of each customer with Flexxon’s suite of cybersecurity and storage solutions. As a strong advocate for Flexxon’s flagship cybersecurity solution, the X-PHY® Cyber Secure SSD, the world's first AI-embedded, firmware-based cybersecurity solution, Mr. Coreano connects a range of businesses and organisations across the country with this holistic and highly autonomous cybersecurity solution for greater all-round protection and peace of mind. Mr. Coreano holds a Bachelor of Science in Electrical Engineering from the Florida Institute of Technology.
Pravallika Anchuri, Senior Applications Engineer, Synopsys
Presentation Title:
Overcoming Verification Hurdles in IDE and TDISP Systems
Presentation Abstract:
This paper discusses the importance of IDE (Integrity and Data Encryption) and TDISP in providing security and integrity to Transaction Layer Packets (TLPs) in PCIe/CXL links. It addresses the threats posed by physical attacks on the links, such as examining confidential data, modifying TLP contents, and reordering or deleting TLPs. The paper outlines various IDE verification scenarios, including aggregation, TLP ordering, K-bit toggling, and selective IDE. For TDISP, the primary requirements involve establishing a trust relationship between a TVM and the device, securing the PCIe/CXL data path to prevent traffic interception or masquerading, and protecting confidential TDI data from host driver controls. Verification scenarios include accessing TEE_MEM and NON_TEE_MEM in different TDI states, validating TDISP TLP rules for DMA and interrupts, handling multiple VFs with random traffic, and testing TDI behavior under various conditions including resets, FLRs, hot plugs, and error injections. By addressing these challenges and scenarios, the paper aims to provide a comprehensive understanding of the complexities involved in verifying both IDE and TDISP systems.
Author Bio:
Pravallika Anchuri is a Senior Applications Engineer at Synopsys, specializing in hardware verification and emulation workflows. She has hands-on experience with PCIe bring-ups, utilizing virtual host solutions and speed adapters to validate and optimize system performance. Her background includes work on PCIe and CXL link validation, focusing on LTSSM state analysis and Gen6 configuration. Pravallika holds a Master’s degree in Electrical Engineering and brings expertise in RTL design, SystemVerilog verification, and a wide range of EDA tools. She is known for her problem-solving abilities and her contributions to high-performance hardware development environments.
Sakul Gupta, Sr. Principal Security Firmware Member Of Technical Staff, Micron Technology Inc
Presentation Title:
State of the Art in Memory Security, Secure Boot, and Device Attestation
Presentation Abstract:
Secure Boot and Measured Boot have quickly become mandatory security features for CXL DRAM controllers and SSD memory controllers. Device attestation using TCG DICE and DMTF's SPDM 1.4 is expected to become the de facto industry standard that device manufacturers must support for their devices to be accepted into data center memory pools. PLDM Type 5 over MCTP over SMBus has gained prominence for firmware updates by CSPs. Streaming boot, in lieu of persistent memory-resident signed firmware images, is gaining prominence; delivered over I3C and PLDM Type 5, it provides more resilience against corruption of critical data at rest. Post-quantum algorithms such as ML-DSA, LMS, and Kyber have been approved by NIST and CNSA, and we are working in DMTF's SPDM working group to add PQC support in SPDM 1.4. The OCP Security Working Group and Chips Alliance have led efforts to roll out Caliptra, a root of trust delivered as RTL and ROM, along with firmware update support using PLDM Type 5 and device attestation over MCTP SPDM 1.2. These implementations, written in the memory-safe Rust language, have been made available for the industry to vet and use, rather than each company reinventing the wheel with a proprietary implementation.
Author Bio:
Sakul Gupta is a Sr. Principal Security Firmware Member of Technical Staff at Micron Technology Inc, leading Secure Enclave firmware development on CXL DDR memory controllers and contributing to SSD security. He provides thought leadership from Micron in forums such as the OCP Security Working Group, DMTF's SPDM and RAS working groups, and the Chips Alliance Caliptra working group. He has worked in the industry for 22+ years on CXL security, Safety Integrity Level 4 products, and touch and biometric sensors for Apple and Samsung, at companies including Micron, Honeywell, Synaptics, and Apple.
Presentation Session Description:
In an era marked by escalating cybersecurity threats, our session delves into the transformative strategies reshaping data security across memory and interconnect technologies. The presentations collectively spotlight a paradigm shift towards hardware-anchored and community-driven security models, emphasizing the integration of AI and post-quantum cryptography to fortify data integrity and confidentiality. Central to this evolution is the implementation of Security Protocol and Data Model (SPDM) standards, alongside Secure Boot and Measured Boot mechanisms, which are increasingly becoming mandatory across memory and controller technologies. The sessions explore advanced encryption protocols such as Integrity and Data Encryption (IDE) and Transaction Layer Packet (TLP) security measures, highlighting their roles in countering physical attacks and ensuring data resilience against corruption and unauthorized access. Through collaborative efforts like the Community Root of Trust and industry-wide standardization initiatives, these presentations advocate for a future where security is embedded at the silicon level, paving the way for a fortified digital infrastructure ready to meet tomorrow’s challenges.
Open INDA-101-1: Storage Networking Innovations with Fibre Channel
Ballroom D (Santa Clara Convention Center, First Floor)
Track: Industry Associations
Presenters:
Rupin Mohan, Sr. Director, OEM Storage, HPE
Rupin Mohan is a Senior Director in Hybrid Cloud & Office of the CTO at HPE and leads HPE OEM Storage. Rupin has 30+ years’ experience in developing storage and networking products, has been granted 21 patents, and has filed 25+ patents at HPE Storage. He also serves as a Board Member of the Fibre Channel Industry Association. Rupin completed his MBA at the MIT Sloan School of Management as a Sloan Fellow. He also holds an MS in Engineering from Tufts University and a BE in Computer Engineering from the Delhi Institute of Technology.
Brent Mosbrook, Sr. Director, Product and Program Management, Broadcom
Brent Mosbrook is a Sr. Director, Product and Program Management at Broadcom.
Presentation Session Description:
The rush to bring the benefits of AI into the corporate datacenter is an excellent time to make sure your most valued data is stored on the most secure and performant network available. Fibre Channel storage networks are widely considered the most secure, extensible, and performant, but AI coming into the datacenter is going to require higher limits for all of these attributes. This presentation will discuss the latest trends in the Fibre Channel industry, including the roadmap to greater network speeds and improved manageability. We will also take a deep dive into the ever-growing challenges of data network security and how new innovations such as Fibre Channel Autonomous In-Line Encryption will provide automatic quantum-resistant protection for your storage network.
Fibre Channel Industry Assoc. (FCIA)
PRO TEST-101-1: Performance
Ballroom G (Santa Clara Convention Center, First Floor)
Track: Testing and Performance
Organizer:
Marilyn Kushnick, Track Organizer, FMS
Marilyn Kushnick is an Engineer and Track Organizer of the Testing Track at FMS.
Presenters:
Bernard Shung, Founder, Wolley
Presentation Title:
System Test Results with NVMe-over-CXL (NVMe-oC)
Presentation Abstract:
NVMe-over-CXL (NVMe-oC) is a new technique that offers a virtualized memory mode on top of NVMe storage mode operation. The virtualized memory mode has a cost advantage, as the memory capacity is backed by both DRAM and NAND. In this study, we evaluate how NVMe-oC compares with a CXL memory module (CMM) in performance using various benchmark applications of interest. NVMe-oC memory and storage modes can run simultaneously, allowing SSDs to function as both active memory and persistent storage. This study compares different hardware configurations, including NVMe-oC only, CXL memory module only, DRAM only, and a tiered approach that combines DRAM, HDM, and SSD, providing unique insights into system performance and cost-effectiveness for different applications.
Author Bio:
Dr. C. Bernard Shung is the Founder and President of Wolley Inc., leading innovation in controller architecture for CXL and emerging Storage Class Memory (SCM). Previously, he served as General Manager, New Business Development at MediaTek, advising the CEO on enterprise technology. He was SVP of Engineering at Link-A-Media Devices (LAMD), later acquired by SK hynix, where he led SSD storage solutions. Before that, he co-founded SiBEAM, Inc., pioneering 60GHz wireless technology, and held leadership roles at Broadcom and IBM Research in networking and storage technologies. Dr. Shung was also a Professor and Chairman at National Chiao Tung University (NCTU), earning an Outstanding Teaching Award. He has published 60+ technical papers, holds 10+ US/Taiwan patents, and was Chairman & President of CIE-USA. He holds a BSEE from National Taiwan University and MS/Ph.D. in EECS from UC Berkeley.
Alex Lemberg, Senior Technologist, Systems Design Engineering, Architecture and Platform, Sandisk Flash Products Group
Presentation Title:
HID – Host Initiated Defrag
Presentation Abstract:
When a device becomes fragmented, both write performance and quality of service decline. In mobile use cases, there are periods of low user activity (typically at night) that can be leveraged to defragment the data, improving performance during active hours. In this presentation, we will review the host-initiated defragmentation method. This approach will ensure optimal performance for the host while managing the impact of write amplification. The host can schedule the operation to minimize any disruption to the user experience. The presentation will demonstrate the benefits of the proposed method through benchmark results and real-life use cases.
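A minimal sketch of how a host might gate this maintenance work on a low-activity window; the thresholds are assumptions and issue_defrag() is a hypothetical stand-in for the device-specific defragmentation command:

    # Host-initiated defrag scheduling sketch: act only when fragmentation is
    # high and the device is idle (e.g. at night, charging, screen off), and
    # bound the work so it never spills into active hours.
    import datetime

    def in_idle_window(now: datetime.datetime, on_charger: bool, screen_off: bool) -> bool:
        return 2 <= now.hour < 5 and on_charger and screen_off

    def maybe_defrag(now, on_charger, screen_off, fragmentation_pct, issue_defrag):
        THRESHOLD_PCT = 30      # assumed fragmentation level worth acting on
        BUDGET_SECONDS = 600    # cap per night to limit write amplification
        if fragmentation_pct >= THRESHOLD_PCT and in_idle_window(now, on_charger, screen_off):
            issue_defrag(budget_seconds=BUDGET_SECONDS)

    maybe_defrag(datetime.datetime(2025, 8, 5, 3, 0), True, True, 42,
                 lambda budget_seconds: print(f"defrag for up to {budget_seconds}s"))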
Author Bio:
Alex Lemberg has been with Sandisk for 23 years and is part of the iNAND Technical Ecosystem team, responsible for technical customer communication, new product architecture requirements, and new feature definition. Prior to this role, he held various positions, including leading Linux Host R&D activities, with expertise in the Linux kernel storage stack and driver development. Co-Author: Hadas Oshinsky
Kyle McRobert, Hardware Engineering Manager, Quarch Technology
Presentation Title:
Testing the latest PCIe 5.0 Power Excursion and PCIe 6.0 L0p Power States
Presentation Abstract:
This talk covers a broad range of new power challenges, from new high-power applications to power-saving techniques, by focusing on two power specifications recently introduced to PCIe: the ATX 3.0 / PCIe 5.0 Power Excursion specification and the PCIe 6.0 L0p low-power states. PCIe 5.0 GPUs can draw up to 1800W of peak power; we give a brief introduction to the Power Excursion specification, the common challenges it raises, and a case study on testing, identifying, and reproducing potential issues. L0p is one of the newest power-saving options added to PCIe 6.0, and power performance is becoming increasingly important as the industry focuses on reducing power and improving thermal efficiency. The talk provides insight into the L0p specification, a case study of how much power it could potentially save, and how to measure and test this.
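Illustrative note (not from the presentation): a minimal sketch of how over-limit power excursions can be flagged in a sampled power trace; the limit, window, and sample period below are placeholders, not the ATX 3.0 / PCIe values.

    # Flag excursions: runs where sampled power stays above the sustained limit
    # for longer than the allowed excursion duration. Numbers are placeholders.

    SUSTAINED_LIMIT_W = 600.0
    MAX_EXCURSION_MS = 100.0
    SAMPLE_PERIOD_MS = 10.0

    def find_violations(samples_w):
        """Return (start_index, length_ms) for every over-limit run that exceeds
        the allowed excursion duration."""
        violations, run_start = [], None
        for i, p in enumerate(samples_w + [0.0]):        # sentinel closes a trailing run
            if p > SUSTAINED_LIMIT_W and run_start is None:
                run_start = i
            elif p <= SUSTAINED_LIMIT_W and run_start is not None:
                length_ms = (i - run_start) * SAMPLE_PERIOD_MS
                if length_ms > MAX_EXCURSION_MS:
                    violations.append((run_start, length_ms))
                run_start = None
        return violations

    trace = [550] * 5 + [900] * 20 + [550] * 5           # synthetic 300 ms power trace
    print(find_violations([float(w) for w in trace]))    # -> [(5, 200.0)]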
Author Bio:
Shortly after graduating from Heriot-Watt University in Edinburgh with a BEng (Hons) in Electrical and Electronic Engineering, I joined Quarch in 2018 as a Hardware Engineer and am now the Hardware Engineering Manager. Since 2006, Quarch Technology has been building automated test solutions for the data storage, networking, and telecoms industries and beyond. I enjoy working closely with our customers to help create custom setups and to get the most out of our products; that began in my first days at Quarch and carries on today, as I find new ways to use our products, recreate customer setups, and build on them to improve them in any way I can. I now manage the hardware engineering team at Quarch, with a key challenge being the move to Gen6.
Presentation Session Description:
This session delves into the pivotal advancements and challenges in modern computing architectures, focusing on memory integration, power efficiency, and system optimization. With NVMe-over-CXL (NVMe-oC) and High Bandwidth Memory (HBM) at the forefront, we explore how these technologies are revolutionizing data processing and storage by offering innovative solutions for managing memory capacity and improving AI workload efficiency. The integration of HBM with cutting-edge chip architectures, such as 2.5D and 3D packaging, enhances throughput and reduces latency, crucial for the evolving demands of AI applications. Complementarily, the session examines power management strategies, including the new PCIe power specifications and host-initiated defragmentation methods, highlighting their role in optimizing power performance and thermal management. These presentations collectively underscore the importance of balancing Power, Performance, and Area (PPA) trade-offs to achieve cost-effective, high-performance systems tailored to modern computational needs.
09:35 AM to 09:45 AM
Open BRK: Tuesday AM Refreshment Break
Main Lobby/Great America Lobby (SCCC, First Floor/Great America Meeting Rooms, Second Floor)
Track: General Events
General Event Description:
Description Not Available
09:45 AM to 10:50 AM
PRO AIML-102-1: Storage for AI: Applications
Ballroom A (Santa Clara Convention Center, First Floor)
Track: AI and ML Applications
Organizer + Chairperson:
David McIntyre, Director Product Planning, Samsung Electronics
David McIntyre focuses on computational storage acceleration solutions development and strategic business enablement for cloud to edge applications including AI inference/video analytics, database processing and blockchain networks. He has held senior management positions with IBM, Samsung, Xilinx, Intel (formerly Altera) and at Silicon Valley startups. He has consulted for institutional investors including Fidelity, Goldman Sachs and UBS. David is a frequent presenter and chairperson at the Flash Memory Summit and other technical conferences including SNIA.
Presenters:
Gary Smerdon, CEO and Founder, MEXT
Presentation Title:
Elevating Flash to Memory Tier: AI-Driven Predictive Memory Management
Presentation Abstract:
Redefining data center efficiency, our approach elevates low-cost Flash from storage to a primary memory tier, significantly reducing reliance on expensive DRAM. An AI engine, positioned within the memory subsystem below the application layer, dynamically manages this transition. It offloads "cold" memory pages from DRAM to Flash and uses predictive algorithms to preload them as needed, seamlessly maintaining performance. This strategy not only preserves high-speed operations with less DRAM but also cuts computing costs by up to 40%. Our session explores how integrating Flash as a memory tier with AI-driven management doubles or quadruples effective memory capacity, transforming cost-efficiency in modern data centers, with use cases spanning database, analytics and more.
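Illustrative note (not MEXT's implementation): a toy sketch of the tiering loop described above, with a simple recency threshold and a placeholder preload hook standing in for the AI engine's predictions.

    import time
    from collections import OrderedDict

    # Toy tiering loop: demote pages not touched for COLD_AFTER_S seconds to a
    # flash-backed tier, and promote pages a (placeholder) predictor expects to
    # be touched soon. The threshold and predictor are illustrative only.

    COLD_AFTER_S = 30.0

    class Tiering:
        def __init__(self):
            self.dram = OrderedDict()   # page_id -> last-access timestamp
            self.flash = set()          # page_ids resident on flash

        def touch(self, page_id):
            if page_id in self.flash:           # fault: bring the page back to DRAM
                self.flash.discard(page_id)
            self.dram[page_id] = time.monotonic()
            self.dram.move_to_end(page_id)

        def demote_cold(self):
            now = time.monotonic()
            for page_id, last in list(self.dram.items()):
                if now - last > COLD_AFTER_S:
                    del self.dram[page_id]
                    self.flash.add(page_id)

        def preload(self, predicted_pages):
            """predicted_pages would come from the learned access-pattern model."""
            for page_id in predicted_pages:
                if page_id in self.flash:
                    self.touch(page_id)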
Author Bio:
Gary Smerdon is the CEO and founder of MEXT. Before MEXT, Gary served as CEO at TidalScale and held numerous leadership positions including at Fusion-io, LSI, and AMD. He is a recognized innovator having led key efforts around industry transitions in networking, storage, and computing.
Assaf Sella, VP of Machine Learning R&D, KIOXIA Israel, Ltd
Presentation Title:
All-in-storage ANNS algorithms optimize performance within a RAG system.
Presentation Abstract:
KIOXIA’s all-in-storage ANNS technology (AiSAQ™) is an Approximate Nearest Neighbor Search (ANNS) technology that accesses and stores index data on SSDs, significantly reducing DRAM usage in the RAG system. Optimal utilization of SSDs can greatly enhance the scalability of the vector DB and the accuracy of inference generation. This scalability meets the demands of a growing multi-tenancy environment and enables efficient management of such an environment. In multi-tenancy applications, a single deployment serves multiple independent users (tenants), where each tenant leverages RAG to enrich the LLM with information from its private dataset. Multi-tenancy deployments can reach very large scale in aggregate as the number of tenants and the size of tenants’ datasets increase. When the ANNS algorithms at the core of a RAG system are based on DRAM, such scaling results in material cost and user-experience challenges. AiSAQ, with its near-zero-DRAM architecture, achieves high density of tenants per server with all tenants always active. Our presentation will delve into all-in-storage solutions and compare them with DRAM-based alternatives.
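Illustrative note (not the AiSAQ algorithm): a generic sketch of the disk-resident graph-search pattern, where vectors and neighbor lists are fetched through a storage-backed reader per hop instead of being held in DRAM.

    import heapq
    import numpy as np

    class SsdIndexReader:
        """Stand-in for SSD reads; a real index would read fixed-size node blocks."""
        def __init__(self, vectors, neighbors):
            self._vectors, self._neighbors = vectors, neighbors
        def node(self, i):
            return self._vectors[i], self._neighbors[i]   # one simulated SSD read

    def greedy_search(reader, entry, query, beam=4, max_hops=64):
        dist = lambda v: float(np.linalg.norm(v - query))
        v0, _ = reader.node(entry)
        frontier = [(dist(v0), entry)]
        visited, best = {entry}, []
        for _ in range(max_hops):
            if not frontier:
                break
            d, node = heapq.heappop(frontier)
            heapq.heappush(best, (-d, node))       # keep the `beam` closest so far
            if len(best) > beam:
                heapq.heappop(best)
            _, nbrs = reader.node(node)
            for n in nbrs:
                if n not in visited:
                    visited.add(n)
                    vn, _ = reader.node(n)
                    heapq.heappush(frontier, (dist(vn), n))
        return sorted((-d, n) for d, n in best)

    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(8, 4))
    adj = {i: [(i + 1) % 8, (i + 3) % 8] for i in range(8)}   # toy graph
    print(greedy_search(SsdIndexReader(vecs, adj), entry=0, query=vecs[5]))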
Author Bio:
Assaf serves as Vice President of Machine Learning R&D at KIOXIA Israel development center, where he leads research in generative AI, and deep neural networks to improve Flash physical-layer reliability. Prior to KIOXIA, Assaf was CTO of Texas Instruments Israel, and held leadership roles in other Israeli high-tech corporations and startups. Assaf holds an Executive MBA from Kellogg School of Management at Northwestern University, and M.Sc and B.Sc in Electrical Engineering from Tel-Aviv University and the Technion, both summa cum laude.
Chanson Lin, CEO, EmBestor Technology Inc.
Presentation Title:
Enhancing the Security Scheme of Data Storage for Edge AI/ML Systems
Presentation Abstract:
Data security for AI/ML systems is crucial: it is what ensures the systems operate in a safe and reliable condition. This is especially true for edge server systems, which require remote system monitoring, data control, management, maintenance, and upgrades because they may be located in remote areas, corner sites, or harsh environments. To keep edge AI/ML systems well protected, an enhanced data security scheme becomes a very important factor. In this presentation, we propose an enhanced data security scheme for the data storage and memory of edge AI/ML systems: integrating a multi-dimensional data security mechanism into the system, including remote system monitoring and management, multi-way identification and authentication, multi-dimensional data encryption, and remote data management, backup, maintenance, and update. This can raise the security level of edge AI/ML systems.
Author Bio:
Dr. Chanson Lin is the Founder / Chairman & CEO of EmBestor Technology, a company specializing in industrial, niche application, and embedded storage applications. The company focuses on memory storage controller design and flash memory-based storage architectures. He has over 20 years’ experience designing NAND flash memory controllers and invented over 100 patents in the area. Before founding EmBestor, he was General Manager of the NAND flash memory controller business unit of ITE Technology, General Manager of USBest, and President / co-founder of RiCHIP. He has published several articles on embedded systems, industrial applications and has given many conference presentations, including several at previous Flash Memory Summits. He earned a PhD in electrical engineering from the National Chiao Tung University (Taiwan) and an MSEE from the National Taiwan University.
Eric Herzog, CMO, Infinidat
Presentation Title:
The Role of Storage in AI, Applications and Workloads
Presentation Abstract:
As AI models evolve, the need for more accurate, real-time insights becomes crucial. And managing large volumes of data for machine learning and predictive models becomes increasingly critical. This session will focus on how organizations can optimize their enterprise storage infrastructure to meet the demands of AI workloads. Attendees will learn: ● How integrating RAG into AI workflows can significantly reduce "hallucinations"—factually inaccurate outputs—by continuously refining data queries ● Reducing latency, increasing throughput, and ensuring reliable access to data. ● Integration of high-performance storage with AI processes to improve data handling efficiency and drive more accurate, real-time insights, while also reducing operational costs.
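Illustrative note (not from the presentation): a minimal sketch of the RAG retrieval step, retrieving the closest stored passages and grounding the prompt in them; the embedding and downstream LLM calls are placeholders.

    import numpy as np

    # Minimal RAG flow: embed the question, retrieve the closest passages from
    # the store, and ground the prompt in them. embed() is a toy placeholder for
    # whatever embedding model the deployment uses.

    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))   # toy embedding
        return rng.normal(size=16)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(question, passages, k=3):
        q = embed(question)
        scored = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
        return scored[:k]

    def build_prompt(question, passages):
        context = "\n".join(f"- {p}" for p in retrieve(question, passages))
        return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {question}"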
Author Bio:
Eric Herzog is the Chief Marketing Officer at Infinidat. Prior to joining Infinidat, Herzog was CMO and VP of Global Storage Channels at IBM Storage Solutions. His executive leadership experience also includes: CMO and Senior VP of Alliances for all-flash storage provider Violin Memory, and Senior Vice President of Product Management and Product Marketing for EMC’s Enterprise & Mid-range Systems Division.
Presentation Session Description:
This session brings together cutting-edge advancements in storage technology and data management for AI/ML systems, with a focus on optimizing performance, scalability, and security. Key themes include the use of KIOXIA’s AiSAQ™ technology, which revolutionizes Approximate Nearest Neighbor Search (ANNS) by leveraging SSDs over DRAM, thereby enhancing the scalability and efficiency of Vector DBs in multi-tenancy environments. This innovation is critical for managing large-scale deployments where multiple tenants utilize the same infrastructure. Complementing this, the session explores enhanced data security protocols vital for Edge AI/ML systems, emphasizing remote monitoring, multifaceted data encryption, and management to safeguard operations in challenging environments. Additionally, the integration of RAG into AI workflows is highlighted as a means to reduce latency and improve data accuracy, enabling real-time insights while minimizing costs. Finally, the session delves into benchmarking methodologies that simplify the validation of enterprise SSDs, thereby streamlining the testing process without the complexity of traditional AI frameworks. Together, these presentations underscore the importance of integrating advanced storage solutions and robust security measures to meet the evolving demands of AI workloads.
PRO AUTO-102-1: Vehicle to Everything
Ballroom B (Santa Clara Convention Center, First Floor)
Track: Automotive Applications
Organizer + Chairperson:
Bill Gervasi, Principal Memory Solutions Architect, Monolithic Power Systems
Mr. Gervasi has nearly five decades of experience in high-speed memory subsystem definition, design, and product development. He has piloted the definition of Double Data Rate SDRAM from its earliest inception, authoring the first standard specification, and created the Automotive SSD standard. With MPS, Bill is driving some of the memory and storage system management mechanisms for a post-quantum world. He received the JEDEC Technical Excellence Award, the organization's highest honor, in 2020.
Presenters:
Young Eui Cho, eMMC/UFS Quality/Reliability Engineer, SK hynix
Presentation Title:
Future Automotive's Impact on Nonvolatile Memory Solutions
Presentation Abstract:
The impact of future automotive applications, such as autonomous driving and robo-taxis, on nonvolatile memory solutions was analyzed focusing on cycling and temperature factors. Additionally, the characteristics of future automotive environments were compared to those of mobile and traditional automotive environments, and the necessary efforts that memory semiconductor companies should undertake to meet these characteristics were considered.
Author Bio:
In charge of certifying SK hynix's NAND solution (eMMC/UFS) family for automobiles.
Veera Venkata Sri Harsha Badam, Senior Manager, Samsung semiconductor India research
Presentation Title:
Analysis of High Endurance Storage for Write Intensive Automotive Applications
Presentation Abstract:
This paper explores the application and features of high endurance memory systems in the automotive industry, specifically focusing on the necessity of NAND Flash-based storage solutions for managing large volumes of real-time sensor data, continuous crash recordings, and archiving historical data. The increasing importance of data for reliability and safety in the automotive field highlights the significance of these storage solutions. Given the long average age of automotive vehicles, high endurance storage is crucial. The paper delves into the challenges of achieving high endurance, such as the impact of high write cycles on flash memory, and potential solutions to mitigate these challenges. Furthermore, the paper analyzes various payloads used on high endurance single-level cell (HE SLC) storage to identify patterns and trends that affect the overall lifespan of the underlying storage. By optimizing storage management practices and ensuring maximum efficiency from HE SLC storage devices, host drivers can be developed leveraging evidence-based approaches to significantly improve the performance and reliability of high endurance storage solutions.
Author Bio:
The presenter is currently working as a Senior Manager at Samsung Semiconductor India Research (SSIR) in the flash memory domain. His current role involves product qualification of UFS devices across mobile and automotive applications, spanning test development and spec validation. His prior experience includes firmware development for USB-based portable SSDs at Western Digital, along with 11 years developing firmware for safety-critical aviation cockpit display applications and controllers at Honeywell Technology Solutions. A major achievement was the Outstanding Engineer Award (2022) for introducing a new product line of touchscreen controllers for cockpit display applications, replacing legacy controllers. He holds a master's in computer science engineering from VIT Vellore.
Cliff Zitlaw, Distinguished Engineer - System Architecture, Infineon Technologies
Presentation Title:
High-Bandwidth NVM on the LPDDR Interface
Presentation Abstract:
The rapidly evolving automotive ecosystem presents a difficult set of requirements for next-generation chipset developers. The historically elevated operating temperatures, significantly higher computing requirements, and the adoption of advanced process nodes all need to be balanced when defining a next-generation chipset. This session discusses the rapidly evolving automotive ecosystem and activities at JEDEC to standardize a high-bandwidth non-volatile memory residing on the legacy LPDRAM bus that addresses this challenging application environment.
Author Bio:
Cliff Zitlaw has been involved in the development of semiconductor memories for 43 years. Cliff’s primary focus has been on bus interfaces that optimize memory performance within different application constraints. Cliff was the inventor of Xicor’s Microprocessor Serial interface (EEPROM), Micron’s CellularRAM interface (PSRAM) and Infineon’s Hyperbus interface (NOR and PSRAM). Cliff is the author or coauthor of over 50 patents related to memory functionality and usage. Cliff is currently the Chair of the Serial Flash and also the LPDDRX-NVM Task Groups at JEDEC.
Presentation Session Description:
This session delves into the transformative landscape of automotive technology, emphasizing the critical role of advanced memory solutions in supporting next-generation applications such as autonomous driving and robo-taxis. Presentations converge on the necessity for high endurance, nonvolatile memory systems capable of withstanding the rigorous demands of real-time data processing and storage in automotive environments. Key themes include the adaptation of NAND Flash-based storage for enhanced reliability and safety, the challenges of high write cycles on flash memory, and the strategic development of chipsets that meet the elevated computing and temperature requirements. Additionally, the efforts towards standardizing high-bandwidth memory across legacy systems reflect an industry-wide push for efficiency and longevity in automotive applications. These insights collectively underscore the need for innovative approaches by memory semiconductor companies to align with the evolving automotive ecosystem.
Open BMKT-102-1: AI + Storage: AI and the Enterprise Business
GAMR-1 & 2 (Great America Meeting Rooms, SCCC 2nd Floor)
Track: Business Strategies & Memory Markets
Organizer:
Jean Bozman, President, Cloud Architects Advisors
Jean S. Bozman is President of Cloud Architects Advisors, a market research and consulting firm focused on hardware and software for enterprise and hybrid multi-cloud computing. She analyzes the markets for servers, storage, and software related to datacenters and cloud infrastructure. A highly-respected IT professional, she has spent many years covering the worldwide markets for operating environments, servers, and server workloads. She was a Research VP at IDC, where she focused on the worldwide markets for servers and server operating systems. She is a frequent conference participant as a speaker, chairperson, and organizer at such events as Flash Memory Summit, OpenStack, and Container World. She is often quoted in a variety of publications including BusinessWeek, Investor’s Business Daily, the Los Angeles Times, CNET, Bloomberg, and Reuters. Ms. Bozman has also been VP/Principal Analyst at Hurwitz and Associates and Sr Product Marketing Manager at Sandisk. She earned a master’s degree from Stanford.
Presenters:
Wendell Wenjen, Director, Storage Market Development, Supermicro
Presentation Title:
Storage Architectures for Enterprise AI
Presentation Abstract:
Enterprises are implementing agentic and interactive-based AI applications, taking advantage of significant advances in generative AI over the past two years. The development of enterprise-specific Large Language Models (LLMs) requires that companies catalog, replicate, ETL (Extract, Transform and Load), and normalize this data from many different enterprise systems and sources containing both structured (records) and unstructured (documents, photos, videos, etc) data. Often, the storage target is a data lake used for retaining and processing the enterprise unique and proprietary data. In this presentation, we will describe different hardware and software architectures for implementing a data lake and data lakehouse used for managing and processing the enterprise data used in AI training and inference. On the hardware area, we will discuss use of All-Flash storage servers for high performance data lakes as well as large capacity disk-based storage servers for high-capacity cost effective storage. Both systems are commonly used with scale-out object and file systems. On the software area, we will discuss options in both file and object storage as well as database systems.
Author Bio:
Wendell Wenjen is director of storage market development for Supermicro where he leads storage product marketing. He was previously at Intel, Seagate, LG Electronics, Flex and Acer where he held a variety of product management, business development and marketing roles in storage and enterprise computing and started businesses with revenue over $2B. He started his career as a software engineer at Hughes Aircraft developing air traffic control systems. He has published two papers in the IEEE Aerospace Applications conference and holds a patent on server I/O technology. He holds a B.S. and Masters of Engineering in Electrical Engineering from Harvey Mudd College and an MBA in marketing from UCLA
Russ Fellows, VP, Futurum Group
Russ brings over 25 years of diverse experience in the IT industry to his role at The Futurum Group. As a partner at Evaluator Group, he built the highly successful lab practice, including IOmark benchmarking. Prior to Evaluator Group he worked as a Technology Evangelist and Storage Marketing Manager at Sun Microsystems. He was previously a technologist at Solbourne Computers in their test department and later moved to Fujitsu Computer Products. He started his tenure at Fujitsu as an engineer and later transitioned into IT administration and management.
Nilesh Shah, VP Business Development, ZeroPoint Technologies
Presentation Title:
AI’s Data Tsunami: How NeoCloud is Reshaping Memory & Storage Markets
Presentation Abstract:
The next wave of AI-driven NeoCloud data centers is set to transform the memory and storage industry, with demand surging for HBM, LPDDR, SSDs, and HDDs across training, inference, and fine-tuning workloads. This expert panel will project the evolving market landscape, analyzing attach rates for memory and storage across local and network-attached architectures, including CXL, PCIe, UALink, UltraEthernet, and proprietary N-Link solutions. We’ll examine how AI model growth—foundational models, checkpointing, VectorDBs, KV cache, and memory-bound inference—is driving capacity needs and power allocation shifts between compute, memory, and storage. With insights from leading accelerator vendors, hyperscalers, memory and storage providers, and industry analysts, this session will uncover how AI workloads will reshape infrastructure investments, power budgets, and the future economics of the memory and storage industry.
Author Bio:
Nilesh Shah is VP of Business Development at ZeroPoint Technologies. Additionally, he participates in and contributes regularly to standards bodies such as SNIA, OCP, JEDEC, RISC-V, and the CXL Consortium. He is regularly invited to speak at conferences, has led multiple panels, and is featured in analyst and press interviews focused on AI and memory technologies. Previously, Nilesh led Strategic Planning at Intel Corporation's Non-Volatile Memory Solutions Group, where he was responsible for the product planning and launch of the data center SSD products and pathfinding innovations. Nilesh advises GPU and memory chiplet startups.
Ellie Wang, Analyst, TrendForce
Presentation Title:
How AI Growth Will Drive HBM Demand Beyond 2025, Shaping Product Evolution and M
Presentation Abstract:
The demand for HBM is expected to surge beyond 2025, driven by rapid advancements in AI and high-performance computing. This presentation analyzes long-term demand trends, focusing on innovations in memory stacking technology, core chip density, and bandwidth efficiency. We will explore positive factors such as increased AI investments, faster ASIC development, and initiatives like DeepSeek that lower AI adoption barriers. Conversely, challenges include global economic uncertainties, potential oversupply risks, and impacts of national policies. Our discussion will provide a comprehensive assessment of these dynamics and suggest effective strategies for risk management within the evolving HBM landscape. This insight will be crucial for stakeholders looking to navigate the future of the HBM market.
Author Bio:
Primarily conducts research on DRAM supply-side capacity and technology development. Additionally, investigates the development of HBM generation, including supply and demand dynamics. On the demand side, the focus is on consumer market analysis.
Presentation Session Description:
In this session, we delve into the transformative impact of AI on data infrastructure and memory storage, as enterprises increasingly harness advancements in generative AI and large language models. Central to these discussions are the architectures necessary for managing and processing vast quantities of both structured and unstructured data, with a focus on the implementation of high-performance data lakes and lakehouses utilizing both high-capacity disk-based and All-Flash storage solutions. The session further explores the burgeoning demand in the memory and storage sectors catalyzed by AI-driven NeoCloud data centers. Through expert panel insights, we will examine the evolution of AI model requirements, including foundational models and memory-bound inference, and their implications for infrastructure investments and economic strategies. Additionally, the growing demand for High Bandwidth Memory (HBM) will be analyzed, highlighting innovations in memory stacking and bandwidth efficiency amidst potential risks like oversupply and economic fluctuations. Attendees will gain a comprehensive understanding of the strategic considerations necessary for leveraging AI advancements while maintaining robust and scalable data infrastructures.
PRO COLD-102-1: Media and AI: Exploring Innovation in Sustainable Long-Term Storage
Ballroom C (Santa Clara Convention Center, First Floor)
Track: Cold Data
Organizer:
Rich Gadomski, Head of Tape Evangelism, FUJIFILM Recording Media USA
As Head of Tape Evangelism for FUJIFILM Recording Media U.S.A., Inc., Rich is responsible for driving industry awareness and end user understanding of the purpose and value proposition of modern tape technology. Rich joined Fujifilm in 2003 as Director of Product Management, Computer Products Division, where he oversaw marketing of optical, magnetic, and flash storage products.
Presenters:
Alistair Symon, IBM VP of Storage Systems Development, IBM
Presentation Title:
Tape vs red-hot power hogs
Presentation Abstract:
The higher power needs of data centers drive more focus on efficient storage. Tape for cold archive data meets that need. Tape's low power requirements, sustainability characteristics, and security capabilities are growing the tape industry. This presentation describes the latest tape technology, illustrates the modernization of archive solutions, especially ease of attachment, and explores how tape plugs into the growing AI industry.
Author Bio:
Alistair is the head of development for storage systems in IBM where he leads the development of IBM storage products including All Flash Arrays, Hybrid Disk Systems and Tape. Prior to this Alistair was Vice President of Distributed Storage Development leading worldwide development of the XIV, All Flash Arrays and SVC/Storwize products. Earlier in his career, Alistair was responsible for IBM's Enterprise Storage Systems including DS8000 and Tape Systems. He has also led the development of IBM’s Storage Software products including the Spectrum Control and Spectrum Protect products that enable customers to manage and backup their data centers. He was the manager of storage development in the UK. In this role he was responsible for the development of the SAN Volume Controller, IBM’s software for virtualizing storage area networks, and the RAID engine for ESS 800 and DS8000. Alistair received his BSc in Computer Science from the University of Warwick in the UK.
David Landsman, Distinguished Engineer, Western Digital Corporation
Presentation Title:
HDDs, Workhorse of the Datacenter
Presentation Abstract:
As the volume of digitized data skyrockets, the demand for novel storage solutions is significantly increasing, and new forms of media (DNA, glass, ceramic, etc.) are being considered for the deep cold archive. Meanwhile, technical advances and capacity growth in all storage tiers are needed and the HDD remains vital for storing a vast amount of data, especially as trends in AI/ML enable and require more data to stay at warmer temperatures so it can be mined and actively used. This talk will discuss the continued vitality of the HDD business, how and why the HDD tier will continue to grow, and the technical advancements fueling the HDD’s continued role as the workhorse of the datacenter, including areal density improvements and performance optimizations to keep up with the growing capacity.
Author Bio:
Dave is a Distinguished Engineer and Director of Storage Standards at Western Digital Corporation. His early years were at Intel, where he wrote early generation Ethernet driver software, worked as a product planner on a generation of the Intel i860 superscalar processor, and was a member of Intel’s technical liaison team with Microsoft, where he focused on getting Windows support for Intel processor, graphics, and chipset features. Dave began his career in storage in 2004, joining mSystems, which led to Sandisk and Western Digital, by acquisitions. Since 2008, Dave has been a leader in the storage interface standards community, driving Sandisk’s and Western Digital’s engagements in NVMe, PCIe, SAS/SCSI, ATA/SATA, TCG, JEDEC, OCP, SNIA, SFF, CFA, and others. In 2019, he helped found the DNA Data Storage Alliance. He recently published the Alliance’s DNA Stability Evaluation Method specification and has published various articles and white papers on DNA data storage with the Alliance. He received his B.A. degree in Computer Science from the University of California, San Diego, in 1980.
Ilya Kazansky, CEO, SPhotonix
Presentation Title:
Growth of AI and its Impact on Sustainable Data Storage Solutions
Presentation Abstract:
Net data generation has continued to grow, and its growth curve has steepened dramatically over the last few years, fueled by the start of mainstream AI adoption. At the current trajectory, it may become unsustainable to accommodate the continued demand for data storage, especially in cloud and AI datacenters. Three core factors create a challenge that may slow the development of AI capabilities and their further adoption: data density, durability, and material source sustainability. The industry now needs to race to solve some of the challenges in sustainably supporting the growing demand for data storage, and given that 60% of data stored today is in cold storage, that tier should be the priority. 5D optical storage is now being pursued by multiple ventures and shows some of the most promising characteristics in data density, durability, and material sustainability, especially when considering technology readiness level. Each emerging technology has its advantages and disadvantages for individual applications; we will explore these in the context of sustainably supporting the growing data storage needs driven by AI adoption.
Author Bio:
Ilya, CEO of SPhotonix, has 15+ years of experience as a tech entrepreneur, senior executive, board director, and advisor across IoT, AI, ML, and big data products in several verticals. He has founded, funded, and built multiple teams, products, and companies, resulting in three successful exits.
Steffen Hellmold, President, Cerabyte, Inc.
Presentation Title:
Permanent Data Storage for the Digital Age
Presentation Abstract:
Digital data is inherently fragile, prone to loss, and reliant on rapidly changing technology. The lack of reliable solutions to preserve and ensure access to digital data risks the loss of invaluable knowledge, history, and culture. The threat of the Digital Dark Age is real at the micro and macro level without a permanent data storage solution. Long-term permanent data storage must be affordable as well as sustainable, without being susceptible to bit rot and requiring no periodic maintenance, environmental control, or other energy to retain the data stored. Ceramic Data Storage holds promise to offer such a solution, reliably retaining digital data for centuries or millennia and overcoming the media-life limitations associated with all current commercial storage media technologies. In addition to technical data storage solutions, the industry must establish worldwide standards that allow archives to be identified and the archived data to be independently retrieved at a future point in time.
Author Bio:
Steffen has more than 25 years of industry experience in product, technology, business & corporate development as well as strategy roles in semiconductor, memory, data storage and life sciences. He served as Senior Vice President, Business Development, Data Storage at Twist Bioscience and held executive management positions at Western Digital, Everspin, SandForce, Seagate Technology, Lexar Media/Micron, Samsung Semiconductor, SMART Modular and Fujitsu. He has been deeply engaged in various industry trade associations and standards organizations including co-founding the DNA Data Storage Alliance in 2020 as well as the USB Flash Drive Alliance, serving as their president from 2003 to 2007. He holds an economic electrical engineering degree (EEE) from the Technical University of Darmstadt, Germany.
John Monroe, President, Furthur Market Research, LLC
Presentation Title:
"Like Nothing We've Ever Seen Before": The Impacts of GenAI on Enterprise Data
Presentation Abstract:
In the GenAI era, all data stored on enterprise-grade technologies will be increasingly valued as "indispensable," but ~70% of all this data will continue to become cool or cold or frozen within 60 days of their creation, with infrequent access times ranging from days to years, and should therefore be managed as an "active archive." Without transformative new technologies, 2031-2050 annual growth rates that merely mimic ~25% historic norms cannot be feasibly sustained. Even with conservative estimates of annual growth rates declining from 25.5% in 2025 to less than 10% 2040-2050, the "active archive" of enterprise data will expand from ~5ZB in 2025 to ~170ZB in 2050.
Author Bio:
John Monroe has been involved with the storage industry for more than 40 years, beginning in 1980. •  1997-2022: 25 years as a VP Analyst at Gartner, covering the history and forecasting the future of consumer and enterprise storage markets. •  1990-1997: VP of all storage lines at SYNNEX Information Technologies (now TD SYNNEX), a global distribution and manufacturing services firm. •  1988-1990: Director of North American Sales for Kalok Corporation (a startup HDD manufacturer). •  1983-1988: Part owner and general manager of Media Winchester, Ltd., a storage products distributor and integrator which was one of Seagate’s inaugural “SuperVARs.” •  1980-1983: Monroe began his career in 1980 at Electrolabs, selling ICs, power supplies, cables, monitors, printers, 8-inch floppy disk drives, and 8-inch HDDs (“oddments of all things” related to computing electronics). Unlike most industry analysts, Monroe has had balance-sheet accountability for the stuff that he studies. Monroe earned a BA summa cum laude, Phi Beta Kappa, from Amherst College in 1976 and an MFA with a merit scholarship from Columbia University in 1980.
Presentation Session Description:
In the face of an unprecedented surge in data generation driven by AI and digital transformation, this session addresses the critical need for innovative and sustainable data storage solutions. Common themes across the presentations include the exploration of advanced storage media such as tape, HDD, DNA, glass, ceramic, and emerging 5D optical storage technologies, each offering unique benefits in terms of power efficiency, durability, and sustainability. The discussions highlight the importance of cold and active archives, emphasizing the need for solutions that can seamlessly integrate with AI-driven workflows while maintaining low energy consumption. As the industry grapples with the challenges of data density, material sustainability, and the looming threat of a Digital Dark Age, the session underscores the necessity for permanent, reliable storage solutions that transcend current media lifespan limitations. With a focus on innovation and standardization, this session aims to equip professionals with the insights needed to sustainably manage the ever-growing data demands of the future.
PRO COMP-102-1: Computational Storage Implementations and Ideas
Ballroom E (Santa Clara Convention Center, First Floor)
Track: Computational Storage
Organizer:
Scott Shadley, Director of Leadership Narrative and Evangelist, Solidigm
Scott Shadley has spent over 25 years in the semiconductor and storage space. He has time in Production, Engineering, R&D, Customer focused roles including Marketing and Strategy. His current focus is in efforts to drive adoption of new storage technology as a Director of Leadership Narrative and Evangelist at Solidigm. He has been a key figure in promoting SNIA as a Board member and leading the computational storage efforts as a co-chair of the SNIA Technical Working Group. He participates in several industry efforts like Open Compute, NVM Express and is seen as a subject matter expert in SSD technology and semiconductor design. He has and still speaks on the subject at events like the Open Compute Summit, Flash Memory Summit, SDC, and many other events, press interviews, blogs, and webinars. While at NGD Systems, Scott developed and managed the Computational Storage products and ecosystem. Scott previously managed the Product Marketing team at Micron, was the Business Line Manager for the SATA SSD portfolio, and was the Principal Technologist for the SSD and emerging memory portfolio. He launched four successful innovative SSDs for Micron and two for STEC, all of which were billion dollar programs. Scott earned a BSEE in Device Physics from Boise State University and an MBA in marketing from University of Phoenix.
Presenters:
Pramod Peethambaran, Director of Engineering, Data Fabric Solutions, Memory Solutions, Samsung
Presentation Title:
Zero ETL Use Case of Data-Centric Computing
Presentation Abstract:
Zero ETL is one of the popular use cases in the database world; today it incurs multiple data transfer and copy overheads. This talk focuses on the Cognos zero-ETL framework, which realizes near-memory compute over fabric to reduce or eliminate the extra data transfer and copy in the ETL/ELT path by providing an easy interface that allows data modelers and AI engineers to perform data-intensive tasks (e.g., synthetic data generation, ML calculation, and transformations) near the data source. This reduces the need for big data extract and load, enabling faster and more TCO-effective ETL/ELT pipelines.
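Illustrative note (not the Cognos API): a toy contrast between classic ETL, which copies the full table before transforming, and a near-data approach that ships the transformation to an executor beside the data and returns only the result.

    # Classic ETL copies the whole table out before transforming, while a
    # zero-ETL / near-data approach ships the transformation to an executor
    # next to the data and returns only the (much smaller) result. The executor
    # interface here is a placeholder.

    def classic_etl(fetch_all_rows, transform):
        rows = fetch_all_rows()          # full extract crosses the network
        return transform(rows)           # transform runs on the consumer side

    def near_data(submit_to_executor, transform):
        return submit_to_executor(transform)   # only the result crosses the network

    # Toy demonstration with an in-process "remote" table.
    TABLE = [{"region": r, "sales": s} for r, s in [("east", 5), ("west", 7), ("east", 3)]]
    by_region = lambda rows: {
        k: sum(r["sales"] for r in rows if r["region"] == k) for k in {"east", "west"}
    }
    print(classic_etl(lambda: TABLE, by_region))
    print(near_data(lambda fn: fn(TABLE), by_region))   # same answer, no bulk copy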
Author Bio:
Pramod Peethambaran is a technology leader and IEEE senior member with 20+ years of experience building multifunctional teams. The world-class products his teams have built cater to cloud infrastructure focused on storage/memory, system software and AI/ML. As Director of Engineering for Memory Solutions Lab at Samsung Semiconductor, he has led the research, development and delivery of high-performance and scalable storage and memory solutions powered by leading-edge technologies like computational storage and Compute Express Link (CXL) for AI/ML, video analytics and data analytics. He is passionate about building 0-to-1 products. Most recently, Pramod led development of a first-of-its-kind product from Samsung — a memory software platform called Cognos (rack-scale heterogeneous memory management and orchestration platform with memory tiering). He has authored IEEE publications and holds patents in the field of AI/ML processor architecture optimizations, with some pending in storage solutions. Pramod earned a Master’s in Technology from National Institute of Technology in Calicut, India, and is Fellow for international professional societies like IETE, Hackathon Raptors and NIPES.
Jongryool Kim, Research Director, SK hynix
Presentation Title:
Cost Efficient LLM Training with Computational SSD
Presentation Abstract:
The rapid growth of Deep Neural Network parameter counts after the introduction of Large Language Models (LLMs) has led to the development of a variety of distributed training techniques which allow large models to fit in limited GPU memory. One such technique involves offloading of optimizer state from GPU HBM to larger memory tiers such as DRAM or SSDs, in order to reduce the number of GPU devices required to store the model parameters. When offloading optimizer state, weight update is performed by either the host CPUs or near-storage accelerators in order to minimize data movement, and only gradients and updated parameters are communicated with the GPUs. We present Checkpoint Offloading SSD (CO SSD), an implementation of optimizer and checkpoint offload on a computational SSD prototype. CO SSD offloads optimizer state to the NVMe SSDs and performs weight update and checkpoint using quad-core ARM cores. By replacing conventional SSDs with CO SSDs, we are able to move logic related to optimizer step and checkpoint to the SSD to minimize the host and network overhead. We integrate the CO SSD prototype with DeepSpeed, an open-source distributed LLM training library, and evaluate it.
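Illustrative note (not the CO SSD firmware or DeepSpeed interface): a toy sketch of the offload split, where the offload side holds the optimizer state and performs the weight update, so only gradients and updated weights move between the two sides.

    import numpy as np

    # The "GPU" side produces gradients; an SSD-resident worker (simulated here
    # in-process) keeps the optimizer state and performs the weight update.
    # Only gradients and updated weights cross the interface.

    class OffloadedSGD:
        """Momentum SGD whose state (velocity) lives on the offload side."""
        def __init__(self, weights, lr=0.01, momentum=0.9):
            self.w = weights.copy()
            self.v = np.zeros_like(weights)      # optimizer state stays off-GPU
            self.lr, self.momentum = lr, momentum

        def step(self, grads):                   # receives gradients only
            self.v = self.momentum * self.v - self.lr * grads
            self.w += self.v
            return self.w                        # returns updated weights only

    rng = np.random.default_rng(0)
    opt = OffloadedSGD(weights=rng.normal(size=4))
    for _ in range(3):
        fake_grads = rng.normal(size=4)          # would come from the GPU backward pass
        updated = opt.step(fake_grads)
    print(updated)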
Author Bio:
Dr. Jongryool Kim is currently serving as the research director of AI System Infra team at SK hynix Inc., located in San Jose, California. He has been a part of the SK hynix since 2020, during which time he has been conducting research and development of numerous advanced projects such as custom HBM, CXL Pooled memory, computational CXL memory and storage, and object interface storage solution for AI/HPC systems. Additionally, he is a member of the Open Computing Project (OCP) Future Technology Initiative (FTI), working for the data-centric computing (DCC) workstream. Dr. Kim is also a Science Advisory Board (SAB) member of Semiconductor Research Corporation (SRC) JUMP 2.0. Prior to this role, he had served as the cloud system architect at Samsung Mobile division developing and operating a Samsung Cloud data analytics system that manages and analyzes data from all Samsung devices (smart phones, wearable devices, and home appliances) around the world. Additionally, he worked with various R&D teams at Samsung SW R&D Center. He conducted research to improve network and storage IO performance in High Performance Computing (HPC) and Cloud.
Andrew Walls, Retired IBM Fellow and Owner, Great Walls of Storage LLC
Presentation Title:
Using AI to Optimize a Grid of FlashArrays
Presentation Abstract:
Storage arrays and SSDs generate a large amount of metadata. This data provides information on the capacity, performance, status, errors, configuration, network, etc. IBM has also used its computational storage devices called FCMs to shape and use data to train AI models in many ways. This data is used to detect ransomware attacks. IBM has used the large set of data to optimize storage utilization by advising where data should be placed. It has used the data successfully to detect and predict support issues. This presentation will show how the broader set of data generated by a grid of storage arrays can be used to optimize the utilization, cost, performance, support and security of that estate. Trained models can be used to determine which Flash Array is best suited for a given application. Using detailed performance and error information, AI models can predict issues and make suggestions how to mitigate or eliminate the problem thereby reducing repair or outage time. We will discuss the future of using such composable infrastructure to tune and train the models in the arrays themselves. This will increase the accuracy of the inference engines for that estate of storage.
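Illustrative note (not IBM's models): a simple sketch of flagging an outlier array from fleet telemetry; a z-score threshold stands in for the trained models described above, and the metric names are invented.

    import statistics

    # Flag arrays whose latency deviates strongly from the rest of the fleet.
    # The telemetry fields and threshold are illustrative only.

    fleet = {
        "array-01": {"read_latency_ms": 0.6},
        "array-02": {"read_latency_ms": 0.7},
        "array-03": {"read_latency_ms": 0.5},
        "array-04": {"read_latency_ms": 2.9},   # the misbehaving one
        "array-05": {"read_latency_ms": 0.6},
    }

    def flag_outliers(metrics, field, z_threshold=1.5):
        values = [m[field] for m in metrics.values()]
        mean, stdev = statistics.mean(values), statistics.pstdev(values)
        return [name for name, m in metrics.items()
                if stdev and abs(m[field] - mean) / stdev > z_threshold]

    print(flag_outliers(fleet, "read_latency_ms"))   # -> ['array-04']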
Author Bio:
Andy Walls recently retired as Chief Architect and CTO for IBM's FlashSystems. He was also an IBM Fellow, IBM's most prestigious honor. He worked for IBM for 43 years and now owns the consultancy Great Walls of Storage and consults for IBM. Andy is a pioneer in bringing flash into the enterprise and has shaped the IBM storage portfolio to be highly differentiated and popular. He was responsible for the Texas Memory acquisition and defined and shaped the IBM FlashCore Module, which is at the heart of FlashSystems. He led, and continues to consult on, the architecture and definition of the entire FlashSystems NVMe product portfolio. Known as an innovator, he has filed or applied for over 150 patents.
Qing Zheng, Scientist, Los Alamos National Laboratory
Presentation Title:
Bringing Analytics to the Data: In-Storage Computing for pNFS
Presentation Abstract:
As datasets grow, minimizing unnecessary data movement has become essential for unlocking insights efficiently. A promising approach is to shift computation closer to storage. This talk presents our early efforts in making in-storage data analytics a reality for pNFS-based storage environments. With integration with popular SQL engines such as Presto and Spark, our approach allows for offloading data-intensive queries to pNFS data servers for execution as close to data as possible. Offloaded queries operate directly on files stored in open formats such as Apache Parquet and return results in Apache Arrow, all while maintaining the security and access controls of traditional file systems. A key enabler of this architecture is the ability for pNFS clients to identify the specific data server responsible for a file, so queries can be routed correctly. We also highlight a recent Linux kernel update that transparently converts remote pNFS reads to local file system reads when a requested file is found to be local. We report preliminary results and discuss future directions. This is a collaboration between Los Alamos, Hammerspace, and SK hynix.
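Illustrative note (hypothetical path and column names): a sketch of the kind of work an offloaded query performs on the pNFS data server that holds the file, using pyarrow to scan local Parquet data and return only a compact Arrow result; layout discovery and query routing are out of scope here.

    import pyarrow.compute as pc
    import pyarrow.parquet as pq

    # What an offloaded query might execute on the data server: scan the local
    # Parquet file, filter and aggregate there, and send back a small Arrow
    # table instead of raw rows. Path and column names are hypothetical.

    def run_offloaded_query(local_path="/data/trades.parquet", min_qty=100):
        table = pq.read_table(local_path, columns=["symbol", "qty", "price"])
        filtered = table.filter(pc.greater_equal(table["qty"], min_qty))
        # Aggregate near the data; the caller receives one row per symbol.
        return filtered.group_by("symbol").aggregate([("price", "mean"), ("qty", "sum")])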
Author Bio:
Qing Zheng is a Computer Scientist building next-generation HPC storage at Los Alamos National Laboratory. Qing's work focuses on developing, shaping, and leveraging emerging technologies to push for and bring solutions that are both impactful and applicable to the lab's computing environment and applications. Qing collaborates extensively with universities and industry partners. Their work spans parallel file systems, key-value storage, in-situ data algorithms, and near-data analytics. Qing holds a Ph.D. in Computer Science from Carnegie Mellon University and has been working in the HPC storage field since 2021.
Mahinder Saluja, Director of Technology and Storage Pathfinding, KIOXIA America, Inc.
Presentation Title:
Integrate Multiple Offload Fixed Function Storage Services to Storage Subsystem
Presentation Abstract:
The exponential increase in data and the rise of AI workloads create both new opportunities and challenges. This topic examines how integrating additional computing resources into the traditional storage subsystem can help address these challenges. It will include offloading fixed function storage services, namely – Compression/ Decompression, Dedup, RAID/Erasure Coding, RAID Rebuild, Encryption/Decryption, Copy Offload, and Data Scrubbing in aggregated and disaggregated storage systems.
Author Bio:
Mahinder has 20+ years of engineering leadership in innovative storage technologies development, building teams and product delivery. Currently Mahinder is one of the main leads for SSD technology strategy at KIOXIA America, Inc., collaborating with industry experts. He has several pending storage related patents.
Presentation Session Description:
This session delves into the evolving landscape of computational storage and its pivotal role in addressing the burgeoning demands of AI workloads and data management. Common themes across the presentations include the strategic offloading of computational tasks closer to storage to reduce data movement, enhance performance, and optimize resource utilization. Techniques such as Checkpoint Offloading SSDs and Near Data Compute (NDC) are explored, showcasing their ability to minimize latency and power consumption by performing operations near the data source. The integration of computational frameworks with storage arrays, as demonstrated by IBM’s use of metadata for AI model training, highlights the potential for improved storage optimization, security, and predictive maintenance. Additionally, the session will examine the benefits of in-storage data analytics, particularly in pNFS environments, to efficiently process large-scale queries while maintaining robust security protocols. By embracing these innovations, the session underscores the transformative impact of moving computation closer to data in enhancing the efficiency and scalability of modern storage systems.
PRO DCTR-102-1: Hyperscale Applications 2
Ballroom F (Santa Clara Convention Center, First Floor)
Track: Data Center Storage and Memory
Organizer + Chairperson:
Jonathan Hinkle, Senior Director - Azure Memory and Storage Pathfinding, Microsoft
Jonathan Hinkle is Senior Director - Azure Memory and Storage Pathfinding at Microsoft. He previously was in Micron's Storage Business Unit, where he investigated new technology and products, both internally and with customers and partners. Before that, he was Executive Director and Distinguished Researcher of System Architecture at Lenovo, where he led their research on datacenter computing architecture. Jonathan is an industry-leading technical expert in memory, storage devices, and data center systems architecture with over 24 years of experience. In the JEDEC standards organization, Jonathan serves on the Board of Directors as Vice-Chair of Marketing and chairs the CXL Memory Task Group, standardizing CXL-attached memory devices. He also invented and drove the first development of the EDSFF 1U Short (E1.S) NVMe drive, the VLP DIMM, and NVDIMM Persistent Memory. He has generated more than 34 granted or pending patents, and earned BS and MS degrees in Computer Engineering from North Carolina State University.
Presenters:
Mike Allison, Senior Director - NAND Product Planning - Standards, Samsung
Presentation Title:
Advances in Live Migration using NVMe Information Overlay
Presentation Abstract:
When live migrating a Guest VM from one controller to another, the Guest OS will see changes to the hardware properties of the controller. This causes problems for Live Migration scenarios where the migration needs to be completely opaque to the Guest VM. To ensure this currently, hypervisors need to intercept the virtual admin queue and make sure that the responses returned by the child controller are “fixed up” before the command is completed back to the Guest OS. But this comes at a cost of additional complexity in the virtualization stack, as well as increased latencies when processing admin commands. NVMe is defining a mechanism that allows the host to tell the migratable controller what data, or what error status code, it should return to the Guest OS when the Guest sends certain admin commands to the migratable controller. This presentation will walk through the process of migrating the Information Overlay.
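Illustrative note (not the NVMe TP4159 data structures): a conceptual model of an overlay table the hypervisor could hand to a migratable controller, mapping selected admin opcodes to the fixed data or status the guest should see; the field names and example opcode are illustrative.

    from dataclasses import dataclass
    from typing import Dict, Optional, Tuple

    # For selected admin opcodes the controller returns host-supplied data or a
    # host-supplied status instead of its own, so the guest sees identical
    # controller properties before and after migration.

    @dataclass
    class OverlayEntry:
        fixed_data: Optional[bytes] = None     # e.g. a pinned Identify data structure
        fixed_status: Optional[int] = None     # e.g. force a specific status code

    class MigratableController:
        def __init__(self, overlay: Dict[int, OverlayEntry]):
            self.overlay = overlay

        def admin_command(self, opcode: int) -> Tuple[int, bytes]:
            entry = self.overlay.get(opcode)
            if entry is None:
                return self._native_response(opcode)      # no overlay: normal path
            if entry.fixed_status is not None:
                return entry.fixed_status, b""            # host-chosen status
            return 0, entry.fixed_data or b""             # host-pinned data

        def _native_response(self, opcode: int) -> Tuple[int, bytes]:
            return 0, b"real controller data"             # placeholder

    ctrl = MigratableController({0x06: OverlayEntry(fixed_data=b"pinned identify page")})
    print(ctrl.admin_command(0x06))   # the guest always sees the pinned data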
Author Bio:
Mike Allison is a Sr. Director in the Samsung DSA Product Planning team focusing on standards for existing and future products. Mike is active in the many standards organizations that includes SNIA, NVM Express™, PCI Express™, DMTF, and OCP. He is the chair of the NVM Express Errata Task Group and the Representative on the OCP Steering Committee for the OCP Storage Project. He was the main author of the NVM Express TP4159 PCIe Infrastructure for Live Migration and TP4193 PCIe NVM Export Subsystem Migration which are associated with his presentation. For over 41 years, Mike has been an embedded firmware engineer and architect working on products for laser beam recorders, fighter aircraft, graphics cards, high end servers, and is now focusing on Solid State Drives. He holds 35 patents in graphics, servers, and storage. He has earned a BSEE/CS at University of Colorado, Boulder.
Ross Stenfort, System Engineer, Meta
Presentation Title:
State of Storage
Presentation Abstract:
This will cover current and future storage innovations and the benefits of these innovations.
Author Bio:
Ross Stenfort is a Hardware System Engineer at Meta delivering scalable storage solutions. He has been involved in the development of storage systems, SSDs, ROCs, HBAs and HDDs with many successful products and over 40 patents. Some of his industry and ecosystem activities include being OCP Storage Co-Lead and a NVM Express board member.
Lee Prewitt, Director Cloud Hardware Storage, Microsoft
Presentation Title:
Advances in Live Migration using NVMe Information Overlay
Presentation Abstract:
When live migrating a Guest VM from one controller to another, the Guest OS will see changes to the hardware properties of the controller. This causes problems for Live Migration scenarios where the migration needs to be completely opaque to the Guest VM. To ensure this currently, hypervisors need to intercept the virtual admin queue and make sure that the responses returned by the child controller are “fixed up” before the command is completed back to the Guest OS. But this comes at a cost of additional complexity in the virtualization stack, as well as increased latencies when processing admin commands. NVMe is defining a mechanism that allows the host to tell the migratable controller what data, or what error status code, it should return to the Guest OS when the Guest sends certain admin commands to the migratable controller. This presentation will walk through the process of migrating the Information Overlay.
Author Bio:
Lee Prewitt is a Director of Cloud Hardware Storage at Microsoft with 30+ years of storage industry experience ranging from Magneto-Optical to spinning rust to Flash. His former work at Microsoft has included working in the Windows and Devices Group where he was responsible for many of the components in the storage stack including File Systems, Spaces, Storport and Microsoft’s inbox miniport drivers (SD, UFS, NVMe, etc.). He currently works in the Azure Memory and Storage (AMS) group where he is responsible for future Data Center storage initiatives, specifications (OCP, NVMe, EDSFF, etc.), and evangelization.
Vineet Parekh, Hardware Systems Engineer, Meta
Presentation Title:
Flash Design in Hyperscalers: Challenges, Performance and Insights
Presentation Abstract:
Hyperscale environments push flash storage to its limits, revealing challenges in endurance, workload variability, and performance bottlenecks. This session explores real-world failures, optimization strategies, and key insights to improve flash reliability and efficiency at scale.
Author Bio:
Vineet Parekh is a Hardware System Engineer at Facebook working on storage. He has been involved in the development of SSDs, ROCs, HBAs, and HDDs, and has extensive experience building enterprise, client, and cloud storage systems and devices. As a Flash Platforms engineer on the Release-to-Production team, he collaborates cross-functionally with flash vendors, electrical engineers, and application developers at Meta to manage the lifecycle of Meta's database and cache hardware. He has a Master's and a B.S. in Electrical Engineering.
Presentation Session Description:
This session delves into the forefront of storage technology, highlighting innovations and challenges in the realm of data management. The presentations collectively explore the evolving landscape of storage solutions, from cutting-edge NVMe mechanisms that enhance live migration transparency in virtual environments, to breakthroughs in flash storage that address the complexities of hyperscale operations. Central themes include the drive for increased efficiency, reliability, and performance, underscored by a need to navigate the inherent trade-offs of advanced storage systems. Attendees will gain insights into overcoming latency issues, optimizing endurance, and managing workload variability, equipping them with strategies to leverage these innovations for enhanced data management and operational excellence.
Open INDA-102-1: PCIe® Technology in AI/ML: Maintaining High-Speed Connectivity
Ballroom D (Santa Clara Convention Center, First Floor)
Track: Industry Associations
Chairperson:
Al Yanes, President, PCI-SIG
Al Yanes has served as president of the PCI-SIG since 2003 and chairman since 2006 and is a Distinguished Engineer for IBM in the Systems & Technology Division. He has 26 years of experience working with ASIC design in the I/O industry. Yanes holds 25 patents for PCI™ and other I/O technologies. Yanes is a PCI Express® technology expert for the IBM Rochester office, and he is involved in I/O design for IBM’s Server products. Yanes holds a B.S. in computer engineering from Rensselaer Polytechnic Institute.
Panel Members:
Ron Lowman, PCIe/CXL Product Management, Synopsys
Ron is the PCIe/CXL Principal Product Manager for Synopsys’ Product Management Group (PMG). He has enjoyed over 10 years at Synopsys in multiple roles including Product and Strategic Marketing covering the entirety of the IP portfolio including Artificial Intelligence, the IoT, Security, Foundation IP, UALink, and many other IP within PMG’s IP portfolio. Prior to Synopsys he spent time at Motorola and Freescale. Ron holds a Bachelor of Science in Electrical Engineering from The Colorado School of Mines and a Master of Business Administration from the University of Texas at Austin.
Sam Kocsis, PCI-SIG Cabling Workgroup Chair/Director of Standards and Technology, Amphenol, PCI-SIG
Sam coordinates Amphenol’s engagement strategies in various industry standards and consortiums across networking, server/storage, optics, and commercial markets. He is active in IEEE 802.3, PCI-SIG and OCP projects, and currently serves as the Technical Committee Vice Chair at the OIF. Sam holds BSEE and MSEE degrees from the University of Rochester, in Rochester, New York.
Casey Morrison, Chief Product Officer, Astera Labs
Casey leads the product organization for Astera Labs with responsibility for defining products and ensuring seamless integration in customer systems. Casey’s career has been centered on helping customers solve complex challenges related to high-bandwidth, low-latency data interconnects. Formerly a head of Systems and Applications Engineering at Texas Instruments, Casey has helped to enable complex system topologies for a variety of applications spanning server, storage, networking, and wireless infrastructure.
Panel Session Description:
Hardware, software and system designers are facing rapidly increasing data demands and processing speeds in Artificial Intelligence (AI) and Machine Learning (ML) applications. PCI Express® (PCIe®) technology offers developers the low-latency and backwards compatibility needed to support today’s high-bandwidth AI applications. This panel session featuring PCI-SIG member companies will detail the features of the new PCIe 7.0 specification (targeting 128 GT/s raw bit rate, 512 GB/s bi-directionally via a x16 configuration) and how PCI-SIG’s continued doubling of the data rate allows AI chipset vendors and AI accelerator developers to maintain a clear path for growth today and into the future. Attendees will learn key PCIe technology benefits for AI/ML applications in a variety of design situations and how these were made possible by the transition to PAM4 signaling, backwards compatibility with previous generations, and low latency.
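As a quick sanity check on the throughput figures quoted above, the raw PCIe 7.0 numbers work out as follows. This is a back-of-the-envelope calculation of line rates only, before FLIT-mode encoding, FEC, and protocol overhead, so the results are upper bounds rather than deliverable application bandwidth.

```python
# PCIe 7.0 raw bandwidth, as quoted: 128 GT/s per lane, x16 link width.
# PAM4 carries 2 bits per symbol, so 128 GT/s corresponds to a 64 GBd
# symbol rate; each "transfer" is still one raw bit.
RAW_GT_PER_S = 128          # giga-transfers per second, per lane, per direction
BITS_PER_TRANSFER = 1       # one raw bit per transfer
LANES = 16

per_lane_GBps = RAW_GT_PER_S * BITS_PER_TRANSFER / 8   # 16 GB/s per lane
per_direction_GBps = per_lane_GBps * LANES              # 256 GB/s per direction
bidirectional_GBps = per_direction_GBps * 2             # 512 GB/s bi-directional

print(per_lane_GBps, per_direction_GBps, bidirectional_GBps)
```

The 512 GB/s figure in the session description is this bidirectional x16 total, which is double the per-direction raw rate.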
PCI-SIG
PRO SSDT-102-1: Flash and SSD Controller Technologies for AI
Ballroom G (Santa Clara Convention Center, First Floor)
Track: SSD Technology
Chairperson:
Erich Haratsch, Senior Director Architecture, Marvell Semiconductor
Erich Haratsch is the Senior Director of Architecture at Marvell, where he leads the architecture definition of SSD and storage controllers. Before joining Marvell, he worked at Seagate and LSI, focusing on SSD controllers. Earlier in his career, he contributed to multiple generations of HDD controllers at LSI and Agere Systems. Erich began his career at AT&T and Lucent Bell Labs, working on Gigabit Ethernet over copper, optical communications, and the MPEG-4 video standard. He is the author of over 40 peer-reviewed journal and conference papers and holds more than 200 U.S. patents. A Senior Member of IEEE, Erich earned his MS and PhD degrees from the Technical University of Munich, Germany.
Presenters:
Erich Haratsch, Senior Director Architecture, Marvell Semiconductor
Presentation Title:
Optimizing Flash and Storage Controllers for the AI Data Center
Presentation Abstract:
The transformational launch of GPT-4 has accelerated the race to build AI data centers for large-scale training and inference. While GPUs and high-bandwidth memory are well-known critical components, the essential role of storage devices in AI infrastructure is often overlooked. This presentation will explore the AI processing pipeline within data centers, emphasizing the crucial role of storage devices in both compute and storage nodes. We will examine the characteristics of AI workloads to derive specific requirements for flash and storage controllers.
Author Bio:
Erich Haratsch is the Senior Director of Architecture at Marvell, where he leads the architecture definition of SSD and storage controllers. Before joining Marvell, he worked at Seagate and LSI, focusing on SSD controllers. Earlier in his career, he contributed to multiple generations of HDD controllers at LSI and Agere Systems. Erich began his career at AT&T and Lucent Bell Labs, working on Gigabit Ethernet over copper, optical communications, and the MPEG-4 video standard. He is the author of over 40 peer-reviewed journal and conference papers and holds more than 200 U.S. patents. A Senior Member of IEEE, Erich earned his MS and PhD degrees from the Technical University of Munich, Germany.
Nick Huang, Principal Engineer, Silicon Motion Inc.
Presentation Title:
A Distributed Controller for Flexible Applications in the AI Era
Presentation Abstract:
As NAND specifications continue to evolve and the throughput of NAND and host interfaces increases, significant gaps remain between the demands of different applications. In the AI era, we propose a flexible architecture known as the distributed controller to address these challenges. This architecture divides the traditional NAND controller into two components: a Flash Processing Unit (FPU) and a Data Processing Unit (DPU). Leveraging advanced chiplet and packaging technologies, the FPU is placed closer to the NAND die, providing more stable ONFI signals and higher I/O speeds while eliminating the need for an interface chip. The FPU ensures error-free NAND access with physical addressing, functioning similarly to an open-channel SSD. The DPU handles host protocols and supports additional data processing tasks, such as compression, deduplication, or inference acceleration. Both the FPU and DPU can be tailored to meet the specific needs of different applications. This modular approach enhances the adaptability of NAND storage, making it suitable for a wide variety of use cases.
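To make the proposed split concrete, the following is a minimal, purely illustrative sketch of how the FPU and DPU roles described above could be divided. The class and method names are hypothetical, not Silicon Motion's actual interfaces, and placement, ECC, and data-processing logic are reduced to trivial stand-ins.

```python
# Illustrative FPU/DPU split: the FPU offers error-free, physically addressed
# NAND access (open-channel style); the DPU handles the host side plus
# optional data processing such as compression. Names are hypothetical.
import zlib

class FlashProcessingUnit:
    """Near-NAND die: ONFI signaling, ECC, physical addressing."""
    def __init__(self):
        self.blocks = {}                                 # phys addr -> stored bytes

    def read(self, phys_addr) -> bytes:
        return self.blocks.get(phys_addr, b"")           # already error-corrected

    def write(self, phys_addr, data: bytes) -> None:
        self.blocks[phys_addr] = data

class DataProcessingUnit:
    """Host-facing: protocol handling, logical-to-physical mapping, compression."""
    def __init__(self, fpu: FlashProcessingUnit):
        self.fpu = fpu
        self.l2p = {}                                    # logical LBA -> phys addr

    def host_write(self, lba: int, data: bytes) -> None:
        phys = ("ch0", 0, 0, lba)                        # trivial placement policy
        self.fpu.write(phys, zlib.compress(data))        # DPU-side data processing
        self.l2p[lba] = phys

    def host_read(self, lba: int) -> bytes:
        return zlib.decompress(self.fpu.read(self.l2p[lba]))

# Usage sketch:
dpu = DataProcessingUnit(FlashProcessingUnit())
dpu.host_write(0, b"x" * 4096)
assert dpu.host_read(0) == b"x" * 4096
```

The point of the split is that either side can be swapped independently: the FPU tracks NAND generations and signaling, while the DPU can be re-targeted for compression, deduplication, or inference acceleration without touching the near-NAND logic.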
Author Bio:
Nick is an experienced hardware designer and NAND interface architect at Silicon Motion. He specializes in optimizing the integration of Low-Density Parity-Check (LDPC) coding with NAND interfaces to achieve maximum efficiency. Nick holds a Master’s degree in Electrical Engineering, with a focus on hardware implementation of error correction coding, from National Tsing Hua University, Taiwan.
Robert Sykes, Director Technical Product Management, Micron
Presentation Title:
Future Architectures for AI workloads
Presentation Abstract:
This presentation reviews PCIe performance over its generations, how the industry is reacting, and why AI is driving this need. It will cover NAND (number of planes, channels) and controller capabilities, how controller architecture plays its part in meeting future storage needs, and how this relates to top-end performance for AI workloads. Specifically, the presentation will consider the bottlenecks in the system that limit performance, including host factors, controller design, DRAM, and NAND capabilities, and poses the question: what is required to enable PCIe Gen7 performance?
Author Bio:
Rob Sykes is Director of Technical Product Marketing at Micron Technology where he focuses on the definition of Micron's SSD Enterprise Controllers. As a technology leader and visionary for over 25 years, he has been a key figure in the development of multiple generations of PCIe/NVMe products. He holds patents in ASIC/FW architecture and has been a presenter at FMS since 2012, covering FTL, Flash Architectures, Futures of Memory Storage and more. He holds an MSc in Computer Science from Bristol University (UK).
Roman Pletka, Senior Research Scientist, IBM Research - Zurich
Presentation Title:
SSD controller architecture for similarity search in Vector DBs
Presentation Abstract:
Generative AI applications are currently transforming industries by their ability to answer questions and generate content. Although LLMs are trained with an immense amount of information, generated results may be hallucinatory or not up to date. Hence, semantic search technologies providing context-relevant input are indispensable to reduce these effects. This context is obtained using a process called Retrieval Augmented Generation (RAG), which extracts related facts from large data stores such as vector databases. The number of vectors to be searched is growing toward several billions and can no longer be kept in DRAM, motivating offloading into storage devices. We present CSD SSD controller architectures performing in-storage similarity searches and review data placement strategies for highly parallelized processing of similarity searches in storage that can scale to multiple billions of vectors within a single device. In particular, we present results from an implementation using inverted-index and graph-based approaches providing coarse- and fine-grained searching capabilities, and introduce NVMe CSD interfaces to handle Vector DB information and perform searches efficiently.
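For readers less familiar with the coarse-then-fine pattern the abstract mentions, the following is a minimal host-side sketch (assuming NumPy) of an inverted-index search: a coarse step picks a few candidate lists, then exact distances are computed over the short-listed vectors. That inner distance loop is the kind of computation a computational storage device would offload; the sketch is illustrative only and is not the authors' implementation.

```python
# Toy inverted-index (IVF-style) nearest-neighbor search.
# The exact-distance loop in search() is the offloadable hot spot.
import numpy as np

def build_ivf(vectors: np.ndarray, n_lists: int, iters: int = 10):
    """Toy k-means clustering that forms the inverted lists."""
    rng = np.random.default_rng(0)
    centroids = vectors[rng.choice(len(vectors), n_lists, replace=False)]
    for _ in range(iters):
        assign = np.argmin(
            np.linalg.norm(vectors[:, None] - centroids[None], axis=2), axis=1)
        for c in range(n_lists):
            members = vectors[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    lists = {c: np.where(assign == c)[0] for c in range(n_lists)}
    return centroids, lists

def search(query, vectors, centroids, lists, n_probe=2, k=5):
    # Coarse step: choose the closest inverted lists.
    probe = np.argsort(np.linalg.norm(centroids - query, axis=1))[:n_probe]
    cand = np.concatenate([lists[c] for c in probe])
    # Fine step: exact distances over the candidates (the offloadable part).
    d = np.linalg.norm(vectors[cand] - query, axis=1)
    return cand[np.argsort(d)[:k]]

vecs = np.random.default_rng(1).standard_normal((10_000, 64)).astype(np.float32)
cents, lsts = build_ivf(vecs, n_lists=32)
print(search(vecs[123], vecs, cents, lsts))
```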
Author Bio:
Roman Pletka is a senior research scientist and master inventor for storage and AI systems at the IBM Zurich Research Laboratory, where he focuses on non-volatile memory technologies and AI in storage systems. He is a frequent speaker at international conferences, has published over 20 articles, and has obtained more than 130 patents in managing non-volatile memories, security, scalability, and availability of distributed storage systems, as well as quality-of-service in high-speed networks, active networks, and network processors. He has presented at many international conferences including the ACM International Conference on Systems and Storage (SYSTOR) and the Nonvolatile Memory Workshop. He earned a PhD in computer networking from ETH Zurich, Switzerland, holds an MS in the same subject from EPFL (Swiss Federal Institute of Technology of Lausanne), and has over 20 years of experience in storage systems research.
Presentation Session Description:
In the rapidly evolving landscape of AI-driven data processing and storage, the convergence of advanced NAND architectures, PCIe development, and AI data center requirements is redefining the future of computing. This session explores the innovative approaches in storage controller designs and interface technologies essential for meeting the demands of AI applications. Central to this discussion is the distributed controller architecture, which separates the Flash Processing Unit (FPU) and Data Processing Unit (DPU), enhancing the modularity and flexibility of NAND storage to optimize performance across diverse applications. As AI workloads continue to grow in complexity, the need for high-throughput, efficient storage solutions becomes critical, with PCIe evolution and advanced NAND capabilities playing pivotal roles. The session also examines the integration of cutting-edge semantic search technologies like Retrieval Augmented Generation (RAG) in Generative AI, leveraging storage devices for in-storage similarity searches to manage the massive scale of data. By delving into these themes, this session highlights the critical innovations and architectural strategies that are enabling the next generation of AI infrastructure, ensuring robust and scalable solutions capable of supporting the burgeoning demands of AI workloads.
10:50 AM to 11:00 AM
Open SPEC-101-1: Chairperson's Welcome
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Special Sessions
Speakers:
Tom Coughlin, FMS General Chair, FMS: The Future of Memory and Storage
Tom Coughlin, FMS General Chair, is President, Coughlin Associates. Tom is a digital storage analyst and business/technology consultant with over 40 years in the data storage industry in engineering and senior management positions. Coughlin Associates consults, publishes books as well as market and technology reports, and puts on digital storage and memory-oriented events. He is a regular contributor for forbes.com and M&E organization websites. He is an IEEE Fellow, 2025 IEEE Past President, Past-President IEEE-USA, Past Director IEEE Region 6 and Past Chair Santa Clara Valley IEEE Section, and is also active with SNIA and SMPTE. For more information on Tom Coughlin go to www.tomcoughlin.com.
Special Presentation Description:
Welcome to the 2025 Future of Memory and Storage (FMS), where it all comes together: digital storage and memory technologies and their applications. With its 19-year history of programs, keynotes and exhibits, FMS is the largest independent digital storage and memory event in the world. Learn why the conference is a must-attend industry networking event.
11:00 AM to 11:30 AM
Open KEYN: Keynote 1: KIOXIA: Optimize AI Infrastructure Investments with Flash Memory Technology and Storage Solutions
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Neville Ichhaporia, Sr VP and GM of the SSD Business Unit, KIOXIA America
Neville Ichhaporia is Senior Vice President and General Manager of the SSD Business Unit at KIOXIA America, Inc., responsible for SSD marketing, business management, product planning, and engineering, overseeing the SSD product portfolio aimed at cloud, data center, enterprise, and client computing markets. Neville has over 20 years of extensive industry experience, including product management, strategic marketing, new product development, hardware engineering, and R&D. Prior to joining KIOXIA in 2016, Neville held diverse roles at Toshiba Memory America, SanDisk, Western Digital Corporation, and Microchip. Neville Ichhaporia holds an MBA from the Santa Clara University Leavey School of Business, an MS in Electrical Engineering and VLSI Design from the University of Toledo, Ohio, and a BS in Instrumentation and Control Systems from the University of Mumbai.
Katsuki Matsudera, General Manager of the Memory Technical Marketing Managing Department, KIOXIA
Mr. Katsuki Matsudera is a General Manager of the Memory Technical Marketing Managing Department at KIOXIA Corporation. He has played a key role in the planning and marketing of KIOXIA’s groundbreaking BiCS FLASH™ Generation 8 3D flash memory, which utilizes CBA (CMOS directly Bonded to Array) architecture and takes flash memory to the next level. Mr. Matsudera graduated with a master’s degree in Mechanical Engineering from Kyoto University in 1996, joining Toshiba Corporation the same year. Later he worked on groundbreaking products, including the world's first NAND flash memory using TSV technology and 3D flash memory with 4-bit-per-cell technology, referred to as BiCS FLASH™ QLC. Today, Katsuki directs the company’s comprehensive product strategy planning and execution, as well as new market development of future BiCS FLASH™ generations.
Keynote Description:
For over three decades, KIOXIA, the inventor of NAND flash memory technology, has continued to innovate and lead the future of memory and storage. With the introduction of its upcoming BiCS FLASH™ Generation 10 3D flash memory, along with a diverse portfolio of SSDs, KIOXIA is meeting the ever-increasing demand for faster, denser, and more power-efficient storage for Artificial Intelligence. AI is now dominating investments in data center infrastructure; however, a “one-size-fits-all” storage solution will neither optimize these investments nor maximize ROI. There is no single, homogenous AI workload; therefore, each stage of the AI data lifecycle has its own unique storage requirements, which need to be matched with the right storage solution to optimize AI investments. Discover how KIOXIA’s next generation of memory and storage solutions can Scale AI Without Limits – Make it with KIOXIA!
KIOXIA
11:30 AM to 11:40 AM
Open SPEC-102-1: FMS Lifetime Achievement Award
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Special Sessions
Speakers:
Brian Berg, President, Berg Software Design
Through his Berg Software Design consultancy, Brian provides hardware and software design and development services for storage and interface technologies in consumer electronics, including flash memory, NVMe and USB. Brian has been a developer, project lead, industry analyst, seminar leader, technical marketer and technical writer. He has participated in over 80 conferences as a speaker, session chair and conference chair. He has also worked extensively with intellectual property and patents, particularly in the storage arena. He is active as an IEEE officer and volunteer, including as past Chair of the Santa Clara Valley Section, Director and past Chair of the Consultants Network of Silicon Valley, Region 6 IEEE Milestone Coordinator, Chair of the SCV Technical History Committee, and past Liaison for the Women in Engineering Affinity Group. Brian is an IEEE awards recipient, including the 2017 Outstanding Leadership and Service to the IEEE within Region 6, the 2017 IEEE-USA Professional Leadership Award for Outstanding Service to the Consulting and Electrical Engineering profession, and the 2012 Outstanding Leadership and Professional Service Award for Region 6.
Jim Handy, General Director, Objective Analysis
Jim Handy of Objective Analysis is a 35-year semiconductor industry executive and a leading industry analyst. Following marketing and design positions at Intel, National Semiconductor, and Infineon, he became highly respected as an analyst for his technical depth, accurate forecasts, industry presence, and numerous market reports, articles, white papers, and quotes. He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.
Special Presentation Description:
The FMS Lifetime Achievement Award recognizes individuals who have shown outstanding leadership in promoting the development and use of memory, storage, and/or associated or related technologies, including one or more of the following: founding a leading memory or storage company; driving the adoption of initiatives and/or standards in the memory and storage industries; bringing memory or storage to a new application, including supplanting older technologies; or demonstrating exceptional leadership, including defining new architectures.
11:40 AM to 12:10 PM
Open KEYN: Keynote 2: FADU: Pushing the Storage Frontier: Next-Generation SSDs for Tomorrow’s Datacenters
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Ross Stenfort, Hardware Systems Engineer, Meta
Ross Stenfort is a Hardware Systems Engineer at Meta delivering scalable storage solutions. He has 30+ years of experience developing and bringing leading-edge storage products to market. Ross works closely with industry partners and standards organizations including NVM Express, SNIA/EDSFF, and the Open Compute Project (OCP). With experience including ASIC design, he has an appreciation for the design challenges facing SSD providers to deliver performance and QoS within a shrinking power envelope. Ross holds over 40 patents.
Jihyo Lee, CEO and Co-founder of FADU, FADU
Jihyo Lee is the CEO and co-founder of FADU, a leading fabless semiconductor company revolutionizing data center and storage solutions for next-generation computing architectures. Under his leadership, FADU has become a hub of innovation, bringing together top industry talent to drive advancements in technology. Before founding FADU, Jihyo was a partner at Bain & Company, where he honed his strategic expertise. He is also a successful serial entrepreneur, having built and led multiple ventures across the technology, telecom, and energy sectors. Jihyo holds an MBA from the Wharton School of the University of Pennsylvania and both bachelor’s and master’s degrees in Industrial Engineering from Seoul National University. His unique combination of academic excellence, entrepreneurial spirit, and leadership experience makes him a visionary in the tech industry.
Keynote Description:
The rapid evolution of datacenter infrastructure is being driven by the need for higher performance, ultra-high capacity, and power efficiency. This talk explores how new and evolving AI workloads are driving storage, and delves into the challenges and opportunities associated with these workloads. This keynote will also provide a comprehensive overview of where the industry stands today and the opportunities that lie ahead. We will share insights from both the customer perspective and the supplier perspective. Additionally, we will discuss the importance of ecosystem collaboration and introduce new business models. Join us as we explore the new storage frontier and chart the course for the next generation of datacenter infrastructure.
FADU Technology
12:10 PM to 01:10 PM
Open BRK: Tuesday Lunch
Hyatt Regency Hallway/Mission City Ballroom Lobby (Santa Clara Convention Center, First Floor)
Track: General Events
General Event Description:
Description Not Available
01:10 PM to 01:40 PM
Open KEYN: Keynote 3: Micron: Data is at the Heart of AI
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Jeremy Werner, SVP & GM, Core Data Center Business Unit, Micron Technology
Jeremy is an accomplished storage technology leader with over 20 years of experience. At Micron he has a wide range of responsibilities, including product planning, marketing and customer support for Server, Storage, Hyperscale, and Client markets globally. Previously he was GM of the SSD business at KIOXIA America and spent a decade in sales and marketing roles at startup companies MetaRAM, Tidal Systems, and SandForce. Jeremy earned a B.S.E.E. from Cornell University, is a Stanford Graduate School of Business alumnus, and holds over 25 patents or patents pending.
Keynote Description:
Without data, there is no AI. To unlock AI’s full potential, data must be stored, moved, and processed with incredible speed and efficiency from the cloud to the edge. As AI substantially increases performance requirements, the need for optimized power/cooling, rack space, and capacity also rises. This session explores how Micron’s cutting-edge memory and storage solutions – such as PCIe Gen6 SSDs, high-capacity SSDs, HBM3E, and SOCAMM – are driving the AI revolution, reducing bottlenecks, optimizing energy efficiency, and turning data into intelligence. Join us as we look at how Micron's end-to-end, high-performance, energy-efficient memory and storage innovations fuel the AI revolution and enrich life for all.
Micron
01:40 PM to 02:10 PM
Open KEYN: Keynote 4: Silicon Motion: Smart Storage in Motion: From Silicon Innovation to AI Transformation Across all Spectrums
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Nelson Duann, Sr VP of Client & Automotive Storage Business, Silicon Motion
Nelson Duann has been with Silicon Motion since 2007 and has nearly 25 years of experience in product design, development, and marketing in the semiconductor industry. He most recently led Silicon Motion's marketing and R&D efforts and has played a key role in leading the company's OEM business for mobile storage and SSD controller solutions, helping to introduce these products and grow them into the market leaders they are today. Prior to Silicon Motion, he worked for Sun Microsystems focusing on UltraSPARC microprocessor projects. Nelson has an MS in Communications Engineering from National Chiao Tung University in Taiwan and an MS in Electrical Engineering from Stanford University.
Alex Chou, Sr VP of Enterprise Storage & Display Interface Solution Business, Silicon Motion
Alex Chou joined Silicon Motion in December 2023 with over 30 years’ industry experience in ASIC design/applications engineering, product marketing, business strategy, and executive-level business engagement. Prior to Silicon Motion, Alex was Senior Vice President at Synaptics, where he was the GM responsible for the growth and success of its Wireless Connectivity business. Prior to that, he spent more than 18 years at Broadcom, where he was responsible for enterprise networking, WiFi network solutions, and client WiFi/BT/GNSS as VP of Product Marketing in Broadcom's Wireless Connectivity BU. Alex holds a BS degree from National Cheng Kung University in Taiwan and an MS in Computer Engineering from Syracuse University in New York.
Keynote Description:
AI is transforming every layer of computing. However, without seamless data movement and intelligent orchestration, its full potential cannot be realized. As data moves from hyperscale cloud training platforms to low-latency edge inference engines, storage is no longer a static endpoint. It has become the critical infrastructure that keeps AI in motion. In this keynote, we will explore how next-generation storage solutions are driving the AI revolution by enabling high-throughput data transfer, ultra-low latency, and intelligent workload orchestration across the entire data pipeline, from cloud to edge. We will highlight innovations in flash storage architecture, interface performance, and AI-optimized data paths that overcome infrastructure bottlenecks and deliver greater speed, scalability, and efficiency. From the data center to edge devices, from data to intelligence, Silicon Motion is unlocking the full power of data across the AI spectrum.
Silicon Motion
02:10 PM to 02:40 PM
Open KEYN: Keynote 5: SK hynix: Where AI Begins: Full-Stack Memory Redefining the Future
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Chunsung Kim, VP, SK hynix
Chunsung Kim, VP of eSSD Product Development at SK hynix, currently leads the development of the company’s enterprise SSD products. With over 15 years of experience in S.LSI development, Chunsung Kim established and built SK hynix’s in-house NAND Flash controller teams and capabilities in 2011. Possessing a well-rounded expertise in both SSD engineering and business strategy, he has also played a pivotal role in stabilizing SK hynix’s global R&D operations and management. In addition, he is spearheading the company’s initiatives to explore the future of storage solutions in the AI era. Chunsung Kim holds a Master’s degree in Control and Instrumentation Engineering from Chung-Ang University in South Korea.
Jeff (Joonyong) Choi, VP, SK hynix
Jeff (Joonyong) Choi is Vice President and Head of the HBM Business Planning Group at SK hynix, where he leads portfolio planning and strategic direction for next-generation High Bandwidth Memory (HBM) products. In this role, he drives the successful enablement of advanced memory solutions and leads high-impact business engagements with key players across the global AI ecosystem. Jeff plays a pivotal role in orchestrating product development, customer collaboration, and business strategy into a seamless, integrated process—helping SK hynix position itself at the forefront of innovation in AI and high-performance computing. Prior to his current role, he led product planning for mobile and graphics DRAM, contributing significantly to SK hynix’s market leadership in performance memory. Jeff holds a Master’s degree in Electrical Engineering from KAIST and an MBA from the MIT Sloan School of Management.
Keynote Description:
As the AI industry is rapidly shifting its focus from AI Training to AI Inference, memory technologies must evolve to support high-performance and power-efficient token generation across Generative, Agent and Physical AI. Performance and power efficiency remain two critical pillars shaping the scalability and TCO of AI systems. To address these demands, SK hynix delivers a comprehensive memory portfolio spanning HBM, DRAM, compute SSDs, and storage SSDs, optimized for diverse AI environments including data centers, PCs, and smartphones. HBM, with its structural advantages of high bandwidth and low power consumption, provides the flexibility to meet a wide variety of customer needs. Meanwhile, our storage solutions are designed to enable fast, reliable access to data-intensive workloads in AI inference scenarios. Together, these efforts form a mid-to-long-term roadmap focused on scalability, performance, and cost optimization. This keynote will highlight how SK hynix’s memory technologies are enabling the infrastructure required for next-generation AI.
SK hynix
02:40 PM to 03:10 PM
Open KEYN: Keynote 6: Samsung: Architecting AI Advancement: The Future of Memory and Storage
Mission City Ballroom (Santa Clara Convention Center, First Floor)
Track: Keynotes
Keynote Speakers:
Daihyun Lim, VP Memory, Samsung Semiconductor
Daihyun Lim joined Samsung in 2023 and serves as Vice President of Memory for Samsung Semiconductor, leading HBM I/O design as a Master (VP of Technology). Prior to Samsung, he was with IBM ASIC group working on high-speed serial link design from 2008 to 2017. He was also with Nokia Network Infrastructure, designing silicon photonics transceivers as a Distinguished Member of Technical Staff (DMTS) from 2017 to 2022. His areas of expertise include high-speed memory interface circuits, signal and power integrity, and optical transceivers. Daihyun Lim received his B.S. in Electrical Engineering from Seoul National University in 1999, and his S.M. and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT) in 2004 and 2008, respectively. In 2017, he was recognized as the author of the most frequently cited paper in the history of VLSI Symposium.
Hwa-Seok Oh, Executive VP of Memory, Samsung Semiconductor
Hwa-Seok Oh leads the Solution Product Engineering team at Samsung, where he oversees the commercialization of flash storage products that include mobile memory devices like eMMC and UFS, as well as client-, server-, and enterprise-class SSDs. Hwa-Seok Oh joined Samsung Electronics in 1997 as a SoC design engineer, focusing on the development of network and storage controllers. While working on flash storage products, he pioneered the world’s first UFS products. He also spearheaded the creation of high-performance NVMe SSD controllers for datacenters. More recently, he has been at the forefront of developing new flash storage technologies, contributing to innovations such as Samsung’s SmartSSD and Flash Memory-based CXL Memory Module. Hwa-Seok Oh holds a Bachelor’s and a Master’s degree in Computer Science from Sogang University, earned in 1995 and 1997, respectively.
Keynote Description:
As AI workloads grow more complex, memory and storage architectures must evolve to deliver ultra-high bandwidth, low latency, and efficient scalability. Samsung’s latest innovations—HBM, DDR5, CXL, PCIe Gen5/Gen6 SSDs, and UFS 5.0—are engineered to meet these demands across data center and edge environments. By refining memory hierarchies and enhancing data throughput and integrity, Samsung is advancing infrastructure that delivers compelling value for next-gen AI systems. This session aims to highlight the pivotal role of memory and storage within the AI infrastructure framework, while providing an insightful forecast into forthcoming technological advancements and industry outlook.
Samsung Semiconductor
03:00 PM to 07:00 PM
Open GEN: FMS Exhibits Open
Exhibit Hall (Santa Clara Convention Center, First Floor)
Track: General Events
General Event Description:
Attend the FMS Exhibit Hall, expanded again for 2025! Along with our sponsors and exhibitors from a broad scope of the memory and storage industries, the show floor has a variety of events, including a Pitch Theater, Industry Receptions, Winners' Circle - FMS Awards Ceremony, and the highly sought-after End-of-Show Raffle (you never know who will show up to entertain)! This year, there will be a dedicated lunch hour from 12:00 to 1:00 Tuesday through Thursday.
05:00 PM to 07:00 PM
Open GEN: FMS Opening Reception
Exhibit Hall (Santa Clara Convention Center, First Floor)
Track: General Events
General Event Description:
Attend the FMS Exhibit Hall, expanded again for 2025! Along with our sponsors and exhibitors from a broad scope of the memory and storage industries, the show floor has a variety of events, including a Pitch Theater, Industry Receptions, Winners' Circle - FMS Awards Ceremony, and the highly sought-after End-of-Show Raffle (you never know who will show up to entertain)! This year, there will be a dedicated lunch hour from 12:00 to 1:00 Tuesday through Thursday.
06:00 PM to 07:00 PM
Open SPEC-103-1: FMS Best of Show Awards
FMS Theater (Santa Clara Convention Center, First Floor)
Track: Special Sessions
Special Event Description:
Description Not Available