2025 Keynotes

Keynote 1: KIOXIA: Optimize AI Infrastructure Investments with KIOXIA BiCS FLASH memory, SSD Technology, and Storage Solutions
Tuesday, August 5th @ 11:00 AM - 11:30 AM


For over three decades, KIOXIA, the inventor of NAND flash memory technology, has continued to innovate and lead the future of memory and storage. With the introduction of its upcoming BiCS FLASH™ Generation 10 3D flash memory, along with a diverse portfolio of SSDs, KIOXIA is meeting the ever-increasing demand for faster, denser, and more power-efficient storage for artificial intelligence. AI now dominates investments in data center infrastructure; however, a “one-size-fits-all” storage solution will neither optimize these investments nor maximize ROI. There is no single, homogeneous AI workload; each stage of the AI data lifecycle has its own unique storage requirements, which need to be matched with the right storage solution to optimize AI investments. Discover how KIOXIA’s next generation of memory and storage solutions can Scale AI Without Limits – Make it with KIOXIA!

Keynote 2: FADU: Pushing the Storage Frontier: Next-Generation SSDs for Tomorrow’s Datacenters
Tuesday, August 5th @ 11:40 AM - 12:10 PM


The rapid evolution of datacenter infrastructure is being driven by the need for higher performance, ultra-high capacity, and power efficiency. This talk explores how evolving AI workloads are driving storage requirements and delves into the challenges and opportunities these workloads create. The keynote will also provide a comprehensive overview of where the industry stands today and what lies ahead, sharing insights from both the customer and supplier perspectives. Additionally, we will discuss the importance of ecosystem collaboration and introduce new business models. Join us as we explore the new storage frontier and chart the course for the next generation of datacenter infrastructure.

Keynote 3: Micron: Data is at the Heart of AI
Tuesday, August 5th @ 01:10 PM - 01:40 PM

Without data, there is no AI. To unlock AI’s full potential, data must be stored, moved, and processed with incredible speed and efficiency from the cloud to the edge. As AI substantially increases performance requirements, the need for optimized power/cooling, rack space, and capacity also rises. This session explores Micron’s end-to-end, high-performance, energy-efficient memory and storage solutions – such as PCIe Gen6 SSDs, high-capacity SSDs, HBM3E, and SOCAMM – that drive the AI revolution by reducing bottlenecks, optimizing energy efficiency, and turning data into intelligence. Join us as we look at how Micron's memory and storage innovations fuel the AI revolution to enrich life for all.

Keynote 4: Silicon Motion: “Smart Storage in Motion: From Silicon Innovation to AI Transformation Across all Spectrums”
Tuesday, August 5th @ 01:40 PM - 02:10 PM


AI is transforming every layer of computing. However, without seamless data movement and intelligent orchestration, its full potential cannot be realized. As data moves from hyperscale cloud training platforms to low-latency edge inference engines, storage is no longer a static endpoint. It has become the critical infrastructure that keeps AI in motion. In this keynote, we will explore how next-generation storage solutions are driving the AI revolution by enabling high-throughput data transfer, ultra-low latency, and intelligent workload orchestration across the entire data pipeline—from cloud to edge. We will highlight innovations in flash storage architecture, interface performance, and AI-optimized data paths that overcome infrastructure bottlenecks and deliver greater speed, scalability, and efficiency. From the data center to edge devices, from data to intelligence, Silicon Motion is unlocking the full power of data across the AI era.

Keynote 5: SK hynix: Where AI Begins: Full-Stack Memory Redefining the Future
Tuesday, August 5th @ 02:10 PM - 02:40 PM


As the AI industry rapidly shifts its focus from AI training to AI inference, memory technologies must evolve to support high-performance and power-efficient token generation across Generative, Agent, and Physical AI. Performance and power efficiency remain two critical pillars shaping the scalability and TCO of AI systems. To address these demands, SK hynix delivers a comprehensive memory portfolio spanning HBM, DRAM, compute SSDs, and storage SSDs, optimized for diverse AI environments including data centers, PCs, and smartphones. HBM, with its structural advantages of high bandwidth and low power consumption, provides the flexibility to meet a wide variety of customer needs. Meanwhile, our storage solutions are designed to enable fast, reliable access to data-intensive workloads in AI inference scenarios. Together, these efforts form a mid-to-long-term roadmap focused on scalability, performance, and cost optimization. This keynote will highlight how SK hynix’s memory technologies are enabling the infrastructure required for next-generation AI.

Keynote 6: Samsung: Architecting AI Advancement: The Future of Memory and Storage
Tuesday, August 5th @ 02:40 PM - 03:10 PM



As AI workloads grow more complex, memory and storage architectures must evolve to deliver ultra-high bandwidth, low latency, and efficient scalability. Samsung’s latest innovations—HBM, DDR5, CXL, PCIe Gen5/Gen6 SSDs, and UFS 5.0—are engineered to meet these demands across data center and edge environments. By refining memory hierarchies and enhancing data throughput and integrity, Samsung is advancing infrastructure that delivers compelling value to next-gen AI systems. This session highlights the pivotal role of memory and storage within the AI infrastructure framework, while providing an insightful forecast of forthcoming technological advancements and the industry outlook.

Keynote 7: NEO: Breaking the Bottleneck: NEO Semiconductor’s Disruptive 3D Memory Architecture for AI
Wednesday, August 6th @ 11:00 AM - 11:30 AM

As AI systems scale, the widening gap between processor and memory performance has become a critical limitation. NEO Semiconductor introduces a breakthrough in 3D memory architecture that eliminates the need for through-silicon via (TSV) processes—dramatically increasing memory bandwidth by up to 10x, while reducing die cost, height, and power consumption by as much as 90%. In this keynote, NEO will explore how rethinking memory from the ground up can unlock new levels of performance and efficiency in AI. Attendees will also get an exclusive preview of a soon-to-be-announced innovation that promises to redefine the future of memory technology.

Keynote 8: Sandisk: SanDisk Future Forward - Unlocking the Full Potential of NAND in the AI Era
Wednesday, August 6th @ 11:40 AM - 12:10 PM

As we enter the second chapter of the AI era, the focus is moving from training to inference and far beyond the optimization of production costs and scaling. This pivotal phase will redefine the use of NAND in enterprise applications by harnessing advanced media properties alongside cutting-edge device architectures and packaging. SanDisk believes that the unique dimensional capabilities of NAND present an unparalleled opportunity to drive innovation within the evolving AI landscape. Join us as we showcase NAND-based solutions that will strategically elevate NAND's value to the ecosystem, delivering tangible benefits for the most challenging data-intensive workloads.

Keynote 9: MaxLinear: “Accelerated” Software Defined Storage Transforming Data-storage at Enterprise Data Centers
Wednesday, August 6th @ 01:10 PM - 01:40 PM

The data storage market is experiencing enormous growth, driven largely by AI adoption. According to Fortune Business Insights, the global cloud storage market is projected to grow sixfold, from $100B to more than $600B, over the next five years. This growth is creating significant challenges:

• Rising power consumption, already beyond 2% of global energy consumption
• Increasing storage costs as data volumes expand
• Performance bottlenecks with traditional storage solutions
• Security concerns with distributed data

This keynote will address these challenges by suggesting novel methods that combine high-performance CPU cores (performance per watt) with a storage acceleration SoC (System-on-a-Chip) to drastically reduce power consumption relative to traditional methods. Several architectural trade-offs (off-load, in-line, and a hybrid method) will be discussed, along with accelerated data services such as deep compression for hot and cold data, encrypted data, and quantum resilience, with achievable scale-out at 1 Tb per second. These methods can improve effective storage by factors of up to 1:20.

Keynote 10: VergeIO: AI Infrastructure for Everyone: Flattening the Pipeline, Simplifying Deployment
Wednesday, August 6th @ 01:40 PM - 02:10 PM


Today, the complexity of artificial intelligence demands specialized skills, sophisticated tools, and robust infrastructure, rendering it both inaccessible and costly for many organizations. IT teams encounter significant learning curves and operational challenges when developing AI solutions on fragmented infrastructures. Current solutions fail to address the core issue: the ecosystem’s overwhelming complexity. Innovations in AI infrastructure must flatten the AI pipeline and reduce integration burdens. This talk will examine how streamlining the AI ecosystem facilitates the privatization of AI for organizations and sovereign entities, enabling the creation of secure, self-managed AI environments. These advancements will promote broader AI adoption, leading to faster returns on GPU investments and justifying the use of high-capacity SSD technology within AI processes. During the keynote, VergeIO will showcase a live demonstration of VergeIQ and provide a peek at what integrated, sovereign AI looks like in practice.

Keynote 11: KOVE: Rethinking the Box: Why Memory Constraints Are Now a Design Choice
Wednesday, August 6th @ 02:10 PM - 02:40 PM

For decades, compute and storage evolved — but memory stayed in the box. We accepted its limits as fact. In this keynote, Kove CEO John Overton challenges that assumption and shows how software-defined memory (SDM) redefines what’s possible. With Kove: SDM™, memory is no longer constrained by local hardware — it becomes a virtualized, high-performance resource, dynamically allocated where and when it’s needed most. Enterprises are now running memory-bound jobs at up to 60x speed, achieving 100x container density, and reducing power and cooling costs by up to 54%. From AI/ML model training to risk modeling and real-time analytics, Kove: SDM™ unlocks new agility—without code changes or hardware overhauls. Join us to explore how Kove engineered around the limits of physics to rewrite what modern infrastructure can do.

Keynote 12: Executive AI Panel: Memory and Storage Scaling for AI Inferencing
Thursday, August 7th @ 11:00 AM - 12:00 PM

Raw bandwidth is important for AI training workloads, but AI inference needs that and more. It also needs distributed solutions with AI-optimized, low-latency networking and intelligent memory and storage for optimum performance. This panel explores how ultra-high-performance, AI-optimized storage networking and GPU-enhanced AI storage solutions can dramatically accelerate data transfers between memory and local and remote storage tiers. This enables dynamic resource allocation, significantly boosting AI inferencing request throughput. We will explore how this combination addresses the challenges of scaling inference workloads across large GPU fleets, moving beyond traditional bottlenecks. We have assembled a panel of experts from inside NVIDIA and across the storage and memory industry to provide insight on how to maximize the number of AI requests served while maintaining low latency and high accuracy.