System Vendors - MemCon

MemCon
March 2025
Silicon Valley, CA

Why Should System Vendors Attend MemCon 2025?

We attract system integrators, cloud vendors, GPU cloud providers, and HPC companies from the likes of Accenture, EY, Equinix, Iris Energy, HPE, Dell, Microsoft, and more, as they come together to:

  • Find new pools of customers.

  • Take home insights on how to implement new technologies into systems.

  • Learn how others are running multiple applications and use cases on their systems.

If you'd like to find out more about attending as a system vendor, register your interest here.

CONFIRM YOUR PLACE HERE

Featured Speakers Include

Author:

Rodrigo Madanes

Global AI Innovation Officer
EY

Rodrigo Madanes is EY’s Global AI Innovation Officer. Rodrigo has a computer science degree from MIT and a PhD from UC Berkeley. His technical expertise is reflected in three patents and in novel AI products he created at both the MIT Media Lab and Apple’s Advanced Technologies Group.

Prior to EY, Rodrigo ran the European business incubator at eBay, which launched new ventures including eBay Hire. At Skype, he was the C-suite executive leading product design globally during its hyper-growth phase, when the team scaled the user base, revenue, and profits 100% YoY for three consecutive years.


Author:

Petr Lapukhov

Network Engineer
NVIDIA

Petr Lapukhov is a Network Engineer at NVIDIA. He has 20+ years in the networking industry, designing and operating large-scale networks, with deep experience developing and operating software for network control and monitoring. His past experience includes CCIE/CCDE training and UNIX system administration.


Author:

Tirthankar Lahiri

SVP, Data & In-Memory Technologies
Oracle

Tirthankar Lahiri is Senior Vice President of the Data and In-Memory Technologies group for Oracle Database and is responsible for the Oracle Database Engine (including Database In-Memory, Data and Indexes, Space Management, Transactions, and the Database File System), the Oracle TimesTen In-Memory Database, and Oracle NoSQL Database. Tirthankar has 22 years of experience in the database industry and has worked extensively in a variety of areas, including manageability, performance, scalability, high availability, caching, distributed concurrency control, in-memory data management, and NoSQL architectures. He holds 27 issued patents and has several more pending in these areas. Tirthankar has a B.Tech in Computer Science from the Indian Institute of Technology (Kharagpur) and an MS in Electrical Engineering from Stanford University.


Author:

Tejas Chopra

Senior Software Engineer
Netflix

Tejas Chopra is a Senior Engineer at Netflix, working on the Machine Learning Platform for Netflix Studios, and a Founder at GoEB1, the world’s first and only thought-leadership platform for immigrants. Tejas is a recipient of the prestigious EB1A (Einstein) visa in the US. He is a Tech 40 under 40 Award winner, a TEDx speaker, a Senior IEEE Member, and an ACM member, and has spoken at conferences and panels on cloud computing, blockchain, software development, and engineering leadership. Tejas has been awarded the ‘International Achievers Award, 2023’ by the Indian Achievers’ Forum. He is an Adjunct Professor for Software Development at the University of Advancing Technology, Arizona, an angel investor, and a startup advisor to startups like Nillion. He is also a member of the Advisory Board for Flash Memory Summit. Tejas’ experience spans companies like Box, Apple, Samsung, Cadence, and Datrium. He holds a Master’s degree in ECE from Carnegie Mellon University, Pittsburgh.


Author:

Galen Shipman

Computer Scientist
Los Alamos National Laboratory

Galen Shipman is a computer scientist at Los Alamos National Laboratory (LANL). His interests include programming models, scalable runtime systems, and I/O. As Chief Architect, he leads architecture and technology of Advanced Technology Systems (ATS) at LANL. He has led performance engineering across LANL’s multi-physics integrated codes and the advancement and integration of next-generation programming models such as the Legion programming system as part of LANL's next-generation code project, Ristra. His work in storage systems and I/O currently focuses on composable micro-services as part of the Mochi project. His prior work in scalable software for HPC includes major contributions to broadly used technologies such as the Lustre parallel file system and Open MPI.


Author:

Ping Zhou

Researcher/Architect
ByteDance Ltd.

Ping Zhou is a Senior Researcher/Architect with ByteDance, focusing on next-gen infrastructure innovations with hardware/software co-design. Prior to joining ByteDance, Ping worked with Google, Alibaba, and Intel on products including Google Assistant, Optane SSD, and Open Channel SSD. Ping earned his PhD in Computer Engineering at the University of Pittsburgh, specializing in emerging memory and storage technologies.


Agenda Highlights


Memory Optimizations for Machine Learning

As machine learning continues to forge its way into diverse industries and applications, optimizing computational resources, particularly memory, has become a critical aspect of effective model deployment. This session, "Memory Optimizations for Machine Learning," offers an in-depth look at the specific memory requirements of machine learning tasks and at cutting-edge strategies for minimizing memory consumption efficiently.
We'll begin by demystifying the memory footprint of typical machine learning data structures and algorithms, elucidating the nuances of memory allocation and deallocation during model training. The talk will then focus on memory-saving techniques such as data quantization, model pruning, and efficient mini-batch selection, which conserve memory without significant degradation in model performance; a brief sketch of two of these techniques follows below.
Additional insights into how memory usage can be optimized across various hardware setups, from CPUs and GPUs to custom ML accelerators, will also be presented.
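
To make two of these techniques concrete, here is a minimal PyTorch sketch (not from the session; the toy model is hypothetical) that applies magnitude pruning and post-training dynamic quantization to shrink a model's memory footprint:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a real workload.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Magnitude pruning: zero out the 30% smallest-magnitude weights
# in every Linear layer, then make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Post-training dynamic quantization: store Linear weights as int8,
# roughly a 4x reduction versus float32 for those layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```

Note that pruning alone zeroes weights without shrinking dense storage; the memory win comes when the sparse weights are stored or served in a compressed format.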

Author:

Tejas Chopra

Senior Software Engineer
Netflix


Indirect/Irregular Workloads within Large Simulations and How to Improve Access through Co-Design

Los Alamos National Laboratory (LANL) has a diverse set of High Performance Computing codes. Analysis of many of these codes indicates that they are heavily memory bound, with sparse memory accesses. High Bandwidth Memory (HBM) has proven a significant advancement in improving the performance of these codes, but the roadmap for major (step-function) improvements in memory technologies is unclear. Addressing this challenge will require a renewed focus on high-performance memory and processor technologies that take a more aggressive and holistic view of advancements in ISA, microarchitecture, and memory-controller technologies. Beyond scientific simulations, advancements in the performance of sparse memory accesses will benefit graph analysis, DLRM inference, and database workloads.
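
As a toy illustration of the access patterns at issue (a generic sketch, not LANL code), the gather below touches the same data as a sequential loop but through data-dependent indices, which defeats hardware prefetching and leaves the core waiting on memory:

```python
import time
import numpy as np

N = 50_000_000
values = np.ones(N, dtype=np.float64)

# Sequential indices: the hardware prefetcher keeps the pipeline fed.
idx_seq = np.arange(N)
# Data-dependent indices: a random gather, typical of sparse solvers,
# graph analytics, and DLRM embedding lookups.
idx_rand = np.random.permutation(N)

for name, idx in [("sequential", idx_seq), ("indirect", idx_rand)]:
    t0 = time.perf_counter()
    total = values[idx].sum()  # gather, then reduce
    print(f"{name}: {time.perf_counter() - t0:.2f}s (sum={total:.0f})")
```

The two loops do identical arithmetic; the timing gap between them is pure memory-system behavior, which is why co-design across ISA, microarchitecture, and memory controllers matters for these codes.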

Author:

Galen Shipman

Computer Scientist
Los Alamos National Laboratory


How Deep Learning & Computer Vision Require Application-Specific Infrastructure

Author:

Sandeep Singh

Director - Applied DL & Computer Vision
Beans.ai


Data Movement for Enterprise Teams – AI Challenges: Latency, Performance and Failing AI Training Scenarios

A set of challenges in enterprise GenAI deployments emanates from memory:

  • Poor tooling for performance issues arising from the interconnection of GPUs and memory
  • Latency issues caused by data movement and poor memory capacity planning
  • AI training runs that fail under tight memory constraints

Memory is a foundational piece of GenAI infrastructure, yet today it comes with both opacity and immature tooling. AI teams that want to deploy AI at scale need to double-click on this layer of the infrastructure and improve on these foundations; the sketch below shows the kind of basic visibility such teams often start with.
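
As a minimal illustration of that opacity gap (a generic PyTorch sketch, not EY tooling), the step below surfaces allocator counters and catches the out-of-memory failures that derail training runs:

```python
import torch

def train_step(model, batch, optimizer):
    """One training step with basic visibility into GPU memory pressure."""
    try:
        optimizer.zero_grad()
        loss = model(batch).mean()
        loss.backward()
        optimizer.step()
    except torch.cuda.OutOfMemoryError:
        # A failing-training scenario: release cached blocks and let the
        # caller retry with a smaller batch or activation checkpointing.
        torch.cuda.empty_cache()
        raise

    # Allocator counters help attribute latency and capacity problems
    # to data movement and memory planning rather than to compute.
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"GPU memory: {allocated:.1f} GiB allocated, {reserved:.1f} GiB reserved")
```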


Author:

Rodrigo Madanes

Global AI Innovation Officer
EY


Exploring CXL Use Cases and the Future of Disaggregated Heterogeneous Memory Architecture

This session will give a quick overview of CXL technology and its influence on systems architecture, and explore potential use cases within enterprise applications. Ping Zhou will then discuss ByteDance’s evaluations of CXL technologies. Lastly, Ping will cover ByteDance’s vision of next-generation systems and architecture, and the technical challenges ahead for the industry.
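
On current Linux systems, CXL-attached memory typically appears as a CPU-less NUMA node, which suggests one low-friction way to experiment with placement (a generic sketch, not ByteDance's setup; it assumes the expander is node 1 and that numactl is installed):

```python
import subprocess

# Assumption: the CXL memory expander is exposed as NUMA node 1
# (verify with `numactl --hardware`).
CXL_NODE = "1"

# Run a small workload with all of its pages bound to the CXL node,
# so any slowdown versus local DRAM reflects the added memory latency.
result = subprocess.run(
    ["numactl", f"--membind={CXL_NODE}", "python3", "-c",
     "x = bytearray(1 << 30); print('touched 1 GiB on the bound node')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")
```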

Author:

Ping Zhou

Researcher/Architect
ByteDance Ltd.


Transforming In-Memory Database Infrastructure

Oracle AI Vector Search enables enterprises to leverage their own business data to build cutting-edge generative AI solutions. AI vectors are data structures that encode the key features, or essence, of unstructured entities such as images or documents. The more similar two entities are, the shorter the mathematical distance between their corresponding vectors. With AI Vector Search, Oracle Database is introducing a new vector datatype, new vector indexes (in-memory neighbor graph indexes and neighbor partitioned indexes), and new vector SQL operators for highly efficient and powerful similarity-search queries.

Oracle AI Vector Search also enables applications to combine their business data with large language models (LLMs) using a technique called Retrieval-Augmented Generation (RAG) to deliver highly accurate responses to natural-language questions. With AI Vector Search in Oracle Database, users can easily build AI applications that combine relational searches with similarity search, without moving data to a separate vector database and without any loss of security, data integrity, consistency, or performance.
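
To make the new SQL surface concrete, here is an illustrative sketch using the python-oracledb driver; the schema, credentials, and query vector are hypothetical, and the VECTOR syntax follows Oracle's published AI Vector Search examples:

```python
import array
import oracledb

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/freepdb1")
cur = conn.cursor()

# Hypothetical table: documents with a 768-dimensional embedding column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id        NUMBER PRIMARY KEY,
        body      VARCHAR2(4000),
        embedding VECTOR(768, FLOAT32))""")

# Similarity search combining a relational predicate with the new
# vector operator: rank rows by cosine distance to the query vector.
query_vec = array.array("f", [0.0] * 768)  # stand-in for a real embedding
cur.execute("""
    SELECT id, body
      FROM docs
     WHERE body LIKE :topic
     ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
     FETCH FIRST 5 ROWS ONLY""",
    topic="%memory%", qv=query_vec)
for row in cur:
    print(row)
```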

Author:

Tirthankar Lahiri

SVP, Data & In-Memory Technologies
Oracle


Memory Optimizations for Large Language Models: From Training to Inference

Large Language Models (LLMs) have revolutionized natural language processing but pose significant challenges in training and inference due to their enormous memory requirements. In this talk, we delve into techniques and optimizations that mitigate memory constraints across the entire lifecycle of LLMs.

The first segment explores memory-optimized LLM training. We discuss training challenges and cover techniques under Parameter-Efficient Fine-Tuning (PEFT), such as prompt tuning, LoRA, and adapters.
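
As a rough picture of why PEFT saves memory (a generic sketch, not the speaker's code): a LoRA adapter freezes the pretrained weight and trains only a low-rank delta, so gradients and optimizer state are kept for well under one percent of the layer's parameters:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: freeze the base weight, train a low-rank delta."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} params (~{100*trainable/total:.1f}%)")
```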

LLM inference is memory bound rather than compute bound. In this section we explore inference optimizations, mostly for transformer architectures: paged Key-Value (KV) caches, speculative decoding, quantization, in-flight batching strategies, and Flash Attention, each contributing to enhanced inference speed and efficiency.
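
A back-of-the-envelope calculation shows why the KV cache, rather than compute, is usually the binding constraint (the shape below is illustrative, roughly a 70B-class model with grouped-query attention):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    # Two tensors (K and V) per layer, each [batch, kv_heads, seq_len, head_dim].
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

# 80 layers, 8 KV heads of dim 128, 4K context, batch of 16, fp16 elements.
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=4096, batch=16)
print(f"KV cache: {size / 2**30:.0f} GiB")  # 20 GiB for this shape
```

Paging the cache in fixed-size blocks, as paged KV caches do, avoids reserving that worst-case footprint up front for every sequence.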

Finally, we explore the concept of coherent memory and how it helps with inference optimization through KV cache offloading and LoRA weight recomputation.
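
One way to picture KV cache offloading (a toy sketch; real serving stacks manage this per block): move cold cache entries into pinned host memory and bring them back when the sequence becomes active again. On coherent-memory systems, this copy can become a cheap remapping rather than an explicit transfer:

```python
import torch

# Toy stand-in for one layer's KV cache entry on the GPU.
k = torch.randn(1, 8, 4096, 128, dtype=torch.float16, device="cuda")
v = torch.randn_like(k)

# Offload: copy cold KV blocks into pinned host buffers, then free VRAM.
k_host = torch.empty(k.shape, dtype=k.dtype, device="cpu", pin_memory=True)
v_host = torch.empty(v.shape, dtype=v.dtype, device="cpu", pin_memory=True)
k_host.copy_(k, non_blocking=True)
v_host.copy_(v, non_blocking=True)
torch.cuda.synchronize()  # ensure copies finish before freeing the source
del k, v
torch.cuda.empty_cache()

# Restore when the sequence is scheduled again.
k = k_host.to("cuda", non_blocking=True)
v = v_host.to("cuda", non_blocking=True)
```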

By illuminating these advancements, this talk aims to provide a comprehensive understanding of state-of-the-art memory optimization techniques for LLMs, empowering practitioners to push the boundaries of natural language processing further.

Author:

Arun Raman

Deep Learning Solutions Architect
NVIDIA

Arun Raman is an AI solution architect at NVIDIA, adept at navigating the intricate challenges of deploying AI applications across edge, cloud, and on-premises environments within the consumer internet industry. In his current role, he works on the design of end-to-end accelerated AI pipelines for consumer internet customers, meticulously addressing preprocessing, training, and inference optimizations. His experience extends beyond AI, having worked with distributed systems and multi-cloud infrastructure. He shares practical strategies and real-world experiences, empowering organizations to leverage AI effectively.


How Are Memory Innovations Impacting the Total Cost of Ownership in Scaling Up and Power Consumption?

Author:

Helen Byrne

VP, Solution Architect
Graphcore

Helen leads the Solution Architects team at Graphcore, helping innovators build their AI solutions using Graphcore’s Intelligence Processing Units (IPUs). She has been at Graphcore for more than five years, previously leading AI Field Engineering and working in AI Research on problems in distributed machine learning. Before landing in the technology industry, she worked in investment banking. Her background is in mathematics, and she has an MSc in Artificial Intelligence.
