AI Hardware Summit Agenda


Whether you're focused on hardware, infrastructure, software, or data, there's something for everyone at this year's AI Hardware Summit. We'll be adding content, speakers and networking sessions to the agenda between now and the event so keep checking back for updates. 


Alternatively, you can download the information pack, and we'll send you automatic updates on agenda developments, new speakers and event news. The information pack also includes extra information on audience demographics and reasons why you should attend!


Tuesday, 13 Sep, 2022
PRE-DAY: DEVELOPER WORKSHOPS
9:00 AM - 10:30 AM

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs.  

Workshops are application-only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer or AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.



Developer Efficiency
Novel AI Hardware
Data Science
Software Engineering
Host

Speaker:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, the creator of Transformers, the leading open-source NLP library. Previously, Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as Director of Product Management, Product Marketing, Business Development, and Corporate Development.


10:30 AM - 11:00 AM
Networking Break
11:00 AM - 12:30 PM


Developer Efficiency
Novel AI Hardware
Data Science
Software Engineering
Host

Jeff Boudier

Product Director
Hugging Face

12:30 PM - 1:45 PM
Lunch
1:45 PM - 2:45 PM


Developer Efficiency
Edge AI
Novel AI Hardware
Data Science
Software Engineering
2:45 PM - 3:00 PM
Break
3:00 PM - 4:00 PM


Developer Efficiency
NLP
Novel AI Hardware
Data Science
Software Engineering
4:00 PM - 7:00 PM

AI Hardware Summit attendees are invited to attend the evening before the main conference for an extended networking session where they can meet one another and the Edge AI Summit attendees. The Meet & Greet is a perfect opportunity to reconnect with peers, expand your network, and discuss the state of ML across the cloud-edge continuum!

We will soon announce a luminary guest speaker, who will present midway through the function; a drinks reception for both events will follow.

Order of ceremony:
4:00 - 5:00 PM: Informal Networking
5:00 - 6:00 PM: Guest Keynote Speaker
6:00 - 7:00 PM: Drinks Reception

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering
Wednesday, 14 Sep, 2022
MAIN CONFERENCE: BUILDING AND OPTIMIZING ML PLATFORMS AT SCALE

Speaker:

Marshall Choy

SVP, Product
SambaNova Systems

Marshall Choy is Senior Vice President of Product at SambaNova Systems and is responsible for product management and go-to-market operations.  Marshall has extensive experience leading global organizations to bring breakthrough products to market, establish new market presences, and grow new and existing lines of business.  Marshall was previously Vice President of Product Management at Oracle until 2018.  He was responsible for the portfolio and strategy for Oracle Systems products and solutions.  He led teams that delivered comprehensive end-to-end hardware and software solutions and product management operations.  Prior to joining Oracle in 2010 when it acquired Sun Microsystems, he served as Director of Engineered Solutions at Sun.  During his 11 years there, Marshall held various positions in development, information technology, and marketing. 


8:30 AM - 9:00 AM
Luminary Keynote
Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Speaker:

Alexis Black Bjorlin

VP, Infrastructure Hardware
Meta

Dr. Alexis Black Bjorlin is VP, Infrastructure Hardware Engineering at Meta. She also serves on the board of directors at Digital Realty and Celestial AI. Prior to Meta, Dr. Bjorlin was Senior Vice President and General Manager of Broadcom’s Optical Systems Division and previously Corporate Vice President of the Data Center Group and General Manager of the Connectivity Group at Intel. Prior to Intel, she spent eight years as President of Source Photonics, where she also served on the board of directors. She earned a B.S. in Materials Science and Engineering from Massachusetts Institute of Technology and a Ph.D. in Materials Science from the University of California at Santa Barbara.


Speaker:

Lin Qiao

Senior Director, Engineering
Meta

Lin is an engineering director leading the development of PyTorch, AI compilers, and on-device AI platforms at Facebook. She has driven AI research-to-production innovation across hardware acceleration, enabling model exploration and model scaling, and building production ecosystems and platforms for on-device and privacy-preserving AI. She received a Ph.D. in computer science, started her career as a researcher at the Almaden Research Lab, and later moved to industry as an engineer. Prior to Facebook, she worked across a broad range of distributed and data-processing domains, from high-performance columnar databases, OLTP systems, and stream-processing systems to data warehouse systems and logging and metrics platforms.

 


9:00 AM - 9:30 AM
Chip Design
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering
Industry & Investment

Speaker:

Lip-Bu Tan

Executive Chairman
Cadence

Mr. Lip-Bu Tan is a Co-Founder, Partner, Managing Director, and Chairman of Walden International. Mr. Tan serves as Executive Chairman of Cadence Design Systems Inc. and as Chairman of the Board of SambaNova Systems, Inc. He also serves on the boards of SoftBank and Schneider Electric. Mr. Tan served as Chief Executive Officer of Cadence Design Systems Inc. from January 2009 until late 2021, and as its President from January 2009 to November 2017. Lip-Bu focuses on semiconductors/components, cloud/big data, artificial intelligence, and machine learning. He holds a B.S. in Physics from Nanyang University in Singapore, an M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and an M.B.A. from the University of San Francisco.


9:30 AM - 9:55 AM
Chip Design
ML at Scale
Novel AI Hardware
Hardware Engineering
Strategy
Systems Engineering

Speaker:

Karl Freund

Founder & Principal Analyst
Cambrian AI Research

Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experience as VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, positions him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging Deep Learning opportunity in data centers, from competitive landscape to ecosystem to strategy.

 

Karl has worked directly with datacenter end users, OEMs, ODMs, and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on low-power servers and the emergence of ARM in the datacenter, and has been a featured speaker at scores of investment and industry conferences on the topic.

Accomplishments during his career include:

  • Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
  • Created an industry-wide thought leadership position for Calxeda in the ARM Server market
  • Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
  • Built the IBM Power Server brand from 14% market share to over 50% share
  • Integrated the Tivoli brand into the IBM company’s branding and marketing organization
  • Co-Led the integration of HP and Apollo Marketing after the Boston-based desktop company’s acquisition

 

Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).

 


9:55 AM - 10:20 AM
Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Kunle Olukotun

Co-Founder & Chief Technologist
SambaNova Systems

Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a renowned pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.

Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. The Afara multi-core processor, called Niagara, was acquired by Sun Microsystems and now powers Oracle’s SPARC-based servers.

Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.

Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the prestigious IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering—one of the highest professional distinctions accorded to an engineer.

Kunle received his Ph.D. in Computer Engineering from The University of Michigan.


10:20 AM - 10:45 AM

The true potential of AI rests on super-human learning capacity, and on the ability to selectively draw on that learning. Both of these properties – scale and selectivity – challenge the design of AI computers and the tools used to program them.  A rich pool of new ideas is emerging, driven by a new breed of computing company, according to Graphcore co-founder Simon Knowles. In his talk, Simon discusses the creation of the Intelligence Processing Unit (IPU) – a new type of processor, specifically designed for AI computation. He looks ahead, towards the development of AIs with super-human cognition, and explores the nature of computation systems needed to make powerful AI an economic everyday reality.

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Speaker:

Simon Knowles

Co-Founder, CTO & EVP, Engineering
Graphcore

Simon is co-founder, CTO and EVP Engineering of Graphcore and is the original architect of the “Colossus” IPU.  He has been designing original processors for emergent workloads for over 30 years, focussing on intelligence since 2012.  Before Graphcore, Simon co-founded two other successful processor companies – Element14, acquired by Broadcom in 2000, and Icera, acquired by Nvidia in 2011.  

He is an EE graduate of Cambridge University.


10:45 AM - 11:20 AM
Networking Break
11:20 AM - 11:45 AM
Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Speaker:

Gordon Wilson

Co-Founder & CEO
Rain Neuromorphics

11:45 AM - 12:10 PM
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Speaker:

Cedric Bourrasset

Head of High Performance AI Business Unit
Atos

Dr. Cedric Bourrasset is the AI Business Leader for the High Performance Computing Business Unit at Atos. He is also the AI product manager for the Atos Codex AI Suite, software that enables AI workloads in HPC environments and integrates a computer vision solution. He joined Atos in 2016 as an expert in the HPC/AI domain.

Previously, Cedric received his Ph.D. in Electronics and Computer Vision from Blaise Pascal University in Clermont-Ferrand, where his thesis defended the dataflow model of computation for FPGA high-level synthesis in embedded machine learning applications.


12:10 PM - 1:45 PM
Lunch
1:45 PM - 2:30 PM

As scientific and machine learning workloads converge in the world of HPC, and supercomputing centers gear up for the era of exascale computing, discussions on heterogeneous systems design abound. HPC leaders increasingly need to support converged application workloads that extend beyond AI/HPC to include other computational kernels/patterns like data analytics, graph algorithms, and uncertainty quantification. In this sector, the value of heterogeneity in systems design is clear and promising, even if the method for executing these concepts is still to be determined.

However, in many industrial sectors, enterprise end customers simply use the 'threat' of heterogeneity as a tool to extract a discount from their incumbent vendor. The job of IT, planning for compute, storage, and networking needs, is hard enough that adding a great deal of compute specialization is often not high on a CIO’s priority list.

So, who cares about heterogeneity? Where will heterogeneity in systems design change the game, and what will be its level and quality? 

Chip Design
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Speaker:

Rick Stevens

Assoc. Lab Director, Distinguished Fellow & Head of Exascale
Argonne National Lab

Rick Stevens is Argonne’s Associate Laboratory Director for Computing, Environment and Life Sciences.

Stevens has been at Argonne since 1982, and has served as director of the Mathematics and Computer Science Division and also as Acting Associate Laboratory Director for Physical, Biological and Computing Sciences. He is currently leader of Argonne’s Exascale Computing Initiative, and a Professor of Computer Science at the University of Chicago Physical Sciences Collegiate Division. From 2000-2004, Stevens served as Director of the National Science Foundation’s TeraGrid Project and from 1997-2001 as Chief Architect for the National Computational Science Alliance.

Stevens is interested in the development of innovative tools and techniques that enable computational scientists to solve important large-scale problems effectively on advanced scientific computers. Specifically, his research focuses on three principal areas: advanced collaboration and visualization environments, high-performance computer architectures (including Grids) and computational problems in the life sciences. In addition to his research work, Stevens teaches courses on computer architecture, collaboration technology, virtual reality, parallel computing and computational science.

 


Speaker:

Weifeng Zhang

Chief Scientist, Heterogeneous Computing
Alibaba

Weifeng Zhang is the Chief Scientist of Heterogeneous Computing at Alibaba Cloud Infrastructure, responsible for performance optimization of large-scale distributed applications in the data centers. Weifeng also leads the effort to build an acceleration platform for various ML workloads via heterogeneous resource pooling based on compiler technology. Prior to joining Alibaba, Weifeng was a Director of Engineering at Qualcomm Inc., focusing on GPU compilers and performance optimizations. Weifeng received his B.Sc. from Wuhan University, China, and his Ph.D. in Computer Science from the University of California, San Diego.


Speaker:

Cedric Bourrasset

Head of High Performance AI Business Unit
Atos

2:30 PM - 3:15 PM

AI acceleration is a full stack effort and involves a multidisciplinary and holistic approach to design and optimization.

The field of deep learning has gained substantially from co-design concepts across the AI technology stack. The simultaneous design and optimization of hardware and software has led to new algorithms, numerical optimizations, and AI hardware. 

Looking at the AI stack for workloads like computer vision, NLP, and Ads, in both a vertical and a horizontal sense, there are significant opportunities and challenges for optimization through co-design. This panel will focus on software-defined chips and systems for AI (specs and evaluation, datacenter and edge) and take a systems-level view of co-design, including compilers and runtimes.

Chip Design
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Systems Engineering

Speaker:

Xiaoyong Liu

Director, AI Platform
Alibaba


Speaker:

Kim Hazelwood

Director, Engineering
Meta

Kim Hazelwood is an engineering leader whose expertise lies at the intersection of scalable computer systems and applied machine learning. Her roles at Meta have included multiple engineering organizational leadership roles across Infrastructure and Research. Prior to Facebook, Kim held positions including Director of Research at Yahoo Labs, Software Engineer in the datacenter division of Google, Research Scientist at Intel, and tenured Associate Professor of Computer Science at the University of Virginia. Kim holds a PhD in Computer Science from Harvard University and has authored over 50 publications and one book. She is a recipient of the MIT "Top 35 Innovators under 35" award, the ACM SIGPLAN "Test of Time" Award, the Anita Borg Early Career Award, and an NSF Career Award. She currently serves on the Board of Directors for the Computing Research Association.

3:15 PM - 4:00 PM

Transformers are in high demand for language processing, understanding, classification, generation and translation, particularly in industries like BFSI and healthcare. The parameter counts of models like GPT, which are fast becoming the norm in the world of NLP, are mind-boggling, and the cost of training and deploying them even more so. If the vast potential of large language models (LLMs) is to extend beyond the wealthiest companies and research institutions on the planet, there is a need to evaluate how to lower the barriers to entry for experimentation and research on models like GPT. There is also a need to discuss the extent to which bigger is better in practical and commercial NLP.

This panel will look at the state of play of how enterprises are using large language models today, what their plans are for future research in NLP, and how hardware and systems builders and organizations like Hugging Face can help bring state-of-the-art performance into production in smaller, more resource-constrained enterprises and labs.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Simon Knowles

Co-Founder, CTO & EVP, Engineering
Graphcore

Simon is co-founder, CTO and EVP Engineering of Graphcore and is the original architect of the “Colossus” IPU.  He has been designing original processors for emergent workloads for over 30 years, focussing on intelligence since 2012.  Before Graphcore, Simon co-founded two other successful processor companies – Element14, acquired by Broadcom in 2000, and Icera, acquired by Nvidia in 2011.  

He is an EE graduate of Cambridge University.

Author:

Selcuk Kopru

Director, Engineering & Research, Search
eBay

Selcuk Kopru is Head of ML & NLP at eBay and is an experienced AI leader with proven expertise in creating and deploying cutting edge NLP and AI technologies and systems. He is experienced in developing scalable Machine Learning solutions to solve big data problems that involve text and multimodal data. He is also skilled in Python, Java, C++, Machine Translation and Pattern Recognition. Selcuk is also a strong research professional with a Doctor of Philosophy (PhD) in NLP in Computer Science from Middle East Technical University.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

4:00 PM - 4:35 PM
Networking Break
4:35 PM - 5:20 PM
Developer Efficiency
Enterprise AI
Data Science
Software Engineering
Systems Engineering

Author:

Sakyasingha Dasgupta

Founder & CEO
EdgeCortix

Dr. Sakyasingha Dasgupta is the founder and CEO of Edgecortix, Inc. He is an AI and machine learning technologist, entrepreneur and engineer with real-world experience in taking cutting edge research from ideation stage to scalable products. Having worked at global companies like Microsoft, IBM Research and national research labs like RIKEN and Max Planck Institute, in his more recent roles, he has helped establish and lead technology teams at lean startups in Japan and Singapore, in robotics & automation and Fintech sectors.

After spending more than a decade in research and development across diverse areas such as brain-inspired computing, robotics, computer vision, hardware acceleration for AI, wearable devices, the internet of things, and machine learning in finance and healthcare, Sakya founded EdgeCortix, a deep-tech startup automating machine-learning-driven AI hardware and software co-design for an intelligent distributed edge ecosystem.

Author:

Luis Ceze

Co-founder and CEO
OctoML

Luis Ceze is Co-founder and CEO at OctoML, Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and Venture Partner at Madrona Venture Group. His research focuses on the intersection between computer architecture, programming languages, machine learning and biology. His current focus is on approximate computing for efficient machine learning and DNA-based data storage. He co-directs the Molecular Information Systems Lab (MISL), the Systems and Architectures for Machine Learning lab (SAMPL) and the Sampa Lab for HW/SW co-design. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the IEEE TCCA Young Computer Architect Award and the UIUC Distinguished Alumni Award.

5:20 PM - 5:45 PM

Many system companies are discovering that optimizing AI/ML SoC devices is a very powerful way to achieve differentiation for specific end-applications. In 2021 the semiconductor industry experienced more rounds of venture capital funding, and more dollars invested, than ever before. What's more, the investments in new AI companies alone were higher than all prior yearly totals for all design types combined. Most of these new semiconductor companies targeted specific AI/ML use cases to achieve aggressive performance, power/heat and other system objectives. Now system companies themselves, whether hyperscalers, automotive OEMs, edge or telecommunications companies, are designing their own custom AI/ML SoCs to address their own unique system-level needs.

Joe Sawicki, executive vice president of IC EDA at Siemens, will explain how SoC design solutions are enabling both semiconductor and system companies to efficiently arrive at the global optimization point between power, performance, cost, yield and other factors in their AI/ML hardware designs, all focused on achieving holistic, optimized system-level differentiation.

Chip Design
Edge AI
Novel AI Hardware
Hardware Engineering
Systems Engineering

Author:

Joseph Sawicki

EVP, IC EDA
Siemens

Joseph Sawicki is a leading expert in IC nanometer design and manufacturing challenges. Formerly responsible for Mentor's industry-leading design-to-silicon products, including the Calibre physical verification and DFM platform and Mentor's Tessent design-for-test product line, Sawicki now oversees all business units in the Siemens EDA IC segment.

Sawicki joined Mentor Graphics in 1990 and has held previous positions in applications engineering, sales, marketing, and management. He holds a BSEE from the University of Rochester, an MBA from Northeastern University's High Technology Program, and has completed the Harvard Business School Advanced Management Program.

 

Joseph Sawicki

EVP, IC EDA
Siemens

Joseph Sawicki is a leading expert in IC nanometer design and manufacturing challenges. Formerly responsible for Mentor's industry-leading design-to-silicon products, including the Calibre physical verification and DFM platform and Mentor's Tessent design-for-test product line, Sawicki now oversees all business units in the Siemens EDA IC segment.

Sawicki joined Mentor Graphics in 1990 and has held previous positions in applications engineering, sales, marketing, and management. He holds a BSEE from the University of Rochester, an MBA from Northeastern University's High Technology Program, and has completed the Harvard Business School Advanced Management Program.

 

5:45 PM - 6:15 PM
Closing Headline Keynote

Macrotrends in innovation are leveraging both software and chips to create the next round of world-changing products. Unlocking the vast potential offered by this innovation model is daunting, however. Systemic complexity across all disciplines, from silicon to software, must be addressed in a holistic way to achieve success. AI applications change over months while chip design can take years, adding to the challenges. Talent shortages also create headwinds. And as more system companies engage in chip design, these headwinds can have a profound impact on the pace of innovation.

Complex chip and system design must be easier to achieve in less time. Sassine Ghazi will discuss several developing strategies that use AI and machine learning techniques to dramatically reduce design time and design risk, opening the opportunity for substantial increases in the pace of innovation.

Chip Design
Edge AI
Novel AI Hardware
Hardware Engineering
Systems Engineering

Author:

Sassine Ghazi

President & COO
Synopsys

Sassine Ghazi leads and drives strategy for all business units, sales and customer success, strategic alliances, marketing and communications at Synopsys. He joined the company in 1998 as an applications engineer. He then held a series of sales positions with increasing responsibility, culminating in leadership of worldwide strategic accounts. He was then appointed general manager for all digital and custom products, the largest business group in Synopsys. Under his leadership, several innovative solutions were launched in areas such as multi-die systems, AI-assisted design and silicon lifecycle management. He assumed the role of chief operating officer in August 2020 and was appointed president in November 2021. Prior to Synopsys he was a design engineer at Intel.

Sassine holds a bachelor's degree in Business Administration from Lebanese American University, a B.S.E.E. from the Georgia Institute of Technology and an M.S.E.E. from the University of Tennessee.

6:30 PM - 8:00 PM
Thursday, 15 Sep, 2022
8:30 AM - 9:00 AM
Enterprise AI
ML at Scale
NLP
Data Science
Software Engineering
Strategy

Author:

Agus Sudjianto

Executive Vice President, Head of Model Risk
Wells Fargo

Agus Sudjianto is an executive vice president, head of Model Risk and a member of the Management Committee at Wells Fargo, where he is responsible for enterprise model risk management. Prior to his current position, Agus was the modeling and analytics director and chief model risk officer at Lloyds Banking Group in the United Kingdom. Before joining Lloyds, he was an executive and head of Quantitative Risk at Bank of America. Prior to his career in banking, he was a product design manager in the Powertrain Division of Ford Motor Company. Agus holds several U.S. patents in both finance and engineering. He has published numerous technical papers and is a co-author of Design and Modeling for Computer Experiments. His technical expertise and interests include quantitative risk, particularly credit risk modeling, machine learning and computational statistics. He holds masters and doctorate degrees in engineering and management from Wayne State University and the Massachusetts Institute of Technology.

9:00 AM - 9:30 AM

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Mark Russinovich

CTO, Azure
Microsoft

Mark Russinovich is Chief Technology Officer of Microsoft Azure, where he oversees the technical strategy and architecture of Microsoft’s cloud computing platform. He is a widely recognized expert in distributed systems, operating system internals, and cybersecurity. He is the author of the Jeff Aiken cyberthriller novels, Zero Day, Trojan Horse, and Rogue Code, and co-author of the Microsoft Press Windows Internals books. Russinovich joined Microsoft in 2006 when Microsoft acquired Winternals Software, the company he cofounded in 1996, as well as Sysinternals, where he authors and publishes dozens of popular Windows administration and diagnostic utilities. He is a featured speaker at major industry conferences, including Microsoft Ignite, Microsoft //build, RSA Conference, and more.

9:30 AM - 10:00 AM
10:00 AM - 10:40 AM

In developing applications for a variety of different infrastructure and hardware targets, machine learning developers face a dynamic and uncertain landscape where optimization and interoperability become challenging tasks. 

This panel will address how to build infrastructure with developer efficiency in mind, so that developers can focus on creating game-changing machine learning solutions for organizations and consumers. It will also address how hardware, systems and other technology vendors can assist in this effort.

Developer Efficiency
Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Divya Jain

Director of Adobe Sensei platform
Adobe

Divya Jain is an industry-recognized product and technology leader in machine learning and AI with 15+ years of industry experience at various startups and Fortune 500 companies. She is currently serving as an Engineering Director for the Sensei ML platform at Adobe. Before this she was a Research Director at the Tyco Innovation Garage, where she led various deep learning initiatives in the video surveillance space. She also co-founded a startup, dLoop Inc., which was acquired by Box in 2013. At Box, Divya led the team that built the first machine learning capabilities into the Box platform. She is very passionate about the open sharing of knowledge and information, and is always working toward bridging the technology gap for product innovation.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

10:40 AM - 11:10 AM
Networking Break
TRACK A: HARDWARE & SYSTEMS | TRACK B: MODELS & DATA
11:10 AM - 11:35 AM
Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Carole-Jean Wu

Research Scientist
Facebook

Carole-Jean Wu is a Research Scientist at Facebook AI Research. Her research focuses on designing systems for at-scale execution of machine learning, such as personalized recommender systems and for mobile deployment. More generally, her research interests are in computer architecture with particular focus on energy- and memory-efficient systems. Carole-Jean chairs MLPerf Recommendation Benchmark Advisory Board and co-chairs MLPerf Inference. She received her M.A. and Ph.D. from Princeton and B.Sc. from Cornell. She holds tenure from ASU and is the recipient of the NSF CAREER Award, Facebook AI Infrastructure Mentorship Award, the IEEE Young Engineer of the Year Award, the Science Foundation Arizona Bisgrove Early Career Scholarship, and the Intel PhD Fellowship, among a number of Best Paper awards.

Enterprise AI
ML at Scale
Data Science
Software Engineering
Strategy

Author:

Dr. Caiming Xiong

VP of AI Research and Applied AI
Salesforce

Dr. Caiming Xiong is VP of AI Research and Applied AI at Salesforce. Dr. Xiong holds a Ph.D. from the department of Computer Science and Engineering, University at Buffalo, SUNY and worked as a Postdoctoral Researcher Scholar at the University of California, Los Angeles (UCLA).

11:40 AM - 12:20 PM

The relentless growth in the size and sophistication of AI models and data sets continues to put pressure on every aspect of AI processing systems. Advances in domain-specific architectures and hardware/software co-design have resulted in enormous increases in AI processing performance, but the industry needs even more. Memory systems and interconnects that supply data to AI processors will continue to be of critical importance, requiring additional innovation to meet the needs of future processors. Join Rambus Fellow and Distinguished Inventor, Dr. Steven Woo, as he leads a panel of technology experts in discussing the importance of improving memory and interfaces and enabling new system architectures, in the quest for greater AI/ML performance.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus


I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness: we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

12:20 PM - 1:50 PM
Lunch
1:50 PM - 2:15 PM

Approximately one year ago, Samsung confirmed the world's first use of AI to design a mobile processor chip. Since then, AI-driven design has been adopted across the industry at a phenomenal pace, accelerating silicon innovations to market in automotive, high-performance computing, consumer electronics, and other applications. Will this pace of innovation ultimately lead to self-designed silicon? In this sequel to the Day-1 keynote, "Enter the Era of Autonomous Design: Personalizing Chips for 1,000X More Powerful AI Compute," we will look at real-world examples of using AI to design chips and report on the industry's path to autonomous design.

Chip Design
Novel AI Hardware
Hardware Engineering
Industry & Investment

Author:

Stelios Diamantidis

Senior Director & Head of Autonomous Design Solutions
Synopsys

Stelios heads Synopsys' AI Solutions team in the Office of the President, where he researches and applies innovative machine-learning technology to address systemic complexity in the design and manufacturing of integrated computational systems. In 2020, Stelios launched DSO.ai™, the world's first autonomous AI application for chip design. He has more than 20 years of experience in chip design and EDA software and has founded two companies in this space. Stelios holds an M.S. in Electrical Engineering from Stanford University, California.

Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Daniel Wu

Head of AI & ML, Commercial Banking
JPMorgan Chase

Daniel Wu is a technical leader who brings more than 20 years of expertise in software engineering, AI/ML, and high-impact team development. He is the Head of Commercial Banking AI and Machine Learning at JPMorgan Chase where he drives financial service transformation through AI innovation. His diverse professional background also includes building point of care expert systems for physicians to improve quality of care, co-founding an online personal finance marketplace, and building an online real estate brokerage platform.

Daniel is passionate about the democratization of technology and the ethical use of AI - a philosophy he shares in the computer science and AI/ML education programs he has contributed to over the years.


2:20 PM - 3:00 PM
Chip Design
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering
Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Supriya Gupta

GM, Recommendations
Credit Karma

As Credit Karma's head of Recommendations, Supriya is charged with personalizing the Credit Karma app experience for the company's more than 120 million members. When she joined the team more than three years ago, Supriya oversaw the transformation of the company's recommendations engine into a dynamic, personalized deep-learning-based system that leverages data and machine learning to deliver tailored recommendations and offers to members. This means members are served the financial products, tools and recommendations that best align with their financial goals and ambitions, at the right time. Under Supriya's leadership, Credit Karma's recommendations system has grown and transformed significantly over the years, and her team now serves more than 35 billion model predictions each day. Today, her business unit partners with nearly every team at Credit Karma and drives a significant portion of revenue for the business. Prior to joining Credit Karma, Supriya worked in product at Facebook, where she led product strategy and development for a number of ads products. Supriya holds a B.S. and M.S. in Electrical Engineering from the University of Illinois, and an M.B.A. from The Wharton School of the University of Pennsylvania.

Author:

Vishnu Ram

VP of Data Science and Engineering
Credit Karma

As VP of Data Science and Engineering at Credit Karma, Vishnu has been instrumental in building and scaling Credit Karma's machine learning infrastructure, powering personalized recommendations at scale in pursuit of helping its 120 million members make financial progress. Having been with Credit Karma for nearly eight years, Vishnu has led the development strategy and implementation of the company's deep-learning-based system, built around an internally developed machine learning platform that helps Credit Karma transform and manage data at scale. His team runs 35 billion model predictions a day and is tasked with building sophisticated models that boost platform innovation and deliver an optimized product experience for Credit Karma's 120 million members. Prior to joining Credit Karma, Vishnu held CTO roles at Nykaa and Games24x7.

3:00 PM - 3:25 PM

Deep neural networks (DNNs), a subset of machine learning (ML), provide a foundation for automating conversational artificial intelligence (CAI) applications. FPGAs provide hardware acceleration that enables high-density, low-latency CAI. In this presentation, we will give an overview of CAI and its data center use cases, describe the traditional compute model and its limitations, and show how an ML compute engine integrated into the Achronix FPGA can lead to 90% cost reductions for speech transcription.

 

Enterprise AI
NLP
Novel AI Hardware
ML at Scale
Data Science
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Tom Spencer

Senior Manager, Product Marketing
Achronix

3:25 PM - 3:45 PM
Networking Break
3:45 PM - 4:10 PM
Chip Design
Edge AI
Enterprise AI
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Harshit Khaitan

Director, AI Accelerators
Meta

Harshit Khaitan is the Director of AI Accelerators at Meta, where he leads the building of AI accelerators for Reality Labs products. Prior to Meta, he was technical lead and co-founder of the edge machine learning accelerator efforts at Google, responsible for the ML accelerators in Google Pixel 4 (Neural Core) and Pixel 6 (Google Tensor SoC). He has also held individual-contributor and technical leadership positions on Google's first Cloud TPU, Nvidia Tegra SoCs and Nvidia GPUs. He holds more than 10 US and international patents in on-device AI acceleration, and has a Master's degree in Computer Engineering from North Carolina State University and a Bachelor's degree in Electrical Engineering from Manipal Institute of Technology, India.

Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Systems Engineering

Author:

Dan McCreary

Distinguished Engineer, Graph & AI
Optum

Dan is a distinguished engineer in AI working on innovative database architectures, including document and graph databases. He has a strong background in semantics, ontologies, NLP and search. He is a hands-on architect and likes to build his own pilot applications using new technologies. Dan started the NoSQL Now! Conference (now called the Database Now! Conferences). He also co-authored Making Sense of NoSQL, one of the highest-rated books on NoSQL on Amazon. Earlier in his career, Dan worked at Bell Labs as a VLSI circuit designer alongside Brian Kernighan (of K&R C), and also worked with Steve Jobs at NeXT Computer.

4:10 PM - 4:50 PM

As customer success stories from AI accelerator startups proliferate and traction ramps up, it is becoming clear which ML workloads are most amenable to domain-specific architectures, and which market sectors are most likely to adopt novel AI acceleration technologies.

With one company still retaining the majority of datacenter market share, and the edge still largely uncharted territory, it might seem a difficult time to launch a new accelerator company. But opportunities to capture market share across the cloud-edge continuum definitely exist. In HPC, certain ML and non-ML scientific workloads have seen extraordinary, demonstrable speedups on novel ML systems architectures, and the scientific community only sees demand for accelerating these workloads growing. At the edge, some AI chip companies are already shipping in volume, while new applications emerge continuously.

This panel will look at what it takes to make it in the AI hardware game, what might shift the balance of power in the datacenter, and how companies can find a niche at the edge.

Enterprise AI
Novel AI Hardware
Strategy
Industry & Investment

Author:

Rashmi Gopinath

General Partner
B Capital Group

Rashmi Gopinath is a General Partner at B Capital Group, where she focuses on investments in cloud infrastructure, cybersecurity, DevOps, and artificial intelligence and machine learning. She brings over two decades of experience investing and operating in enterprise technologies, and led B Capital's investments in Synack, Innovaccer, Yalo and Armory. Ms. Gopinath was previously a Managing Director at M12, Microsoft's venture fund, where she led investments globally in the enterprise space and sat on several boards, including Synack, Innovaccer, Contrast Security, Paxata, Unravel Data and Incorta. Prior to M12, Ms. Gopinath was an Investment Director with Intel Capital, where she was involved in the firm's investments in startups including MongoDB, ForeScout, Maginatics and BlueData. She held operating roles at high-growth startups such as BlueData and Couchbase, leading global business development and product marketing, and began her career in engineering and product roles at Oracle and GE Healthcare. Ms. Gopinath earned an M.B.A. from Northwestern University and a B.S. in Engineering from the University of Mumbai in India.


Author:

Yvonne Lutsch

Investment Principal
Bosch Ventures

Yvonne is an Investment Principal at Robert Bosch Venture Capital's (RBVC) affiliate office in Sunnyvale, responsible for sourcing, evaluating and executing RBVC's investments in North America in deep-tech fields such as machine learning/AI, edge computing, industrial IoT, mobility, quantum computing, and sensors. She is an investor and non-executive board member at RBVC's portfolio companies InSyte Systems, Syntiant, Zapata Computing, and UltraSense Systems. Prior to this position, Yvonne was Director of Technology Scouting and Business Development for Bosch Automotive Electronics, building up an Innovation Center for the division in North America. Before that, she held leadership positions in quality management, operations and engineering in automotive and consumer electronics with Bosch in Germany. Yvonne received a diploma in Experimental Physics from the University of Siegen, Germany, and holds a PhD in Applied Physics from the University of Tuebingen, Germany.


Author:

Gayathri Radhakrishnan

Senior Director, Venture Capital - AI Fund
Micron

Gayathri Radhakrishnan is part of the investment team at Micron Ventures, investing from its $100M AI fund. She invests in startups that leverage AI/ML to solve critical problems in manufacturing, healthcare, automotive and AgTech. She brings 20 years of multidisciplinary experience across product management, product marketing, corporate strategy, M&A and venture investments, both in large Fortune 500 companies such as Dell and Corning and in startups. She has also worked as an early-stage investor at Earlybird Venture Capital, a premier European venture capital fund based in Germany. She holds a Master's in EE from The Ohio State University and an MBA from INSEAD in France, and is a Kauffman Fellow.

As AI makes its way into healthcare and medical applications, the role of hardware accelerators in the successful deployment of large AI models becomes more and more important. Today's large language models, such as GPT-3 and T5, offer unprecedented opportunities to solve challenging healthcare business problems like drug discovery, medical term mapping and insight generation from electronic health records. However, efficient and cost-effective training, as well as deployment and maintenance of such models in production, remains a challenge for the healthcare industry. This presentation will review open challenges and opportunities in the healthcare industry and the benefits that AI hardware innovation may bring to ML utilization.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Hooman Sedghamiz

Director of AI & ML
Bayer

Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products ranging from implantable and wearable medical and imaging devices to bioinformatics and pharmaceutical products for a variety of multinational medical companies.

He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g. pacemakers, cochlear and retinal implants), as well as advanced computational biology studying the time evolution of cellular networks associated with cancer, depression and other illnesses.

His experience in healthcare also extends to image processing for computed tomography (CT) and interventional X-ray (iX-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG and ACC.

Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.

4:50 PM - 5:20 PM
Closing Keynote

Jump to: Day 1 | Day 2 | Day 3

Download The Information Pack

Now that you've seen our jam-packed agenda, download the Information Pack for further details about the event and exactly why you should attend!

Other events you might be interested in:

Edge AI Summit 2022