Hardware Engineering | Page 3 | Kisaco Research

Hardware Engineering


In this presentation, the RISC-V founder will highlight how RISC-V and vector compute are gaining momentum in AI, ML, and computer vision, and how they address challenges such as managing power consumption, excess data movement, the need for multiple libraries, and generational incompatibility. To overcome these obstacles, many of the world’s largest data and device companies are turning to vector processing based on the RISC‑V Vector (RVV) 1.0 ISA.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Hardware Engineering
Systems Engineering
Strategy

Author:

Krste Asanovic

Co-Founder & Chief Architect
SiFive

Krste is SiFive’s Chief Architect and a Co-Founder. He is also a Professor in the EECS Department at the University of California, Berkeley, where he serves as Director of the ADEPT Lab. Krste leads the RISC‑V ISA project at Berkeley and is Chairman of the RISC‑V Foundation. He is an ACM Fellow and an IEEE Fellow. Krste received his PhD from UC Berkeley and a BA in Electrical and Information Sciences from the University of Cambridge.

Deep neural networks (DNNs), a subset of machine learning (ML), provide a foundation for automating conversational artificial intelligence (CAI) applications. FPGAs provide hardware acceleration that enables high-density, low-latency CAI. In this presentation, we will give an overview of CAI and its data center use cases, describe the traditional compute model and its limitations, and show how an ML compute engine integrated into the Achronix FPGA can cut speech-transcription costs by 90%.

Enterprise AI
NLP
Novel AI Hardware
ML at Scale
Data Science
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Salvador Alvarez

Senior Manager, Product Planning
Achronix
Salvador Alvarez is Senior Manager of Product Planning at Achronix, where he coordinates the research, development, and launch of new Achronix products and solutions. With over 20 years of experience in product growth, roadmap development, and competitive intelligence and analysis across the semiconductor, automotive, and edge AI industries, Sal is a recognized expert in helping customers realize the advantages of edge AI and deep learning technology over legacy cloud AI approaches. He holds a B.S. in computer science and electrical engineering from the Massachusetts Institute of Technology.

Chip Design
Edge AI
Enterprise AI
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Harshit Khaitan

Director, AI Accelerators
Meta

Harshit Khaitan is Director of AI Accelerators at Meta, where he leads the building of AI accelerators for Reality Labs products. Prior to Meta, he was technical lead and co-founder of the edge machine-learning accelerator effort at Google, responsible for the MLA in the Google Pixel 4 (Neural Core) and Pixel 6 (Google Tensor SoC). He has also held individual-contributor and technical leadership positions on Google’s first Cloud TPU, Nvidia Tegra SoCs, and Nvidia GPUs. He holds 10+ US and international patents in on-device AI acceleration. He has a Master’s degree in Computer Engineering from North Carolina State University and a Bachelor’s degree in Electrical Engineering from Manipal Institute of Technology, India.

Chip Design
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Nitza Basoco

VP, Business Development
proteanTecs

Nitza Basoco is a technology leader with over 20 years of semiconductor experience. At proteanTecs, she leads the Business Development team, responsible for driving partnership strategies and building value-add ecosystem growth. 

Previously, Nitza was the VP of Operations at Synaptics with responsibility for growing and scaling their worldwide test development, product engineering and manufacturing departments. Prior to Synaptics, Nitza spent a decade holding various leadership positions within the operations organization at MaxLinear, ranging from test development engineering to supply chain. Earlier in her career, Nitza served as a Principal Test Development Engineer for Broadcom Corporation and as a Broadband Applications Engineer at Teradyne.  

Nitza holds MEng and BSEE degrees from Massachusetts Institute of Technology.

Author:

Judy Priest

Distinguished Engineer & VP, GM
Microsoft

Judy Priest is a Distinguished Engineer in Microsoft's Cloud and AI group, where she drives innovation, integration, and operations in next-generation data center platforms supporting Azure, AI, and Microsoft's enterprise software. Judy has over 25 years of experience developing data center systems and silicon, high-speed signaling technologies and optics, circuit design, and physical architectures for compute, storage, graphics, and networking.

Judy has previously worked at Cisco Systems, Silicon Graphics, Hewlett-Packard, and Digital Equipment Corporation, as well as at two startup ventures. She serves on the Board of Directors of Women's Audio Mission, a San Francisco nonprofit moving the needle for girls, women, and gender-nonconforming individuals in STEM through music. Judy was named to Business Insider's 2018 Most Powerful Female Engineers list and InterCon Networking's 2020 Top 100 Leaders in Engineering.

Author:

Shivam Bharuka

Software Production Engineer
Meta

Shivam is an engineering leader on Meta's AI Infrastructure team, where he has spent the last three years helping scale Meta's machine-learning training infrastructure to support large-scale ranking and recommendation models serving more than a billion users. He is responsible for driving performance-, reliability-, and efficiency-oriented designs across the components of Meta's ML training stack. Shivam holds a B.S. and an M.S. in Computer Engineering from the University of Illinois at Urbana-Champaign.

Author:

Jim von Bergen

Senior Director, Product Quality Engineering
Cisco

Approximately one year ago, Samsung confirmed the world’s first use of AI to design a mobile processor chip. Since then, AI-driven design has been adopted across the industry at a phenomenal pace, accelerating silicon innovation to market in automotive, high-performance computing, consumer electronics, and other applications. Will this pace of innovation ultimately lead to self-designed silicon? In this sequel to the Day 1 keynote, "Enter the Era of Autonomous Design: Personalizing Chips for 1,000X More Powerful AI Compute," we will look at real-world examples of using AI to design chips and report on the industry’s path to autonomous design.

Chip Design
Novel AI Hardware
Hardware Engineering
Industry & Investment

Author:

Stelios Diamantidis

Senior Director & Head of Autonomous Design Solutions
Synopsys

Stelios heads Synopsys' AI Solutions team in the Office of the President, where he researches and applies innovative machine-learning technology to address systemic complexity in the design and manufacturing of integrated computational systems. In 2020, Stelios launched DSO.ai™, the world’s first autonomous AI application for chip design. He has more than 20 years of experience in chip design and EDA software and has founded two companies in this space. Stelios holds an M.S. in Electrical Engineering from Stanford University.

The relentless growth in the size and sophistication of AI models and data sets continues to put pressure on every aspect of AI processing systems. Advances in domain-specific architectures and hardware/software co-design have delivered enormous increases in AI processing performance, but the industry needs even more. The memory systems and interconnects that supply data to AI processors will remain critically important and will require further innovation to meet the needs of future processors. Join Rambus Fellow and Distinguished Inventor Dr. Steven Woo as he leads a panel of technology experts discussing the importance of improving memory and interfaces, and of enabling new system architectures, in the quest for greater AI/ML performance.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness: we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Euicheol Lim is a Research Fellow and leader of the system architecture team in memory systems research at SK hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK hynix in 2016 as a system architect in memory systems research. Before joining SK hynix, he was an SoC architect at Samsung Electronics, where he led the architecture of most of the Exynos mobile SoC series. His current research interests are memory and storage system architectures for AI and big-data systems built on a variety of new memory media.

Author:

Sumti Jairath

Chief Architect
SambaNova Systems

Sumti Jairath is Chief Architect at SambaNova Systems, with expertise in hardware-software co-design. He worked on PA-RISC-based Superdome servers at HP, followed by several generations of SPARC CMT processors at Sun Microsystems and Oracle. At Oracle, Sumti worked on SQL, data-analytics, and machine-learning acceleration in SPARC processors. He holds 27 patents in computer architecture and hardware-software co-design.

Author:

Matt Fyles

SVP, Software
Graphcore

Matt Fyles is a computer scientist with over 20 years of proven experience in the design, delivery, and support of software and hardware in the microprocessor market. As SVP of Software at Graphcore, Matt built the company’s Poplar software stack from scratch, co-designed with the IPU for machine intelligence. He currently oversees the Software team’s work on the Poplar SDK, helping to support Graphcore’s growing community of developers.

We have witnessed a major paradigm shift in how AI affects our daily lives. While AI model training is typically done in cloud infrastructure, model inference has grown enormously on power-, area-, bandwidth-, and memory-constrained edge devices.

These inference workloads have varying computational and memory needs, along with stringent power and silicon-area requirements that can be very challenging to meet. AI-led innovation is shaping the next generation of embedded hardware and software design alike. This talk will illustrate the design philosophies and challenges behind designing best-in-class AI hardware accelerators.

Chip Design
Novel AI Hardware
Hardware Engineering
Strategy
Systems Engineering

Author:

Sriraman Chari

Fellow & Head of AI Accelerator IP Solution
Cadence Design Systems

Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering
Hardware Engineering

Author:

Victor Peng

President, Adaptive and Embedded Computing Group
AMD

Victor Peng is President of the Adaptive and Embedded Computing Group at AMD. He is responsible for AMD’s Adaptive SmartNIC, FPGA, Adaptive SoC, embedded CPU, and embedded APU businesses, which serve multiple market segments including data center, communications, automotive, industrial, aerospace and defense, healthcare, test/measurement/emulation, and other embedded markets. Peng also serves on the board of KLA Corporation.

Peng rejoined AMD in 2022 after 14 years at Xilinx, most recently serving as president and CEO. Prior to joining Xilinx, Peng worked at AMD as corporate vice president of silicon engineering for the graphics products group (GPG) and was the co-leader of the central silicon engineering team supporting graphics, game console products, and CPU chipsets. Prior to that, Peng held executive and engineering leadership roles at ATI, TZero Technologies, MIPS Technologies, SGI, and Digital Equipment Corp. 

Macrotrends in innovation are leveraging both software and chips to create the next round of world-changing products. Unlocking the vast potential offered by this innovation model is daunting, however. Systemic complexity across all disciplines, from silicon to software, must be addressed holistically to achieve success. AI applications change over months while chip designs can take years, adding to the challenge. Talent shortages create further headwinds. And as more system companies engage in chip design, these headwinds can have a profound impact on the pace of innovation.

Complex chip and system design must be easier to achieve in less time. Sassine Ghazi will discuss several developing strategies that use AI and machine learning techniques to dramatically reduce design time and design risk, opening the opportunity for substantial increases in the pace of innovation.

Chip Design
Edge AI
Novel AI Hardware
Hardware Engineering
Systems Engineering

Author:

Sassine Ghazi

President & COO
Synopsys

Sassine Ghazi leads and drives strategy for all business units, sales and customer success, strategic alliances, and marketing and communications at Synopsys. He joined the company in 1998 as an applications engineer and then held a series of sales positions with increasing responsibility, culminating in leadership of worldwide strategic accounts. He was then appointed general manager of all digital and custom products, the largest business group in Synopsys. Under his leadership, several innovative solutions were launched in areas such as multi-die systems, AI-assisted design, and silicon lifecycle management. He assumed the role of chief operating officer in August 2020 and was appointed president in November 2021. Prior to Synopsys, he was a design engineer at Intel.

 

Sassine holds a bachelor’s degree in Business Administration from the Lebanese American University, a B.S.E.E. from the Georgia Institute of Technology, and an M.S.E.E. from the University of Tennessee.
