
Software Engineering

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Alexis Black Bjorlin

VP, Infrastructure Hardware
Meta

Dr. Alexis Black Bjorlin is VP, Infrastructure Hardware Engineering at Meta. She also serves on the board of directors at Digital Realty and Celestial AI. Prior to Meta, Dr. Bjorlin was Senior Vice President and General Manager of Broadcom’s Optical Systems Division and previously Corporate Vice President of the Data Center Group and General Manager of the Connectivity Group at Intel. Prior to Intel, she spent eight years as President of Source Photonics, where she also served on the board of directors. She earned a B.S. in Materials Science and Engineering from Massachusetts Institute of Technology and a Ph.D. in Materials Science from the University of California at Santa Barbara.

AI Hardware Summit attendees are invited to attend an extended networking session where they can meet attendees from across both events. The Meet & Greet is a perfect opportunity to reconnect with peers, expand your network, and discuss the state of ML across the cloud-edge continuum!

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Colin Murdoch

Chief Business Officer
DeepMind

Decades of international commercial experience and deep technical expertise mean Colin is uniquely placed to ensure DeepMind’s cutting-edge research benefits as many people as possible. As Chief Business Officer of DeepMind, he oversees a wide range of teams including Applied, which applies research breakthroughs to Google products and infrastructure used by billions of people. He also helps drive the growth of DeepMind, building and leading critical functions including finance and strategy, and managing external and commercial partnerships. Originally an electronics and software engineer, he has held senior positions at both start-ups and global companies such as Thomson Reuters, helping them solve their own complex, mission-critical, real-world challenges.

Author:

Cade Metz

Technology Correspondent
New York Times

Cade Metz is a reporter with The New York Times, covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Genius Makers is his first book. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites.

A native of North Carolina and a graduate of Duke University, Metz, 48, works in The New York Times’ San Francisco bureau and lives across the bay with his wife Taylor and two daughters.

State-of-the-art large language models (LLMs) are empowering organizations to unlock critical insights in their unstructured data. Despite the opportunity, the cost and complexity of developing these models internally make them impractical for most organizations to build on their own. SambaNova overcomes these challenges with production-ready, pre-trained LLMs delivered through a full-stack solution, which can be further adapted through unlimited fine-tuning or pre-training within an organization’s own environment. In this workshop, we will showcase prototype solutions for enterprise semantic search, legal compliance analysis, and call center service analysis, built on top of pretrained LLMs available through SambaNova Dataflow-as-a-Service, followed by live trials of the demo to showcase the potential of large language models.
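
To make the prototyping concrete, here is a minimal, generic sketch of a semantic-search prototype of the kind described above, using an openly available pretrained encoder from the Hugging Face Hub as a stand-in. It is illustrative only and does not use SambaNova's Dataflow-as-a-Service API; the model name, pooling strategy, and sample documents are assumptions made for the example.

# Illustrative only: a tiny semantic-search prototype over a handful of
# documents, using an open pretrained encoder as a stand-in for an enterprise
# LLM endpoint. Not the SambaNova Dataflow-as-a-Service API.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "sentence-transformers/all-MiniLM-L6-v2"  # assumed, publicly available
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def embed(texts):
    """Mean-pool the last hidden state into one normalized vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)             # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)                      # (B, H)
    counts = mask.sum(dim=1).clamp(min=1)
    return torch.nn.functional.normalize(summed / counts, dim=1)

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Data retention requirements are described in the compliance handbook.",
    "Call center agents should escalate billing disputes to tier two.",
]
doc_vectors = embed(documents)

query = "How long do customers have to return an item?"
scores = doc_vectors @ embed([query]).T                      # cosine similarity
best = int(scores.argmax())
print(f"Best match (score {scores[best].item():.3f}): {documents[best]}")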

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs. 

Workshops are by application only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer / AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

Developer Efficiency
NLP
Novel AI Hardware
Data Science
Software Engineering

Author:

Jian Zhang

Director, Machine Learning
SambaNova Systems

During this workshop, attendees will be brought up to date on the state of the art in computer vision use cases and learn how to build deep learning models for object detection while improving model performance. Atos' expert host will share best practices on team organization to facilitate success. Finally, attendees will learn how to implement an engineering strategy: building a project template and data versioning, setting up experiment tracking and feedback, and leveraging testing to boost model promotion.
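
As a rough illustration of the kind of workflow covered, the sketch below fine-tunes a COCO-pretrained torchvision detector for a custom class count and keeps a small run config of the sort an experiment tracker would record. This is a generic example under assumed hyperparameters, not Atos' project template.

# Illustrative only: fine-tuning a pretrained torchvision detector for a custom
# object-detection task, with a tiny run config in the spirit of the project
# template / experiment tracking discussed above. Class count and
# hyperparameters are placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

config = {"num_classes": 3, "lr": 5e-3, "epochs": 1}  # log alongside results

# Start from COCO-pretrained weights and swap in a new classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, config["num_classes"])

optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"], momentum=0.9)

# One dummy training step; replace with a real DataLoader over annotated images.
images = [torch.rand(3, 256, 256)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),
            "labels": torch.tensor([1])}]

model.train()
loss_dict = model(images, targets)          # detection models return a loss dict
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
print({k: round(v.item(), 4) for k, v in loss_dict.items()})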

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs. 

Workshops are by application only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer / AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

Developer Efficiency
Edge AI
Novel AI Hardware
Data Science
Software Engineering

Author:

Ashwin Sakhare

Senior Data Scientist, Atos zData
Atos

Ashwin Sakhare, PhD, is a Senior Data Scientist at Atos zData, a leading AI and data science firm. Ashwin leverages over a decade of industry and research experience to solve key business challenges through novel product development and data-driven machine learning approaches. He is a computer vision expert and has delivered AI and machine vision solutions to clients across a broad range of application domains. He has a strong healthcare industry background, where he led the ideation, design, and development of AI products. Ashwin holds a BS in Biomedical Engineering from North Carolina State University and an MS and PhD in Biomedical Engineering from the University of Southern California.

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs. 

Workshops are by application only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer / AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

Developer Efficiency
Novel AI Hardware
Data Science
Software Engineering
Host

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Régis Pierrard

Machine Learning Engineer
HuggingFace

Author:

Philipp Schmid

Tech Lead
HuggingFace

Using Hugging Face Optimum, we will show how easy it is to run and accelerate an end-to-end, state-of-the-art Transformer model workflow on IPUs. We’ll provide a demo that fine-tunes a BERT-Large Transformer model on IPUs using Hugging Face Optimum and then serves the model using the inference API. Join us as we look under the hood of our Intelligence Processing Unit to see how our unique architecture, combined with Optimum’s simple plug-and-play experience, enables faster performance on today’s most popular ML models. By the end of the talk, you will have a better understanding of the wide range of off-the-shelf NLP and computer vision Transformer models available through our integration with the Hugging Face ecosystem, as well as how to access this walkthrough using our free IPU runtimes in the cloud.
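
For orientation, here is a minimal sketch of that fine-tuning flow based on the public optimum-graphcore package. The class names (IPUConfig, IPUTrainer, IPUTrainingArguments) follow its documented API, but the IPU config repository name, dataset, and hyperparameters below are assumptions for illustration and may differ from the exact workshop demo.

# A minimal sketch, assuming `pip install optimum-graphcore`; repo names,
# dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-large-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)
dataset = dataset.map(tokenize, batched=True)

# The IPUConfig describes how the model is pipelined across IPUs; the repo
# name below is an assumption.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu")

args = IPUTrainingArguments(
    output_dir="bert-large-sst2-ipu",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
trainer.evaluate()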

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs.  


Workshops are by application only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer / AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.


Developer Efficiency
Novel AI Hardware
Data Science
Software Engineering
Host

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Régis Pierrard

Machine Learning Engineer
HuggingFace

Author:

Philipp Schmid

Tech Lead
HuggingFace

Author:

Tim Santos

Developer Relations Director
Graphcore

Tim leads Developer Relations at Graphcore, helping the AI & ML community achieve maximum success with IPUs and make the next breakthroughs in machine intelligence. Tim has worn many developer hats in his career, from research engineer and data scientist to leading MLOps teams. Along the way, he’s gained experience across all stages of the development lifecycle, taking AI applications from experimentation to deployment. If you’re looking to try out IPUs, learn more about our Poplar SDK and tools, showcase your innovations, connect with the community, request educational resources, or provide feedback on our technology, then Tim is your champion.

Developer workshops are restricted to machine learning practitioners from research institutions and enterprises who are interested in learning how to port code onto novel AI platforms and want to get hands-on access to hardware and SDKs.  


Workshops are by application only and subject to eligibility and availability. The workshops are free, and lunch, shared networking sessions, and access to the Meet and Greet function and keynote are included in the developer pass. If you're a machine learning engineer / AI application developer, please apply using the form in the registration section of the website or by emailing [email protected]. There are approximately 30 spaces available.

Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Data Science
Software Engineering

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Régis Pierrard

Machine Learning Engineer
HuggingFace

Author:

Philipp Schmid

Tech Lead
HuggingFace

On Device ML
Vision
Edge Trade Offs
Software Engineering
Hardware and Systems Engineering

Author:

Todd Vierra

Director, Customer Engagement
BrainChip

Todd brings more than 25 years of engineering and technical sales expertise in chip design, electronic design automation, and intellectual property. He joined BrainChip from ARM, where he was director of field sales engineers for more than 15 years, providing support for ARM processors in the Machine Learning, Internet of Things (IoT), embedded and automotive, client/mobile, and enterprise business divisions. He spent nearly seven years in high-speed ASIC design at Applied Micro Systems and four years at Cadence Design Systems. At Nurlogic Design Inc. and Artisan Components, Todd led the technical sales teams for digital and high-speed analog IP. He holds a BS in Electrical, Electronics, and Communications Engineering and an MBA from Coleman University.

Edge AI
Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Vinesh Sukumar

Senior Director & Head of AI/ML Product Management
Qualcomm

Vinesh Sukumar currently serves as Senior Director and Head of AI/ML Product Management at Qualcomm Technologies, Inc. (QTI). In this role, he leads AI product definition, strategy, and solution deployment across multiple business units.

He has about 20 years of industry experience spanning research, engineering, and application deployment. He holds a doctorate specializing in imaging and vision systems and has also completed a business degree focused on strategy and marketing. He is a regular speaker at many AI industry forums and has authored several journal papers and two technical books.

Author:

Barrie Mullins

VP, Product
Flex Logix

Barrie has 25+ years of experience working with edge, embedded, and AI systems across multiple industries, including industrial, automotive, robotics, storage, and communications. Previously, he spent a year at Blaize as head of marketing and three years at NVIDIA, where he led the Jetson Product Marketing team. Prior to NVIDIA, he held multiple roles at Xilinx, including leading product marketing and management for the Zynq product line, sales enablement, business development, customer program management, and managing design services. Barrie moved to the United States in 2007 from Ireland, where he worked for Xilinx and two start-ups, Raidtec Corp. and Eurologic Systems, in the data storage space, where he holds three patents.

Barrie received his EE from Munster Technological University, an ME from University College Dublin, and an MBA from Santa Clara University’s Leavey School of Business.

Author:

Vinay Palakkode

Senior Staff ML Engineer & Manager
Rivian

Vinay Palakkode is a senior staff machine learning engineer and manages a team of deep learning researchers and engineers at Rivian Automotive’s self-driving organization. Vinay holds a master’s degree in electrical and computer engineering from Carnegie Mellon University. He specializes in perception for robotics and high-performance computing. Vinay held prior engineering and management positions in Apple’s Technology Development Group (TDG) and Special Projects Group (SPG).

Author:

Vamsi Nalluri

Machine Learning HW Architect
Rivian

Vamsi is an ML HW Architect at Rivian with 17 years of experience in the semiconductor industry, working on architecture, verification, and validation.

He was most recently at Xilinx, where he accelerated sparse neural networks to achieve a 3x hardware performance improvement on Xilinx's 7nm flagship technology platform across many industry-standard networks such as ResNet-50, YOLO, and other CNN benchmarks.

Prior to that, he architected and trained dataflow implementations of quantized and mixed-precision neural networks at Intel.

He graduated from IIT Madras with a B.Tech in Electrical Engineering and is a big tennis fan, both playing and watching.

Author:

Hui Wang

Machine Learning Engineer
Schlumberger

Cerebras Systems builds the fastest AI accelerators in the industry. In this talk we will review how the size and scope of massive natural language processing (NLP) models present fundamental challenges to legacy compute and to traditional cloud providers. We will explore the importance of guaranteed node-to-node latency in large clusters, why it can’t be achieved in the cloud, and how its absence prevents linear and even deterministic scaling. We will examine the complexity of distributing NLP models over hundreds or thousands of GPUs and show how quickly and easily a cluster of Cerebras CS-2s is set up, and how linear scaling can be achieved over millions of compute cores with Cerebras technology. Finally, we will show how innovative customers are using clusters of Cerebras CS-2s to train large language models to solve both basic and applied scientific challenges, including understanding the COVID-19 replication mechanism, epigenetic language modelling for drug discovery, and the development of clean energy. This enables researchers to test ideas that may otherwise languish for lack of resources and, ultimately, reduces the cost of curiosity.
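
For context on the GPU-side complexity the talk contrasts against, the sketch below shows the per-process boilerplate that even simple data-parallel training over many GPUs requires with PyTorch DistributedDataParallel. It is a generic illustration only and does not depict Cerebras' own software stack.

# Illustrative only: generic PyTorch DDP boilerplate, launched with
# `torchrun --nproc_per_node=<gpus> this_script.py`. Not Cerebras software.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 1024, device=local_rank)
        loss = model(x).pow(2).mean()          # stand-in objective
        optimizer.zero_grad()
        loss.backward()                        # gradients all-reduced across ranks
        optimizer.step()
        if dist.get_rank() == 0 and step % 5 == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()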

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Andy Hock

VP, Product Management
Cerebras

Dr. Andy Hock is VP of Product Management at Cerebras Systems with responsibility for product strategy. His organization drives engagement with engineering and our customers to inform the hardware, software, and machine learning technical requirements and accelerate world-leading AI with Cerebras’ products. Prior to Cerebras, Andy held senior leadership positions with Arete Associates, Skybox Imaging (acquired by Google), and Google. He holds a PhD in Geophysics and Space Physics from UCLA.
