Strategy

The size of deep learning and AI models has increased substantially over the past couple of years. With recent advances in NLP/Conversational AI and computer vision applications powered by large-scale models such as BERT, GPT-3, and Vision Transformers (ViT), which have hundreds of millions to billions of parameters, deploying and managing such models is becoming challenging. In this talk, I will go over the state-of-the-art models, their Edge AI applications, deployment concerns, and approaches for leveraging them on edge computing. I will share my experience deploying such large models at Amazon AI, Uber AI, and Got It AI.
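
A minimal sketch of one common edge-deployment technique in the space this abstract covers: post-training quantization of a BERT-scale model. The specific libraries (PyTorch, Hugging Face Transformers) and the model name are illustrative assumptions, not the approach presented in the talk.

```python
# Hypothetical example: shrink a transformer for CPU/edge inference with
# dynamic int8 quantization. Library and model choices are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# Quantize the Linear layers to int8 weights: roughly 4x smaller on disk and
# noticeably faster on CPU-only edge hardware, at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Edge deployment keeps latency and cost low.", return_tensors="pt")
with torch.no_grad():
    prediction = quantized(**inputs).logits.argmax(dim=-1)
print(prediction)
```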

NLP and Speech
Edge Trade Offs
On Device ML
Software Engineering
Data Science
Strategy

Author:

Chandra Khatri

Co-Founder
Got It AI

Chandra Khatri is the Co-Founder of Got It AI, where his team is building the world's first fully autonomous Conversational AI technology. Under his leadership, Got It AI is pushing the boundaries of the Conversational AI ecosystem and delivering the next generation of automation products. Prior to Got It AI, Chandra led several applied research groups at Uber AI, including Conversational AI, Multi-modal AI, and Recommendation Systems.

Prior to Uber AI, he led R&D for the Alexa Prize Competition (Alexa AI) at Amazon, where he had the opportunity to significantly advance the field of Conversational AI, particularly open-domain dialog systems, which are considered the holy grail of Conversational AI and remain one of the open-ended problems in the field. Before Alexa AI, he drove applied research in NLP, Deep Learning, and Recommendation Systems at eBay. He graduated from Georgia Tech with a specialization in Deep Learning in 2015 and holds an undergraduate degree from BITS Pilani, India.

His current areas of research include Artificial and General Intelligence, Reinforcement Learning, Language Understanding, Conversational AI, Multi-modal and Human-agent Interactions, and Introducing Common Sense within Artificial Agents.

Machine vision workloads are complex, and their performance requirements often present challenges in areas such as latency, security, energy use, and reliability. Hierarchical partitioning of those workloads often makes sense, where the machine vision software is split into multiple stages (for example, contrast enhancement, feature extraction, object recognition, threat detection) that run at different layers of the [intelligent camera -> edge node -> MEC -> cloud] hierarchy.

This talk will introduce the hierarchical cloud-edge architecture and discuss the properties and capabilities of its layers. It will propose an example segmentation of machine vision algorithms and investigate the trade-offs of mapping them onto the various layers of processing available in the hierarchy. Finally, it will look at the dual flows of model training and inference for AI applications, discuss which portions of those flows make sense in different edge layers, and explore how they can be secured, orchestrated, and managed.
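
A toy sketch of the kind of hierarchical partitioning the abstract describes: mapping vision pipeline stages onto the camera -> edge node -> MEC -> cloud layers. The stage names come from the abstract, but the compute figures, capacities, and the greedy placement heuristic are hypothetical illustrations, not the scheme proposed in the talk.

```python
# Illustrative only: place pipeline stages at the lowest layer that can run them.
from dataclasses import dataclass

LAYERS = ["intelligent_camera", "edge_node", "mec", "cloud"]

@dataclass
class Stage:
    name: str
    compute_gflops: float   # rough per-frame compute demand (assumed numbers)

# Example four-stage machine vision pipeline from the abstract.
PIPELINE = [
    Stage("contrast_enhancement", 0.5),
    Stage("feature_extraction", 5.0),
    Stage("object_recognition", 50.0),
    Stage("threat_detection", 200.0),
]

# Hypothetical per-layer compute budget (GFLOPs available per frame).
CAPACITY = {"intelligent_camera": 1, "edge_node": 10, "mec": 100, "cloud": 10_000}

def place(pipeline):
    """Greedy placement: push each stage up the hierarchy until a layer can
    afford it; stages run in order, so we never move back down."""
    placement, layer_idx = {}, 0
    for stage in pipeline:
        while CAPACITY[LAYERS[layer_idx]] < stage.compute_gflops:
            layer_idx += 1
        placement[stage.name] = LAYERS[layer_idx]
    return placement

if __name__ == "__main__":
    for stage_name, layer in place(PIPELINE).items():
        print(f"{stage_name:22s} -> {layer}")
```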

Vision
Edge Trade Offs
Software Engineering
Hardware and Systems Engineering
Data Science
Strategy

Author:

Charles Byers

Chief Technology Officer
Industry IoT Consortium and Valqari

The adoption of edge computing in the automotive industry is enormous. So what lessons have been learned there that other industries can adopt and adapt for their own rapid innovation?

Vision
On Device ML
Edge Trade Offs
Software Engineering
Hardware and Systems Engineering
Strategy
Speakers

Author:

Roger Berg

Vice President, North American Research and Development
DENSO International America, Inc.

Roger Berg is Vice President of DENSO’s North American Research and Development group. His latest research interests and responsibilities include next generation connectivity, mobile edge computing, connected automated vehicles and decentralized ledger technologies.

Author:

Gaurav Singh

Product, AI
Ridecell

Gaurav is part of the Nemo product team at Ridecell, where he works on advanced AI-based data analytics for ADAS and AD data. He brings a strong AI and machine learning background to his role, having previously developed software for autonomous driving and computer vision applications. Gaurav holds a Master's in Robotics from Carnegie Mellon University.

Author:

Prashant Tiwari

Former Executive Director, Software Platforms at Volkswagen

Author:

Jake Hillard

CEO and Co-Founder
Red Leader Tech

Jake Hillard is an expert in signal processing and has launched multiple laser satellite missions to space. He is a Thiel Fellow and currently the CEO and Co-Founder of Red Leader Technologies.

Moderator

Author:

Rob Telson

Vice President of Ecosystems & Partnerships
BrainChip

Rob Telson brings over 25 years of sales expertise in licensing intellectual property and selling EDA technology across multiple vertical markets. He has successfully built sales and support organizations at small, midsize, and large companies. At ARM, a global semiconductor and software design company, Rob was Vice President of Foundry Sales worldwide and, prior to that, Vice President of Sales for the Americas. At Synopsys, he was responsible for building and developing a business focused on disruptive technologies in the semiconductor space. Rob has been instrumental in driving growth and demand for BrainChip's technology, first leading Worldwide Sales and Marketing and now developing and leading Ecosystems and Partnerships at BrainChip. Rob holds a BS in political science from the University of Arizona and a Program for Leadership Development certificate from Harvard Business School.
