Systems Architecture

The leading companies at every level of the data center value chain, from IP and chip suppliers to platform vendors, system OEMs, and hyperscalers, are coalescing around CXL technology as a path to revolutionizing the data center. With workload demands increasing rapidly, the need for more memory bandwidth and capacity continues to rise. Memory is of critical importance, and its share of the server bill of materials (BoM) continues to grow, so making the best use of this vital resource is an imperative. With CXL, the industry is pursuing tiered-memory solutions that can break through the memory bottleneck while delivering greater efficiency and improved TCO. Ultimately, CXL technology can support composable architectures that match compute, memory, and storage on demand to the needs of a wide range of advanced workloads.
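
One concrete way software reaches a CXL memory tier today: Linux typically enumerates a CXL memory expander as a CPU-less NUMA node, so tiered placement can be prototyped with the standard NUMA API before any CXL-specific tooling is involved. The following is a minimal sketch under that assumption; the node ID is a placeholder of ours, not something specified above (check "numactl --hardware" on the target system).

```c
/* Minimal tiering sketch, assuming the CXL expander shows up as a CPU-less
 * NUMA node (node 1 here is a placeholder; verify with `numactl --hardware`).
 * Build: gcc cxl_tier.c -o cxl_tier -lnuma
 */
#include <stdio.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    size_t size = 1UL << 30;   /* 1 GiB working set */
    int cxl_node = 1;          /* assumed node ID for the CXL memory tier */

    /* Capacity-hungry, latency-tolerant data goes to the far (CXL) tier... */
    char *cold = numa_alloc_onnode(size, cxl_node);
    /* ...while hot data stays in local DRAM on the calling thread's node. */
    char *hot = numa_alloc_local(size);

    if (!cold || !hot) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    cold[0] = hot[0] = 0;      /* touch a page in each tier to fault it in */
    printf("hot tier @ %p, cold (CXL) tier @ %p\n", (void *)hot, (void *)cold);

    numa_free(cold, size);
    numa_free(hot, size);
    return 0;
}
```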

CXL
Embedded Memory
Emerging Memories
Hardware Eng.
Memory Systems Eng.
Systems Architecture

Author:

Mark Orthodoxou

VP, Strategic Marketing, Datacenter Product
Rambus

Mark Orthodoxou is the Vice President of Strategic Marketing for Rambus’ Datacenter Products Group. Mark has over 25 years of experience in product management and strategic planning in the semiconductor industry across multiple technology disciplines, including enterprise storage, data center compute, memory subsystems and networking. Mark has evangelized the benefits of serial-attached memory since long before CXL was introduced as a standard and was responsible for the introduction of the first commercially available products in this space. Mark currently sits on the CXL Consortium Marketing Working Group. He has held various leadership positions at Microchip, Microsemi, PMC-Sierra, and IDT.

Enterprise knowledge graphs (EKGs) offer the ability to store large connected datasets in memory for fast traversal using simple pointer-hopping instructions. However, keeping hundreds or thousands of cores fed with traversal data has become one of the key challenges for artificial intelligence and analytics. Despite the exponential growth in graph databases, we have yet to see hardware tuned to graph analytics workloads. In this session we will review the requirements for EKGs and provide a roadmap of how new memory hardware can be used to solve EKG challenges.
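
The "pointer hopping" mentioned above is easiest to see in code. Below is an illustrative sketch of ours, not the speaker's material: each hop is a dependent load whose address is known only after the previous load completes, so traversal speed is governed by memory latency rather than compute, which is exactly why keeping many cores fed is hard.

```c
/* Illustrative pointer-chasing graph traversal (a sketch, not EKG code).
 * Each hop dereferences a pointer discovered by the previous hop, forming a
 * dependent-load chain that caches and prefetchers handle poorly.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int id;
    int degree;
    struct Node **neighbors;       /* adjacency stored as raw pointers */
} Node;

/* Follow one outgoing edge per step: the core "pointer hop" loop. */
long chase(Node *start, int hops) {
    long sum = 0;
    Node *cur = start;
    for (int i = 0; i < hops && cur->degree > 0; i++) {
        cur = cur->neighbors[0];   /* next address known only after this load */
        sum += cur->id;
    }
    return sum;
}

int main(void) {
    enum { N = 4 };
    Node *nodes = calloc(N, sizeof(Node));
    for (int i = 0; i < N; i++) {
        nodes[i].id = i;
        nodes[i].degree = 1;
        nodes[i].neighbors = malloc(sizeof(Node *));
        nodes[i].neighbors[0] = &nodes[(i + 1) % N];  /* a simple ring */
    }
    printf("checksum after 8 hops: %ld\n", chase(&nodes[0], 8));
    for (int i = 0; i < N; i++) free(nodes[i].neighbors);
    free(nodes);
    return 0;
}
```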

Embedded Memory
External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Dan McCreary

Distinguished Engineer, Graph & AI
Optum

Dan is a distinguished engineer in AI working on innovative database architectures including document and graph databases. He has a strong background in semantics, ontologies, NLP, and search. He is a hands-on architect and likes to build his own pilot applications using new technologies. Dan started the NoSQL Now! Conference (now called the Database Now! Conferences). He also co-authored the book Making Sense of NoSQL, one of the highest-rated books on Amazon on the topic of NoSQL. Dan was a VLSI circuit designer at Bell Labs, where he worked with Brian Kernighan (of K&R C). Dan also worked with Steve Jobs at NeXT Computer.

The AI 'memory wall' is well documented: AI/ML applications are hitting bottlenecks in intra-chip and inter-chip communication and in data movement to and from AI/ML accelerators. The memory required to train AI/ML models is typically several times larger than the parameter count alone, while the speed of data transfer has consistently failed to keep up with advances in compute capability.

In the innovative world of dedicated AI/ML processors and systems, a variety of approaches have emerged, at both the chip and system level, to engineer around these challenges. This panel will look at how leading engineering teams in this space are tackling the AI memory wall, and how they see requirements shifting as new types of models evolve.
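
To make "several times larger than the number of parameters" concrete, here is a widely used back-of-the-envelope estimate. It assumes mixed-precision training with the Adam optimizer; those assumptions are ours for illustration, not figures from the panel.

```latex
% Approximate training-state footprint per parameter
% (assumes fp16 weights/gradients plus fp32 Adam state):
\[
\text{bytes/parameter} \approx
\underbrace{2 + 2}_{\text{fp16 weights + gradients}}
+ \underbrace{4 + 4 + 4}_{\text{fp32 master weights + Adam moments}}
= 16
\]
\[
16~\text{B} \times 7 \times 10^{9}~\text{parameters} \approx 112~\text{GB}
\]
```

Activation memory comes on top of this and often dominates at long sequence lengths, so the working set outruns on-package memory quickly.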

Embedded Memory
Emerging Memories
External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Systems Architecture
Moderator

Author:

Jean Bozman

President
Cloud Architects Advisors, LLC

Jean S. Bozman is an IT industry analyst focusing on cloud infrastructure and the proud founder of a new company, Cloud Architects Advisors LLC.

She spent more than 10 years as a Research Vice President at IDC and has covered the semiconductor industry as an analyst for over 20 years.

Panellists

Author:

Sumti Jairath

Chief Architect
SambaNova Systems

Sumti Jairath is Chief Architect at SambaNova Systems, with expertise in hardware-software co-design. Sumti worked on PA-RISC-based Superdome servers at HP, followed by several generations of SPARC CMT processors at Sun Microsystems and Oracle. At Oracle, Sumti worked on SQL, data analytics, and machine learning acceleration in SPARC processors. Sumti holds 27 patents in computer architecture and hardware-software co-design.

Author:

Venkatram Vishwanath

Data Science Team Lead
Argonne National Laboratory

Venkatram Vishwanath is a computer scientist at Argonne National Laboratory. He is the Data Science Team Lead at the Argonne Leadership Computing Facility (ALCF). His current focus is on algorithms, system software, and workflows to facilitate data-centric applications on supercomputing systems. His interests include scientific applications, supercomputing architectures, parallel algorithms and runtimes, scalable analytics, and collaborative workspaces. He has received best paper awards at venues including HPDC and LDAV, and was a Gordon Bell finalist. Vishwanath received his Ph.D. in computer science from the University of Illinois at Chicago in 2009.

Research Interests

  • Scientific data analysis and visualization
  • Parallel I/O and I/O middleware
  • Large-scale computing systems and other exotic architectures (Blue Gene, Cray, multi-core systems, GPUs and other accelerators)
  • High-speed interconnects (InfiniBand, high-speed Ethernet, optical), data movement and transfer protocols
  • Collaboration workspaces

With current trends in DRAM capacities and costs, in-memory database technology is rapidly becoming mainstream. Oracle Database In-Memory features a unique dual-format in-memory architecture designed to optimize the performance of simultaneous analytic and transactional workloads on the same data. In addition, the in-memory columnar technology of Oracle Database In-Memory is also available within the storage tier of the Exadata database machine, allowing effective in-memory columnar capacities to approach hundreds of terabytes. In-memory processing is about more than speed; it enables a fundamental transformation in business processes. Just as air travel did more than make journeys faster, it fundamentally changed society; likewise, Oracle Database In-Memory enables not just faster analytics and transactions but a fundamental rethinking and drastic simplification of traditional analytic architectures. In this session we will show how, especially when combined with Oracle's many converged database capabilities, Database In-Memory allows the development of a new class of real-time enterprise applications with significant reductions in cost and complexity, while providing unmatched performance across a wide range of workloads.
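
As a rough illustration of why a dual-format design pays off, the sketch below (our own, not Oracle's implementation) contrasts row-major access for transactional lookups with a columnar copy of the same data for analytic scans: the scan streams through contiguous memory instead of striding over unused fields.

```c
/* Row format vs. an in-memory columnar copy (an illustrative sketch only).
 * OLTP touches whole rows; analytic scans touch one attribute across many
 * rows, which the columnar layout serves with dense sequential reads.
 */
#include <stdio.h>

#define N 8

struct row { int id; double price; int qty; };   /* row-major record */

int main(void) {
    struct row rows[N];
    double price_col[N];                 /* columnar copy of one attribute */

    for (int i = 0; i < N; i++) {
        rows[i] = (struct row){ .id = i, .price = 10.0 + i, .qty = i % 3 };
        price_col[i] = rows[i].price;    /* maintain the in-memory column */
    }

    /* Transactional point access: one row, all attributes. */
    printf("row 3: id=%d price=%.1f qty=%d\n",
           rows[3].id, rows[3].price, rows[3].qty);

    /* Analytic scan: reads contiguous doubles instead of striding over
     * the unused id/qty fields embedded in each row. */
    double total = 0.0;
    for (int i = 0; i < N; i++) total += price_col[i];
    printf("sum(price) = %.1f\n", total);
    return 0;
}
```

Maintaining both layouts for the same data is the essence of the dual-format idea: point lookups stay row-oriented while scans stream through column vectors.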

Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Tirthankar Lahiri

SVP, Data & In-Memory Technologies
Oracle

Tirthankar Lahiri is Vice President of the Data and In-Memory Technologies group for Oracle Database and is responsible for the Oracle Database Engine (including Database In-Memory, Data and Indexes, Space Management, Transactions, and the Database File System), the Oracle TimesTen In-Memory Database, and Oracle NoSQL Database. Tirthankar has 22 years of experience in the database industry and has worked extensively in a variety of areas including manageability, performance, scalability, high availability, caching, distributed concurrency control, in-memory data management, and NoSQL architectures. He has 27 issued patents and several pending in these areas. Tirthankar has a B.Tech in Computer Science from the Indian Institute of Technology (Kharagpur) and an MS in Electrical Engineering from Stanford University.

As the cost of sequencing drops and the quantity of data produced by sequencing grows, the amount of processing dedicated to genomics is increasing at a rapid pace. Genomics is evolving in a number of directions simultaneously. Some key applications scale naturally to use resources available in the cloud, while other computations benefit from on-prem acceleration using FPGAs or GPUs. All of these computations strain the bandwidth and capacity of available resources. In this talk, Roche's Tom Sheffler will share an overview of the memory-bound challenges present in genomics and venture some possible solutions.

External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Tom Sheffler

Solution Architect, Next Generation Sequencing
Roche

Tom earned his PhD from Carnegie Mellon in Computer Engineering with a focus on parallel computing architectures and programming models. His interest in high-performance computing took him to NASA Ames, and then to Rambus, where he worked on accelerated memory interfaces for providing high bandwidth. Following that, he co-founded the cloud video analytics company Sensr.net, which applied scalable cloud computing to analyzing large streams of video data. He later joined Roche to work on next-generation sequencing and scalable genomics analysis platforms. Throughout his career, Tom has focused on the application of high-performance computer systems to real-world problems.

Recent work by the National Energy Technology Laboratory and Cerebras Systems has underscored the critical need for sufficient bandwidth and low latency in scientific computing. In this work, the team demonstrated (to the best of our knowledge) the fastest solution of field equations in computing history. These remarkable results were made possible by the trifecta of great memory bandwidth, great interconnect bandwidth, and amazing (one clock cycle) communication latency and injection rate for small messages. Furthermore, the bandwidths are sufficiently high that no memory hierarchy is necessary, which significantly simplifies programming models and reduces software development effort and expense. In this talk we will outline the conditions necessary to achieve these results.
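
For readers unfamiliar with the workload class, the kernel behind such field-equation solvers is a stencil sweep. The sketch below is ours (the NETL/Cerebras work targets a wafer-scale machine and is far more elaborate): each grid update performs a handful of flops per pair of memory streams, so on conventional cache-based machines the loop is bandwidth-bound, which is what makes the no-memory-hierarchy result above so striking.

```c
/* Illustrative 1-D 3-point stencil (explicit diffusion step); a sketch, not
 * the NETL/Cerebras code. ~4 flops per point against two memory streams
 * (read u, write v) keeps arithmetic intensity low and bandwidth dominant.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 1024
#define STEPS 100

int main(void) {
    double *u = malloc(N * sizeof(double));
    double *v = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) u[i] = (i == N / 2) ? 1.0 : 0.0;

    for (int t = 0; t < STEPS; t++) {
        for (int i = 1; i < N - 1; i++)
            v[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        v[0] = u[0];
        v[N - 1] = u[N - 1];             /* fixed boundary values */
        double *tmp = u; u = v; v = tmp; /* swap time levels */
    }

    printf("u[N/2] = %f\n", u[N / 2]);
    free(u);
    free(v);
    return 0;
}
```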

Embedded Memory
External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Dirk Van Essendelft

HPC & AI Architect
National Energy Technology Laboratory

Dr. Van Essendelft is the principal investigator for the integration of AI/ML with scientific simulations within the Computational Device Engineering Team at the National Energy Technology Laboratory. The focus of Dr. Van Essendelft's work is building a comprehensive hardware and software ecosystem that maximizes the speed, accuracy, and energy efficiency of AI/ML-accelerated scientific simulations. Currently, his work centers on building computational fluid dynamics capability within the TensorFlow framework, generating AI/ML-based predictors, and ensuring the ecosystem is compatible with the fastest accelerators and processors in industry. In this way, Dr. Van Essendelft is developing NETL's first cognitive-in-the-loop simulation capability, in which AI/ML models can be used at any point to bring acceleration and/or closures in new ways. Dr. Van Essendelft sits on the Technical Advisory Group for NETL's new Science-Based Artificial Intelligence/Machine Learning Institute (SAMI) and holds degrees in Energy and Geo-Environmental Engineering, Chemical and Biochemical Engineering, and Chemical Engineering from the Pennsylvania State University, the University of California, Irvine, and Calvin College, respectively.

Recent publications:

  • Rocki, K., Van Essendelft, D., Sharapov, I., Schreiber, R., Morrison, M., Kibardin, V., Portnoy, A., Dietiker, J. F., Syamlal, M., and James, M. (2020). Fast stencil-code computation on a wafer-scale processor. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-14. IEEE Press, Atlanta, Georgia.

Working back from the question, "What do future systems architectures need to look like?", this panel will investigate the current memory, bandwidth, and latency bottlenecks in systems today, comparing and contrasting data center and HPC examples. In discussing the characteristics, similarities, and differences between various server workloads and use cases, such as AI/ML co-design and the acceleration of scientific workloads, the panel will attempt to establish context for why memory innovation is so important.

Emerging Memories
External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture
Moderator

Author:

Rob Ober

Chief Platform Architect
NVIDIA

Rob is NVIDIA’s data center Chief Platform Architect, working with Hyperscalers to build GPU clusters for AI and Deep Learning, develop systems and platform architecture, and influence the HW and SW GPU roadmaps at NVIDIA. His interest in AI and DL was driven by its impact on computer science and computer architecture.

With more than 35 years of experience, Rob was Senior Fellow of Enterprise Technology at SanDisk/FusionIO; Corporate Fellow and Chief Architect at LSI; Fellow and Architect at AMD; Chief Architect at Infineon; and Manager of Technologies at Apple Computer, as well as a designer of supercomputers, mainframes, and networks.

Rob has over 40 international patents in processor architecture, storage systems, SSDs, networks, wireless, power management, and mobile devices. He has developed architectures and implementations of CRAY, ARM, PowerPC, ARC, SPARC, TriCore, and x86 processors.

Author:

Nick Wright

Chief Architect & Head, Advanced Technology Group
NERSC

Nick Wright is the advanced technologies group lead and the NERSC chief architect. He focuses on evaluating future technologies for potential application in scientific computing. He led the effort to optimize the architecture of the Perlmutter machine, the first NERSC platform designed to meet the needs of both large-scale simulation and data analysis from experimental facilities. Before moving to NERSC, he was a member of the Performance Modeling and Characterization (PMaC) group at the San Diego Supercomputer Center. He earned both his undergraduate and doctoral degrees in chemistry at the University of Durham in England.

Author:

Zaid Kahn

GM, Cloud AI & Advanced Systems Engineering
Microsoft

Zaid is currently a GM in Cloud Hardware Infrastructure Engineering, where he leads a team focusing on advanced architecture and engineering efforts for AI. He is passionate about building balanced teams of artists and soldiers that solve incredibly difficult problems at scale.

Prior to Microsoft, Zaid was head of infrastructure engineering at LinkedIn, responsible for all aspects of engineering for data centers, compute, networking, storage, and hardware. He also led several software development teams spanning BMC, network operating systems, and server and network fleet automation, as well as SDN efforts inside the data center and across the global backbone, including the edge. He introduced the concept of disaggregation inside LinkedIn and pioneered JDM with multiple vendors through key initiatives like OpenSwitch and Open19, essentially giving LinkedIn control of its own hardware destiny. During his 9-year tenure at LinkedIn, his team scaled the network and systems 150X as membership grew from 50M to 675M, with someone being hired every 7 seconds on the LinkedIn platform.

Prior to LinkedIn, Zaid was a network architect at WebEx, responsible for building the MediaTone network, and later built a startup that developed a pattern-recognition security chip using an NPU/FPGA. Zaid holds several patents in networking and SDN and is a recognized industry leader. He previously served as a board member of the Open19 Foundation and the San Francisco chapter of the Internet Society. Currently he serves on the DE-CIX and Pensando advisory boards.

Author:

David Emberson

Senior Distinguished Technologist
HPE

David Emberson is Senior Distinguished Technologist for HPC System Architecture, where he is working on future memory system designs for HPE Cray systems. He began his career at MIT's Digital Systems Laboratory, where he built one of the first portable computers in 1975. He has held positions at Prime Computer, Megatest, Ametek Computer Research, and Sun Microsystems. At Sun, Mr. Emberson was a member of the SPARC architecture committee, managed the SparcStation 10 and SparcStation 20 programs, and was Senior Director at SunLabs. His consulting clients have included the Hypertransport Consortium, AMD, Intel, Atheros, PathScale, Qlogic and numerous startup companies.

At HPE he was Technical Director of HPE's PathForward program for the Department of Energy's Exascale Computing Program. His current research is in memory system design for HPC systems. He serves on the JEDEC J42.2 (HBM) committee and is a Senior Member of IEEE. Mr. Emberson has a B.S. in Electrical Engineering from MIT. He holds nineteen patents.

Author:

Uri Rosenberg

Specialist Technical Manager, AI/ML
Amazon Web Services

Uri Rosenberg is the Specialist Technical Manager of AI & ML services within enterprise support at Amazon Web Services (AWS) EMEA. Uri works to empower enterprise customers on all things ML: from underwater computer vision models that monitor fish to training models on satellite images in space; from optimizing costs to strategic discussions on deep learning and ethics. Uri brings his extensive experience to drive success of customers at all stages of ML adoption.

Before AWS, Uri led the ML projects at AT&T innovation center in Israel, working on deep learning models with extreme security and privacy constraints.

Uri is also an AWS certified lead machine learning subject matter expert and holds an MSc in Computer Science from Tel-Aviv Academic College, where his research focused on large-scale deep learning models.

Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Dimitri Kusnezov

Under Secretary for Science & Technology
Department of Homeland Security

Dr. Dimitri Kusnezov [Kooz-NETS-off] was confirmed as the Under Secretary for the Science and Technology Directorate (S&T) on September 8, 2022. As the science advisor to the Homeland Security Secretary, Dr. Kusnezov heads the research, development, innovation and testing and evaluation activities in support of the Department of Homeland Security’s (DHS) operational Components and first responders across the nation. S&T is responsible for identifying operational gaps, conceptualizing art-of-the-possible solutions, and delivering operational results that improve the security and resilience of the nation. 

Prior to DHS, Dr. Kusnezov was a theoretical physicist working at the U.S. Department of Energy (DOE) focusing on emerging technologies. He served in numerous positions, including the Deputy Under Secretary for Artificial Intelligence (AI) & Technology where he led efforts to drive AI innovation and bring it into DOE missions, business and operations, including through the creation of a new AI Office. 

Dr. Kusnezov has served in scientific and national security positions, including Senior Advisor to the Secretary of Energy, Chief Scientist for the National Nuclear Security Administration, Director of Advanced Simulation and Computing, and Director of the multi-billion-dollar National Security Science, Technology and Engineering programs. He created numerous programs, including for Minority Serving Institutions, international partners, the private sector, and philanthropic entities. He has worked across agencies to deliver major milestones such as DOE's 10-year grand challenge for a 100-teraflop supercomputer, and first-of-their-kind, world's-fastest supercomputers.

Prior to DOE and his pursuit of public service, Dr. Kusnezov had a long career in academia, where he published more than 100 articles and edited two books. He joined the Yale University faculty, where he was a professor of theoretical physics for more than a decade, and served as a visiting professor at numerous universities around the world. Before this post, Dr. Kusnezov did a brief postdoc and was an instructor at Michigan State University, following a year of research at the Institut für Kernphysik, KFA-Jülich, in Germany. He earned his MS in Physics and Ph.D. in Theoretical Nuclear Physics at Princeton University and received Bachelor of Arts degrees in Physics and in Pure Mathematics with highest honors from UC Berkeley.

 

Emerging Memories
External Memory
Systems Design
Use Case
Hardware Eng.
Memory Systems Eng.
Software Eng.
Systems Architecture

Author:

Zaid Kahn

VP & GM, Cloud AI & Advanced Systems
Microsoft

Zaid is currently a GM in Cloud Hardware Infrastructure Engineering, where he leads a team focusing on advanced architecture and engineering efforts for AI. He is passionate about building balanced teams of artists and soldiers that solve incredibly difficult problems at scale.

Prior to Microsoft, Zaid was head of infrastructure engineering at LinkedIn, responsible for all aspects of engineering for data centers, compute, networking, storage, and hardware. He also led several software development teams spanning BMC, network operating systems, and server and network fleet automation, as well as SDN efforts inside the data center and across the global backbone, including the edge. He introduced the concept of disaggregation inside LinkedIn and pioneered JDM with multiple vendors through key initiatives like OpenSwitch and Open19, essentially giving LinkedIn control of its own hardware destiny. During his 9-year tenure at LinkedIn, his team scaled the network and systems 150X as membership grew from 50M to 675M, with someone being hired every 7 seconds on the LinkedIn platform.

Prior to LinkedIn, Zaid was a network architect at WebEx, responsible for building the MediaTone network, and later built a startup that developed a pattern-recognition security chip using an NPU/FPGA. Zaid holds several patents in networking and SDN and is a recognized industry leader. He previously served as a board member of the Open19 Foundation and the San Francisco chapter of the Internet Society. Currently he serves on the DE-CIX and Pensando advisory boards.
