
Author:

Tamira M. Moon

Director, Health Equity and Diversity Initiatives, Association of State and Territorial Health Officials
CEO and Founder, To the Moon & Back Foundation, Inc.

Systems
Hardware
Infrastructure
Moderator

Author:

Drew Matter

President & CEO
Mikros Technologies

Drew Matter leads Mikros Technologies, a designer and manufacturer of best-in-class direct liquid cold plates for AI/HPC, semiconductor testing, laser & optics, and power electronics. Mikros provides microchannel thermal solutions for single-phase, two-phase, DLC, and immersion systems to leading companies around the world.

Author:

Steve Mills

Mechanical Engineer
Meta

Steve Mills is a Mechanical Engineer who has dedicated over 25 years to the development of IT hardware in the enterprise and hyperscale space. After tours at Dell and Storspeed, he joined Meta in 2012 and is currently a Technical Lead for Data Center and Hardware Interfaces. He also serves on the Open Compute Project Steering Committee, representing the Cooling Environments Project. He holds 48 US patents and has authored eight papers covering the packaging and cooling of electronics.

Author:

Matt Archibald

Director of Technical Architecture – Data Solutions
nVent

Matt Archibald is the Director of Technical Architecture at nVent supporting the data center and networking space. Matt is deeply focused on liquid cooling (close-coupled and direct-to-chip), unified infrastructure management, data center monitoring, and automated data center infrastructure management.

Author:

Vinod Kamath

Distinguished Engineer
Lenovo Infrastructure Solutions Group

For an organization to make effective use of an AI cluster, it is important to consider the entire process of designing, building, deploying, and managing the resource. At each step, an AI cluster presents new and different challenges that even experienced IT team members may not have encountered before. In this presentation, Penguin Solutions CTO Philip Pokorny will explore AI clusters from design to daily management and will speak to:

  • Key considerations when designing an AI cluster
  • Important areas that can compromise AI cluster performance
  • Ways that software solutions like Penguin's unique Scyld ClusterWare can address complexities
  • How to ensure maximum value from your AI cluster investment
Data Center
Systems
Hardware
Infrastructure

Author:

Phil Pokorny

Chief Technology Officer
Penguin Solutions

Phil Pokorny is the Chief Technology Officer (CTO) of SGH / Penguin Solutions. He brings a wealth of engineering experience and customer insight to the design, development, support, and vision for the company's technology solutions.

Phil joined Penguin in February of 2001 as an engineer, and steadily progressed through the organization, taking on more responsibility and influencing the direction of key technology and design decisions. Prior to joining Penguin, he spent 14 years in various engineering and system administration roles with Cummins, Inc. and Cummins Electronics. At Cummins, Phil participated in the development of internal network standards, deployed and managed a multisite network of multiprotocol routers, and supported a diverse mix of office and engineering workers with a variety of server and desktop operating systems.

He has contributed code to Open Source projects, including the Linux kernel, lm_sensors, and LCDproc.

Phil graduated from Rose-Hulman Institute of Technology with Bachelor of Science degrees in math and electrical engineering and a second major in computer science.

Generative AI
Infrastructure

Author:

Sanchit Juneja

Director, Big Data & ML/AI
Booking.com

Sanchit Juneja has 18+ years of leadership experience in tech and product roles across the US, South-east Asia, Africa, and Europe with organizations such as Booking.com, AppsFlyer, GoJek, Rocket Internet, and National Instruments. He is currently Director, Product (Big Data & ML/AI) at Booking.com.

Systems
Hardware
Infrastructure
Moderator

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Author:

Manoj Wadekar

AI Systems Technologist
Meta

Author:

Taeksang Song

CVP
Samsung Electronics

Taeksang is a Corporate VP at Samsung Electronics, where he leads a team dedicated to pioneering cutting-edge technologies including the CXL memory expander, fabric-attached memory solutions, and processing near memory to meet the evolving demands of next-generation data-centric AI architectures. He has almost 20 years of professional experience in memory and sub-system architecture, interconnect protocols, system-on-chip design, and collaboration with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he worked at Rambus Inc., SK hynix, and Micron Technology in lead architect roles for emerging memory controllers and systems.

Taeksang received his Ph.D. from KAIST, South Korea, in 2006. Dr. Song has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.

Author:

Markus Flierl

CVP, Intel Cloud Services
Intel Corporation

Markus joined Intel in early 2022 to lead Intel Cloud Services, which includes Intel Tiber Developer Cloud (ITDC, cloud.intel.com) and Intel Tiber App-Level Optimization (formerly known as Granulate). Intel Tiber Developer Cloud provides a range of cloud services based on Intel's latest pre-production and production hardware and software, with a focus on AI workloads. ITDC hosts large production workloads for companies such as seekr and Prediction Guard. Before joining Intel, Markus built out NVIDIA's GPU cloud infrastructure services leveraging cutting-edge NVIDIA and open-source technologies. Today that infrastructure is the foundation for NVIDIA's GeForce NOW cloud gaming service, which has become the leader in cloud gaming with over 25 million registered users globally, as well as for NVIDIA's DGX Cloud and edge computing workloads like NVIDIA Omniverse™. Prior to that, Markus led product strategy and product development for private and public cloud infrastructure and storage software at Oracle Corporation and Sun Microsystems.

Markus Flierl

CVP Intel Cloud Services
Intel, Corp

Markus joined Intel in early 2022 to lead Intel Cloud Services which includes Intel Tiber Developer Cloud (ITDC/ cloud.intel.com), Intel Tiber App-Level Optimization (formerly known as Granulate). Intel Tiber Developer Cloud provides a range of cloud services based on Intel latest pre-production and production hardware and software with focus on AI workloads. ITDC hosts large production workloads for companies such as seekr or Prediction Guard. Before joining Intel Markus built out NVIDIA’s GPU cloud infrastructure services leveraging cutting edge NVIDIA and open source technologies. Today it is the foundation for NVIDIA’s GeForce Now cloud gaming service which has become the leader in cloud gaming with over 25 million registered users globally as well as NVIDIA’s DGX cloud and edge computing workloads like NVIDIA Omniverse™. Prior to that Markus led product strategy and product development of private and public cloud infrastructure and storage software at Oracle Corporation and Sun Microsystems.