
Tamira M. Moon

Michael Bagel

Drew Matter
Drew Matter leads Mikros Technologies, a designer and manufacturer of best-in-class direct liquid cold plates for AI/HPC, semiconductor testing, laser and optics, and power electronics. Mikros provides leading microchannel thermal solutions across single-phase, two-phase, direct-to-chip, and immersion systems to leading companies around the world.

Steve Mills
Steve Mills is a Mechanical Engineer who has dedicated over 25 years to the development of IT hardware in the enterprise and hyperscale space. After tours at Dell and Storspeed, he joined Meta in 2012 and is currently a Technical Lead for Data Center and Hardware Interfaces. He also serves on the Open Compute Project Steering Committee, representing the Cooling Environments Project. He holds 48 US patents and has authored eight papers covering the packaging and cooling of electronics.

Matt Archibald
Matt Archibald is the Director of Technical Architecture at nVent supporting the data center and networking space. Matt is deeply focused on liquid cooling (close-coupled and direct-to-chip), unified infrastructure management, data center monitoring, and automated data center infrastructure management.

Vinod Kamath
Mikros Technologies
Website: https://www.mikrostechnologies.com/
Mikros Technologies provides industry-leading liquid cooling to a variety of HPC and data center markets and is considered a best-in-class solution by next-gen chip designers. Their high-effectiveness heat transfer empowers designers to improve the performance, packaging, and reliability of a wide range of complex systems. Mikros Technologies' liquid cooling options offer low pressure drops and low flow rates, delivering superior performance while consuming less energy. Mikros Technologies liquid cooling solutions can help your data center meet operating goals and create more power bandwidth for chip designers, AI algorithms, and more.
For an organization to make effective use of an AI cluster, it is important to consider the entire process of designing, building, deploying, and managing the resource. At each step, an AI cluster presents new and different challenges that even experienced IT team members may not have encountered before. In this presentation, Penguin Solutions CTO Philip Pokorny will explore AI clusters from design to daily management and will speak to:
- Key considerations when designing an AI cluster
- Important areas that can compromise AI cluster performance
- Ways that software solutions like Penguin's unique Scyld ClusterWare can address complexities
- How to ensure maximum value from your AI cluster investment

Phil Pokorny
Phil Pokorny is the Chief Technology Officer (CTO) for SGH / Penguin Solutions. He brings a wealth of engineering experience and customer insight to the design, development, support, and vision for our technology solutions.
Phil joined Penguin in February of 2001 as an engineer, and steadily progressed through the organization, taking on more responsibility and influencing the direction of key technology and design decisions. Prior to joining Penguin, he spent 14 years in various engineering and system administration roles with Cummins, Inc. and Cummins Electronics. At Cummins, Phil participated in the development of internal network standards, deployed and managed a multisite network of multiprotocol routers, and supported a diverse mix of office and engineering workers with a variety of server and desktop operating systems.
He has contributed code to Open Source projects, including the Linux kernel, lm_sensors, and LCDproc.
Phil graduated from Rose-Hulman Institute of Technology with Bachelor of Science degrees in math and electrical engineering, with a second major in computer science.
Penguin Solutions
Website: https://www.penguinsolutions.com/
Penguin Solutions designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale. With 25+ years of HPC experience – and more than 75,000 GPUs deployed and managed to date – Penguin is a trusted strategic partner for AI and HPC solutions and services for leading organizations around the world.
Designing, deploying, and operating “AI factories” is an incredibly complex endeavor, and Penguin has been successfully delivering AI factories at scale since 2017. The company’s OriginAI infrastructure, backed by Penguin's specialized intelligent cluster management software and expert services, streamlines AI implementation and management and enables predictable AI cluster performance that supports customers’ business needs and return-on-investment goals for clusters ranging in size from hundreds to thousands of GPUs.
The OriginAI solution builds on Penguin’s extensive AI infrastructure expertise to reduce complexity and accelerate return on investment, providing CEOs and CIOs alike the essential and reliable infrastructure they need to deploy and manage demanding AI workloads at scale in the data center and at the edge. To learn more visit their website at: https://www.penguinsolutions.com. Follow Penguin Solutions on LinkedIn, Twitter, YouTube, and Facebook.

Sanchit Juneja
Sanchit Juneja has 18+ years of leadership experience in tech and product roles across the US, South-east Asia, Africa, and Europe with organizations such as Booking.com, AppsFlyer, GoJek, Rocket Internet, and National Instruments. He is currently Director of Product (Big Data & ML/AI) at Booking.com.

Steven Woo
I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.
As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.
For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.
I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.
After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.
Education
- Ph.D., Electrical Engineering, Stanford University
- M.S. Electrical Engineering, Stanford University
- Master of Engineering, Harvey Mudd College
- B.S. Engineering, Harvey Mudd College

Manoj Wadekar

Taeksang Song
Taeksang is a Corporate VP at Samsung Electronics, where he leads a team dedicated to pioneering cutting-edge technologies including the CXL memory expander, fabric-attached memory solutions, and processing near memory to meet the evolving demands of next-generation data-centric AI architecture. He has almost 20 years of professional experience in memory and sub-system architecture, interconnect protocols, system-on-chip design, and collaboration with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he worked at Rambus Inc., SK hynix, and Micron Technology in lead architect roles for emerging memory controllers and systems.
Taeksang received his Ph.D. degree from KAIST, South Korea, in 2006. Dr. Song has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.

Markus Flierl
Markus joined Intel in early 2022 to lead Intel Cloud Services, which includes Intel Tiber Developer Cloud (ITDC, cloud.intel.com) and Intel Tiber App-Level Optimization (formerly known as Granulate). Intel Tiber Developer Cloud provides a range of cloud services based on Intel's latest pre-production and production hardware and software, with a focus on AI workloads. ITDC hosts large production workloads for companies such as seekr and Prediction Guard. Before joining Intel, Markus built out NVIDIA’s GPU cloud infrastructure services leveraging cutting-edge NVIDIA and open source technologies. Today it is the foundation for NVIDIA’s GeForce Now cloud gaming service, which has become the leader in cloud gaming with over 25 million registered users globally, as well as NVIDIA’s DGX Cloud and edge computing workloads like NVIDIA Omniverse™. Prior to that, Markus led product strategy and product development of private and public cloud infrastructure and storage software at Oracle Corporation and Sun Microsystems.
Rambus
Website: https://www.rambus.com/
Rambus is a provider of industry-leading chips and silicon IP making data faster and safer. With over 30 years of advanced semiconductor experience, we are a pioneer in high-performance memory subsystems that solve the bottleneck between memory and processing for data-intensive systems. Whether in the cloud, at the edge or in your hand, real-time and immersive applications depend on data throughput and integrity. Rambus products and innovations deliver the increased bandwidth, capacity and security required to meet the world’s data needs and drive ever-greater end-user experiences. For more information, visit rambus.com.