Flow Computing Podcast Series: Prof. Dr. Jörg Keller on PPU Adaptability and Integration (Episode 4)

In Episode 4 of our Flow Computing podcast series, Prof. Dr. Jörg Keller focuses on the adaptability and integration of the PPU. He explains how its design enables a seamless transition for developers and ensures compatibility with existing CPU architectures.

Professor Keller emphasizes that the PPU is not a standalone unit but works in tandem with traditional CPU cores. This collaborative approach allows for gradual adoption, enabling existing code to run unmodified on CPU cores while developers explore the PPU's potential.

"The PPUs do not exist alone—they have front-end cores, which are normal CPU cores. Using a well-known front-end instruction set architecture from current CPUs simplifies the initial move to a CPU-plus-PPU architecture. Existing code for the CPU will immediately execute on the CPU cores, though without a performance boost from the PPU cores." - Prof. Dr. Jörg Keller

Professor Keller highlights that the transition to a CPU-plus-PPU architecture is designed to be straightforward. With familiar tools and a gradual integration process, adopting PPU technology becomes a practical and appealing choice for developers.

"We have a range of options here. The start comes with a low hurdle—you just have to switch to a new compiler and IDE, which makes the transition relatively straightforward, in my opinion." - Prof. Dr. Jörg Keller

Transcript

JÖRG KELLER: The PPUs do not exist alone—they have front-end cores, which are normal CPU cores. Using a well-known front-end instruction set architecture from current CPUs simplifies the initial move to a CPU-plus-PPU architecture. Existing code for the CPU will immediately execute on the CPU cores, though without a performance boost from the PPU cores. I already mentioned the availability of a binary-to-binary translator for existing executables. This can help speed up those applications because they can utilize both the CPU and PPU cores. When we also have a compiler and IDE, we can work with existing source code. This allows for experimentation at different scales of time investment. We might start with automatic detection of parallelizable patterns and run the Flow compiler.
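The kind of pattern such automatic detection targets can be sketched in plain C. The example below is illustrative only: Flow Computing's actual pattern detection is not public, but loops like this SAXPY kernel, whose iterations are fully independent of one another, are the classic candidates for an auto-parallelizing compiler.

```c
#include <assert.h>
#include <stddef.h>

/* SAXPY: y[i] = a * x[i] + y[i].
 * Every iteration writes a distinct y[i] and reads no result produced
 * by another iteration, so there is no cross-iteration dependency --
 * the textbook pattern an auto-parallelizing compiler can distribute
 * across parallel cores without changing the program's result. */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

A compiler that proves this independence can split the index range across cores automatically; the source code itself stays sequential.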

JK: This process can involve an increasing time investment: recoding some parts, up to hand-tuned kernels written specifically for the PPUs at the extreme end.
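At that hand-tuned end of the spectrum, the programmer partitions the work explicitly instead of relying on the compiler. The PPU kernel interface is not public, so the sketch below uses ordinary POSIX threads on a CPU purely to illustrate what explicit partitioning looks like; the names (`saxpy_parallel`, `saxpy_chunk`, `chunk_t`) are invented for this example.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hand-partitioned SAXPY: the index range is split into contiguous
 * chunks, one per worker thread. Illustrative only -- written against
 * plain POSIX threads, not any actual PPU kernel API. */
typedef struct {
    size_t begin, end;   /* half-open index range [begin, end) */
    float a;
    const float *x;
    float *y;
} chunk_t;

static void *saxpy_chunk(void *arg) {
    chunk_t *c = arg;
    for (size_t i = c->begin; i < c->end; i++)
        c->y[i] = c->a * c->x[i] + c->y[i];
    return NULL;
}

void saxpy_parallel(size_t n, float a, const float *x, float *y,
                    size_t nthreads) {
    pthread_t tid[16];
    chunk_t chunk[16];
    if (nthreads > 16) nthreads = 16;
    size_t per = (n + nthreads - 1) / nthreads;  /* ceil(n / nthreads) */
    for (size_t t = 0; t < nthreads; t++) {
        size_t b = t * per;
        size_t e = (b + per < n) ? b + per : n;
        chunk[t] = (chunk_t){ b, e, a, x, y };
        pthread_create(&tid[t], NULL, saxpy_chunk, &chunk[t]);
    }
    for (size_t t = 0; t < nthreads; t++)
        pthread_join(tid[t], NULL);
}
```

The hand-tuned version gives the programmer control over chunking and scheduling, which is exactly the kind of effort the quote places at the far end of the adoption spectrum.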

JK: We have a range of options here. The start comes with a low hurdle—you just have to switch to a new compiler and IDE, which makes the transition relatively straightforward, in my opinion. It’s not just a wish for the future or something we would like to have. It’s a technology that has already proven such transitions to be doable and feasible.

Curious to learn more about Flow Computing's PPU technology?

Dive into the technical details and insights from Professor Keller’s analysis by requesting access to the full report. Contact us at info@flow-computing.com, and we’ll gladly share it with you!
