Flow Computing Podcast Series: Prof. Dr. Jörg Keller on PPU Adaptability and Integration (Episode 4)

In Episode 4 of our Flow Computing podcast series, Prof. Dr. Jörg Keller shifts focus to the adaptability and integration of the PPU. He explains how the PPU's design facilitates a smooth transition for developers and its compatibility with existing CPU architectures.

Professor Keller emphasizes that the PPU is not a standalone unit but works together with traditional CPU cores. This approach allows for a gradual adoption of PPU technology, where existing code can run unmodified on the CPU cores while developers explore the potential of the PPU.

"So the PPUs, they do not exist alone. They have front end cores which are normal CPU cores. So when we use a well-known front end instruction architecture of some current CPU, this simplifies the start. When you go to CPU plus PPU architecture, existing codes for the CPU will immediately execute on the CPU cores..." - Prof. Dr. Jörg Keller

As Professor Keller comments, the transition to a CPU plus PPU architecture is designed to be straightforward. The availability of familiar tools and the gradual integration process make the adoption of PPU technology a feasible and attractive option for developers.

"So we have a scale of options here. And as the start is rather with a low hurdle, you just have to switch to a new compiler and IDE that will make transition not too difficult in my opinion." - Prof. Dr. Jörg Keller

Transcript

JÖRG KELLER: So the PPUs, they do not exist alone. They have front end cores which are normal CPU cores. So when we use a well-known front end instruction architecture of some current CPU, this simplifies the start. When you go to CPU plus PPU architecture, existing codes for the CPU will immediately execute on the CPU cores, yet without a performance boost from the PPU cores. I already mentioned the availability of a binary-to-binary translator for existing executables, and this can help to speed up those applications, because then they can make use both of the CPU cores and of the PPU cores. When we also have a compiler and IDE, we can work with the source codes that we have. And this will allow us to experiment at different scales of time investment: we might start with an automatic detection of parallelizable patterns and run the Flow compiler.

JK: And this might go with more and more time investment, recoding some things, up to hand-tuned kernels written specifically for the PPUs on the extreme end.

JK: So we have a scale of options here. And as the start comes with a rather low hurdle, you just have to switch to a new compiler and IDE. That will make the transition not too difficult, in my opinion. It's not something that is kind of a wish for the future that we would like to have. It's technology that has already proven itself as doable and feasible to help make such transitions possible.
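To make the low-effort end of that scale concrete, the sketch below shows the kind of simple, data-parallel loop that auto-parallelizing compilers typically recognize: every iteration is independent, so a toolchain can distribute the work across cores without changes to the source. This is an illustrative example in plain C, not code from Flow Computing's toolchain; the function and variable names are our own.

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* A classic parallelizable pattern: no loop-carried dependencies,
 * so each iteration can run on a different core. An auto-parallelizing
 * compiler can detect and distribute this loop without source changes.
 * Names and sizes here are illustrative only. */
static void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];   /* independent per-element update */
    }
}

int main(void)
{
    enum { N = 1 << 20 };
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return EXIT_FAILURE;

    for (size_t i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(N, 3.0f, x, y);          /* unchanged source code */
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

    free(x);
    free(y);
    return EXIT_SUCCESS;
}
```

Recompiling code of this shape with an auto-parallelizing toolchain requires no source changes, which is the low hurdle Professor Keller describes; the "extreme end" of the scale would mean rewriting such a loop by hand as a kernel tuned specifically for the PPU, which we do not sketch here because the PPU-specific interfaces are not part of the public material.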

Interested in a deeper dive into Flow Computing's PPU technology?

If you'd like to explore the technical details and insights from Professor Keller's due diligence, you can request access to the full report. Simply contact us at info@flow-computing.com, and we'll be happy to send it your way!
