


I will believe it when I see it. Incredible claims require incredible proof. However, if it's true, it would mean a return to the co-processor. @stupidbird, do you remember having a math co-processor back in the day? Also, for a while there was that company PhysX, where you could get a physics accelerator card for your system, but it never really got wide adoption.

Archive: https://archive.today/cSRoc

From the post:

>A Finnish startup called Flow Computing is making one of the wildest claims ever heard in silicon engineering: by adding its proprietary companion chip, any CPU can instantly double its performance, increasing to as much as 100x with software tweaks. If it works, it could help the industry keep up with the insatiable compute demand of AI makers. Flow is a spinout of VTT, a Finland state-backed research organization that's a bit like a national lab. The chip technology it's commercializing, which it has branded the Parallel Processing Unit, is the result of research performed at that lab (though VTT is an investor, the IP is owned by Flow). The claim, Flow is first to admit, is laughable on its face. You can't just magically squeeze extra performance out of CPUs across architectures and code bases. If so, Intel or AMD or whoever would have done it years ago. But Flow has been working on something that has been theoretically possible -- it's just that no one has been able to pull it off.



[–] 2 pts (edited)

Yes, I remember co-processors. They went away when the math unit was put in the CPU itself.

>turning the CPU from a one-lane street into a multi-lane highway.

We call(ed) that Bit-Slice processing.
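
For what it's worth, the "multi-lane highway" we already have is multiple cores, and using it already takes exactly the kind of software changes that Flow's "up to 100x with software tweaks" quietly depends on. A minimal pthreads sketch (my own illustration, nothing to do with bit-slice hardware or Flow's PPU; the limit and the naive primality test are arbitrary choices):

```c
/* Split a compute-bound job across 4 threads; compile with: cc -O2 -pthread lanes.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define LIMIT    2000000L
#define NTHREADS 4

static int is_prime(long n) {            /* deliberately naive trial division */
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

struct chunk { long lo, hi, count; };

static void *count_primes(void *arg) {
    struct chunk *c = arg;
    c->count = 0;
    for (long n = c->lo; n < c->hi; n++)
        c->count += is_prime(n);
    return NULL;
}

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    /* One lane: a single thread works through the whole range. */
    struct chunk all = { 2, LIMIT, 0 };
    double t = now();
    count_primes(&all);
    printf("1 thread : %ld primes, %.3f s\n", all.count, now() - t);

    /* Four lanes: the same work split across four threads/cores. */
    pthread_t th[NTHREADS];
    struct chunk c[NTHREADS];
    t = now();
    for (int i = 0; i < NTHREADS; i++) {
        c[i].lo = 2 + (long)i * (LIMIT - 2) / NTHREADS;
        c[i].hi = 2 + (long)(i + 1) * (LIMIT - 2) / NTHREADS;
        pthread_create(&th[i], NULL, count_primes, &c[i]);
    }
    long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(th[i], NULL);
        total += c[i].count;
    }
    printf("%d threads: %ld primes, %.3f s\n", NTHREADS, total, now() - t);
    return 0;
}
```

The static split is uneven (bigger numbers cost more trial divisions), so the speedup lands somewhat below the thread count, but the point stands: the extra lanes only help once the code is written to use them.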

>but Flow's Parallel Processing Unit (PPU), as they call it, essentially performs nanosecond-scale traffic management on-die to move tasks into and out of the processor faster than has previously been possible.

So, I'm going to present the processor with more work, even though by their own admission the CPU can only do one thing at a time?

I don't understand what they're going on about here. You can shove as much as you want at the processor, but it's only going to handle as many instructions as it can handle. It doesn't matter if you flush buffers and fill the cache ahead of time; the CPU already has hardware to do that, and out-of-order execution and pipelining already do much of what they claim is going on in their little chip.
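
To put a number on that: a single core's throughput is usually capped by data dependencies, not by how much work is queued up in front of it. A minimal sketch (mine, not Flow's technology; compile with cc -O2 so the floating-point adds keep their order):

```c
/* One long dependency chain vs. four independent chains over the same data. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1UL << 23)   /* 8M doubles, 64 MB; an arbitrary size */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double *x = malloc(N * sizeof *x);
    if (!x) return 1;
    for (size_t i = 0; i < N; i++)
        x[i] = 1.0;

    /* One accumulator: every add has to wait for the previous add's result. */
    double t = now(), s = 0.0;
    for (size_t i = 0; i < N; i++)
        s += x[i];
    printf("1 dependency chain : sum=%.0f  %.3f s\n", s, now() - t);

    /* Four accumulators: independent adds the out-of-order core can overlap. */
    t = now();
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i < N; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    printf("4 dependency chains: sum=%.0f  %.3f s\n", s0 + s1 + s2 + s3, now() - t);

    free(x);
    return 0;
}
```

On a typical out-of-order core the four-accumulator loop runs severalfold faster, because the hardware can overlap independent adds but cannot shorten a serial chain, no matter how cleverly the work is fed to it.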

100x seems like a pipe dream, much like Transmeta's Crusoe processors. Until I see this in place, I'm going to say "This sounds like a money grab."

[–] 1 pt

I think the general idea is that they're trying to move some of the scheduling work into silicon rather than software... I thought a lot of that had already been done, though?
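
If that is the idea, the gap they'd be chasing is dispatch overhead: even the cheapest software handoff between two cores costs on the order of 100 ns, while the sort of task you'd offload at that granularity takes a few ns. A minimal sketch of that overhead (my own illustration, not Flow's design; tiny_task and the spin flag are made up for it; compile with cc -O2 -pthread):

```c
/* Cost of calling a trivial task directly vs. handing it to another thread. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static atomic_int turn = 0;      /* 0: main's turn, 1: worker's turn */
static atomic_long sink = 0;

static void tiny_task(long i) {
    atomic_fetch_add_explicit(&sink, i & 1, memory_order_relaxed);
}

static void *worker(void *arg) {
    (void)arg;
    for (long i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&turn, memory_order_acquire) != 1)
            ;                    /* spin until main hands over a "task" */
        tiny_task(i);
        atomic_store_explicit(&turn, 0, memory_order_release);
    }
    return NULL;
}

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    /* Baseline: call the tiny task directly, no handoff at all. */
    double t = now();
    for (long i = 0; i < ROUNDS; i++)
        tiny_task(i);
    printf("direct call   : %6.1f ns/task\n", (now() - t) * 1e9 / ROUNDS);

    /* Hand every task to another core and wait for it to come back. */
    pthread_t th;
    pthread_create(&th, NULL, worker, NULL);
    t = now();
    for (long i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&turn, 1, memory_order_release);    /* dispatch */
        while (atomic_load_explicit(&turn, memory_order_acquire) != 0)
            ;                                                     /* wait for completion */
    }
    printf("thread handoff: %6.1f ns/task\n", (now() - t) * 1e9 / ROUNDS);
    pthread_join(th, NULL);
    return 0;
}
```

Exact numbers vary by machine, but the ratio is the story: if a hardware scheduler really could shrink that handoff to nanoseconds, fine-grained parallelism that isn't worth the trouble today would start to be, which is presumably the pitch.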

I do agree that their claims sound sort of insane, but that's why it's an "I'll believe it when I see it." They probably won't have a viable public product/sample for 10 years (if they make it that long).

[–] 3 pts

I'm also wondering what marvelous new security issues this will create.

[–] 2 pts

Good point. Probably a great side-channel attack that works from user space / unprivileged code.
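
For anyone who hasn't poked at one: the whole user-space primitive is just timing. A cached access takes tens of cycles, a flushed one takes hundreds, and that difference leaks which memory was recently touched. A minimal x86-only sketch of the classic flush-and-time measurement against its own buffer (an illustration, not an exploit, and nothing specific to Flow's chip):

```c
/* Time a read of our own buffer while cached, then again after flushing it. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence (x86 only) */

static uint64_t time_read(volatile uint8_t *p) {
    unsigned int aux;
    _mm_mfence();
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                         /* the memory access being timed */
    uint64_t t1 = __rdtscp(&aux);
    _mm_mfence();
    return t1 - t0;
}

int main(void) {
    static uint8_t buf[4096];

    buf[64] = 1;                      /* touch it: it is now in cache */
    uint64_t hot = time_read(&buf[64]);

    _mm_clflush(&buf[64]);            /* evict it from every cache level */
    _mm_mfence();
    uint64_t cold = time_read(&buf[64]);

    printf("cached read : %llu cycles\n", (unsigned long long)hot);
    printf("flushed read: %llu cycles\n", (unsigned long long)cold);
    /* The gap is the channel: timing alone reveals what was touched recently,
     * and measuring it needs no privileges at all. */
    return 0;
}
```

Every new piece of shared microarchitectural state, a fancy scheduler included, is another thing whose timing footprint has to be checked against exactly this kind of measurement.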

[–] 3 pts

Yeah, scheduling has been handled by the CPU since forever. Letting the silicon choose which instructions to execute next for speed is called out-of-order execution.

The CPU in today's machine may still have the 8088 instruction set in there, but it's as close to a CPU of 1980 as a Model T is to a modern supercar.

[–] 1 pt

Obviously, you don't trust the science!

[–] 2 pts

I do, but science must have some proof and be open to challenge. If you don't go "Oh boy, let's see how this gets torn apart" with the anticipation that it may lead to greater discoveries, you aren't sciencing correctly.

[–] 3 pts

Proof? That concept left the station in 2019. I don't think we can catch it now.