It sounds like you're still thinking in terms of getting a 100% ironclad correct final answer. My apologies if I misunderstand.
The point here is that many applications don't need that ironclad guarantee. It is in their nature to be tolerant of inaccuracies.
If I'm running a simulation where I'm only sure of my input data to +/- 1%, does it matter whether the computer is 99.99% accurate or 100%? My input data is the weak link, so computing inaccurate data to perfect precision buys me almost nothing.
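To put rough numbers on that (the kernel and noise levels below are made up, just to show which error source dominates the budget):

    import random

    def simulate(x):
        # Stand-in for some simulation kernel; the specific function
        # doesn't matter, only how errors propagate through it.
        return x ** 2 + 3 * x

    random.seed(0)
    x_true = 10.0
    runs = 10_000

    # Case 1: input known only to +/- 1%, perfect hardware.
    input_noise = [simulate(x_true * random.uniform(0.99, 1.01)) for _ in range(runs)]
    # Case 2: perfect input, hardware off by up to +/- 0.01%.
    hw_noise = [simulate(x_true) * random.uniform(0.9999, 1.0001) for _ in range(runs)]

    def spread(xs):
        return (max(xs) - min(xs)) / simulate(x_true)

    print(f"output spread from 1% input noise:     {spread(input_noise):.3%}")
    print(f"output spread from 0.01% compute noise: {spread(hw_noise):.3%}")

The input uncertainty swamps the compute error by a couple of orders of magnitude, so shaving the last 0.01% off the hardware accomplishes nothing.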
In robotics, if I'm trying to crank out solutions to kinematics equations as fast as possible, then I can either have 100% accurate solutions (well, within the limits of my math) every N milliseconds or I can have 99.99% accurate solutions every N microseconds. Unless I'm doing eye surgery with a robot arm the length of a football field, I can probably live with that modest precision loss and will be thrilled to have a better reaction time. Or be able to handle more complex kinematics problems.
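Here's a toy version of that trade, with everything (arm, target, step size) invented for illustration: an iterative inverse-kinematics solver on a 2-link planar arm, where the iteration cap is the speed/accuracy knob.

    import math

    L1, L2 = 1.0, 1.0  # link lengths of a toy 2-link planar arm

    def forward(t1, t2):
        # End-effector position for joint angles t1, t2.
        return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
                L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

    def ik(target, iters, step=0.1, eps=1e-6):
        # Gradient descent on squared position error; 'iters' is the
        # knob: fewer iterations = faster but less accurate answer.
        tx, ty = target
        t1, t2 = 0.3, 0.3

        def err(a, b):
            x, y = forward(a, b)
            return (x - tx) ** 2 + (y - ty) ** 2

        for _ in range(iters):
            # Numeric gradient of the error surface.
            g1 = (err(t1 + eps, t2) - err(t1 - eps, t2)) / (2 * eps)
            g2 = (err(t1, t2 + eps) - err(t1, t2 - eps)) / (2 * eps)
            t1 -= step * g1
            t2 -= step * g2
        return math.sqrt(err(t1, t2))

    for iters in (5, 50, 500):
        print(f"{iters:4d} iterations -> position error {ik((1.2, 0.8), iters):.2e}")

The cheap answer is off by a small amount; the expensive one is essentially exact. Each application picks its own point on that curve.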
Where the ironclad guarantee is needed, a traditional architecture would be used.
I wonder if these new architectures would allow every calculation to be run three, five, or seven times, with the machine using the most common answer. Or the average. Whatever. Selectable precision. It would be a bit like ray tracing: keep casting rays until you're happy with the image quality.
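Something like this in spirit, as a pure-software sketch; the flaky_add primitive and its 5% error rate are invented stand-ins for whatever the hardware actually does:

    import random
    from collections import Counter

    def flaky_add(a, b, flake=0.05):
        # Stand-in for a fast-but-occasionally-wrong hardware op:
        # 5% of the time the result is off by one.
        r = a + b
        return r + random.choice((-1, 1)) if random.random() < flake else r

    def voted(op, *args, runs=5):
        # Run the op several times, keep the most common answer;
        # 'runs' is the selectable-precision knob (3, 5, 7, ...).
        tally = Counter(op(*args) for _ in range(runs))
        return tally.most_common(1)[0][0]

    random.seed(1)
    print(voted(flaky_add, 2, 2, runs=3))
    print(voted(flaky_add, 2, 2, runs=7))  # wrong only if a majority of
                                           # runs agree on the same bad value

More runs cost more time but shrink the odds of a wrong majority, which is the same knob as "keep casting rays until the image looks good."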
Now I'm reminded of unums...