Unum: the next step in the evolution of computer math

If you’re a software engineer, this is something you should definitely read. It could be a game changer for performing calculations on computers, and it’s already attracting a lot of interest.

John Gustafson, one of the foremost experts in scientific computing, has proposed a new number format that provides more accurate answers than standard floats, yet saves space and energy. The new format might well revolutionize the way we do numerical calculations.

Perhaps most topical to Battlescape:

Those [64-bit] unums can represent numbers ranging over 600 orders of magnitude with ten decimals of accuracy

Does this mean that Battlescape can use unums to produce more miraculous results? No. It means that if this number format is adopted by CPU and GPU manufacturers, we may see significant improvements in calculation-heavy tasks like simulations and procedural generation. Think Pascal GPUs are amazing? Unum GPUs could well make them look like abacuses.

Here’s a large PDF slide deck that Gustafson put together:


A more compact version that doesn’t require a full download:

And an hour-long YouTube interview with Gustafson which I found very informative. It’s essentially a narration of part of the slide deck.

The reason Gustafson delved into this format is that he’s involved in supercomputing (hence “one of the foremost experts”). He’s interested in making exascale computing a reality.

Kilo - thousand
Mega - million
Giga - billion
Tera - thousand billion
Peta - million billion
Exa - billion billion

The world’s fastest supercomputer runs at 33 petaFLOPS but draws 17.6 MW of power. Clearly we can’t simply scale that up to exascale, because the roughly 30x scaling would draw over 500 MW of power. More efficient hardware is needed. Gustafson is giving us something that’s dramatically more efficient - and more accurate. It’s his best offering on how we can get beyond the current floating-point format.
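The scaling arithmetic is easy to check with a naive back-of-the-envelope estimate (it deliberately ignores any efficiency gains from newer hardware):

```python
# Naive scaling of today's fastest machine (33 petaFLOPS at 17.6 MW)
# up to one exaFLOPS, assuming power grows linearly with throughput.
current_flops = 33e15      # 33 petaFLOPS
current_power_w = 17.6e6   # 17.6 MW
target_flops = 1e18        # 1 exaFLOPS

scale = target_flops / current_flops       # ~30x more throughput needed
naive_power_w = current_power_w * scale    # ~533 MW at the same efficiency

print(f"scale factor: {scale:.1f}x")
print(f"naive exascale power: {naive_power_w / 1e6:.0f} MW")
```

That half-gigawatt figure is why exascale is treated as an efficiency problem, not just a throughput problem.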


Fixed that for you… Jokes aside, I’m skeptical about the variable size mentioned in the interview. It smells like it could be a major pain to work out how many bits are in the utag. It may always be possible to determine one (and only one) configuration that fits the number of bits, but my guess is that these unums will be hard to implement at the hardware level (due to their variable size) and might have unforeseen problems that lead to slower-than-anticipated calculations.

Moore’s law says capacity will keep increasing, but people will remain the same as they’ve always been. Simply increasing the number of cores in a processor results in high heat output.

Pretty cool, what a shame we have to wait for hardware peeps to really take advantage of this. Gonna take forever.

As consumer hardware, probably. However, I can easily see it attaining rapid adoption for specialized fields such as supercomputing (its inspiration) and artificial intelligence. In those fields, they may simply not be able to move forward without something like unums. Beyond that, if the people assembling software implementations discover that unums have the sort of wonderful characteristics that Gustafson asserts, then they’re going to push the hardware guys. That may provide faster inroads to the consumer hardware.

And then there’s always the Chinese, who might just build a new ecosystem and take advantage of unums instead of IEEE floats.

Well, the only hardware I care about is GPUs and CPUs, and it’s going to take a while before support is added and that hardware percolates through to consumers - particularly for CPUs.

Sure. I figured that GPUs might actually adopt the technology more quickly because of the supercomputing angle. If some design house built an unum-based GPU for a supercomputing project, that might be the path to more widespread adoption. Perhaps via an FPGA implementation.

As links between CPUs and GPUs become stronger, and more and more of our software functionality becomes based on number crunching (e.g. machine learning), it may be that CPUs need never change. They’ll just be the overall task schedulers.

I doubt they will ever adopt unums into a hardware design at all. It is ‘a tad’ hard to hardwire a datatype that is variable in size by design. One way I can see it happening is by giving them a set size, like 32-bit or 64-bit unums. That, however, defeats one of the purposes of unums (keeping them as small as possible), which also means wasted energy - another thing unums are supposed to prevent. Another way would be to fragment the unum, but that would probably defeat any purpose the unum had in the first place and might make it even slower than IEEE double floats. Not to mention the problem of determining how big the unum itself is: since the size of the utag is also variable, you can never be sure where exactly the unum ends.

To be fair, though, I haven’t dug through all the stuff you linked, and I can’t shake the feeling that I’d need to read the book first to fully understand unums.
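For concreteness, here’s a rough Python sketch of the variable-size layout I mean, reading the utag from the low end. The field order, the `decode_unum` name, and the {2, 3} example environment are my own guesses from the slides, not a faithful implementation:

```python
# Illustrative decoder for a unum-1.0-style bit layout (a sketch, not a
# real implementation). In an {ess, fss} "environment", the utag at the
# low end holds: ubit (1 bit) | esize-1 (ess bits) | fsize-1 (fss bits).
# Reading from the right, the utag tells you how wide the exponent and
# fraction fields above it are.

def decode_unum(bits: int, ess: int, fss: int):
    fsize = (bits & ((1 << fss) - 1)) + 1           # fraction width
    esize = ((bits >> fss) & ((1 << ess) - 1)) + 1  # exponent width
    ubit  = (bits >> (fss + ess)) & 1               # 1 = inexact (open interval)
    frac  = (bits >> (fss + ess + 1)) & ((1 << fsize) - 1)
    expo  = (bits >> (fss + ess + 1 + fsize)) & ((1 << esize) - 1)
    sign  = (bits >> (fss + ess + 1 + fsize + esize)) & 1
    total = 1 + esize + fsize + 1 + ess + fss       # bits actually used
    return {"sign": sign, "exponent": expo, "fraction": frac,
            "ubit": ubit, "esize": esize, "fsize": fsize, "total_bits": total}

# Example in a {2, 3} environment: sign 0, 4-bit exponent 0001,
# 3-bit fraction 101, exact (ubit 0), utag fields 11 and 010.
print(decode_unum(0b0_0001_101_0_11_010, ess=2, fss=3))
```

Note that decoding only works because the utag sits at a known position at the low end - which is exactly why finding the boundaries between packed unums of different sizes is the hard part.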

I don’t see why having fixed-size unums would be a problem. It just means that, as far as size savings are concerned, you as a programmer can choose a 16-bit unum where you would otherwise be required to use a 32-bit float. I see massive gains from this in real-time computer graphics.
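As a rough illustration of that size-savings argument, using IEEE half floats as a stand-in for a hypothetical 16-bit unum (Python’s struct module supports half precision via the 'e' format code):

```python
import struct

# Pack the same value at three IEEE widths to compare storage cost and
# round-trip error - a stand-in for choosing a smaller number format
# where full 32-bit precision isn't needed.
value = 3.14159265

widths = [("half", "<e"),    # 2 bytes, ~3 decimal digits
          ("single", "<f"),  # 4 bytes, ~7 decimal digits
          ("double", "<d")]  # 8 bytes, ~16 decimal digits

for name, fmt in widths:
    raw = struct.pack(fmt, value)
    (back,) = struct.unpack(fmt, raw)
    print(f"{name}: {len(raw)} bytes, round-trip error {abs(back - value):.2e}")
```

Halving the bytes per value also halves memory bandwidth, which is usually the real bottleneck in graphics workloads.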

If I understand the video correctly, unums are a layer on top of the hardware, much like OpenGL or Vulkan on top of a GPU. Changes are needed to do it efficiently, but current hardware can already handle unums with software help.
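To illustrate the software-help idea: an inexact (ubit-flagged) unum is essentially an interval, and tracking intervals is easy to do in software today. A minimal, deliberately naive sketch - real implementations also need open endpoints and outward rounding:

```python
# Minimal closed-interval arithmetic in software - the kind of rigor
# unum "ubounds" provide. Illustrative only: a production library would
# round lower bounds down and upper bounds up to stay mathematically safe.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of two ranges: add the endpoints pairwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product range: take min/max over all endpoint combinations,
        # since signs can flip which corner is extreme.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

a = Interval(1.0, 1.5)   # "some value between 1.0 and 1.5"
b = Interval(-2.0, 0.5)
print(a + b)   # Interval(lo=-1.0, hi=2.0)
print(a * b)   # Interval(lo=-3.0, hi=0.75)
```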

I read quite a bit on the topic. In the PowerPoint slides there was a mention of unums 2.0 … something about fixed length. How does that compare to unum 1.0, and why a second version?

He’s after a new number format right down to the bare metal.

This post

led to this post

which states

John Gustafson just presented an updated version of his unum proposal at Multicore 2016 (see also the previous post on unums). It’s very different from before:

For those of you who think you have already seen unums, this is a different approach. Every one of the slides here is completely new and has not been presented before the Multicore 2016 conference.

The original unums had sign, exponent, and fraction fields just like IEEE floats and obeying most of the same rules; those unums had three metadata fields, the “utag”, that described the exact-inexact state, the exponent size, and the fraction size, and they therefore had variable length. This new approach makes a complete break from IEEE float compatibility and redesigns the way we represent the infinite space of real numbers on a computer.

They are just as mathematically rigorous as before, but they clear up the remaining clunkiness of the IEEE format. They are so terse and so fast that you can think about solving very difficult equations by trying the entire real number line, overlooking nothing.

I don’t know how practical it all is, but as applied mathematical research it looks very interesting to me at least.

You can find all the slides here:

PDF: A Radical Approach to Computation with Real Numbers
PPTX: A Radical Approach to Computation with Real Numbers

The PowerPoint has speaker notes, which is where I took the above citations from.

So he’s continuing to examine the problem, gaining new insights into representing and operating on numeric values in a binary environment.

I have not looked at the new PDF yet.

Edit: Discussion of unum II begins on slide 21 of the PDF. I went through it and the only thing that I picked up on was that he says unum II is fixed size. I guess people were jumping on him about the variable size aspect - which was one of the most interesting aspects of it for me.

He seems particularly enthusiastic about small size unums and using tables to make them insanely fast. Given the rigor of the unum system, I think he’s really trying to convince people to focus on small unums to solve real problems. That, instead of just throwing more precision at IEEE floats in an effort to get them to behave more-or-less rigorously. I’m not sure that’s of particular use to INS for the scales and precisions involved, but simply having the rigor might well be worth it. Designing a special number system might even have some remarkable benefits.

It makes me wonder what people would do with purpose-built unums. Or algorithmically-determined unums.
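As a toy illustration of the table-lookup idea: divide the real line into a small fixed lattice, represent any set of reals as a bitmask (a “SORN”), and do arithmetic with precomputed tables. The lattice and names below are invented for the example and are far smaller than anything in the slides:

```python
# Toy sketch of the unum-2.0 approach: a tiny fixed lattice over the
# real line, set-valued answers ("SORNs") stored as bitmasks, and
# operations done purely by table lookup.

# 8 lattice elements: exact points and the open intervals between them.
LATTICE = ["(-inf,-1)", "-1", "(-1,0)", "0", "(0,1)", "1", "(1,inf)", "inf"]

def sorn(*indices):
    """Build a SORN bitmask containing the given lattice elements."""
    mask = 0
    for i in indices:
        mask |= 1 << i
    return mask

# Precomputed negation table: NEG[i] is the index of element i negated.
# e.g. (0,1) <-> (-1,0); infinity is a single unsigned point here.
NEG = [6, 5, 4, 3, 2, 1, 0, 7]

def sorn_neg(mask):
    """Negate a whole set of reals with nothing but table lookups."""
    out = 0
    for i in range(len(LATTICE)):
        if mask & (1 << i):
            out |= 1 << NEG[i]
    return out

s = sorn(4, 5)            # the set (0,1) union {1}
print(bin(sorn_neg(s)))   # 0b110: the set (-1,0) union {-1}
```

With a small enough lattice, every binary operation fits in a lookup table - no carry chains, no rounding logic - which is presumably where the “insanely fast” claim comes from.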


Yeah, it wouldn’t be a problem, but from what I gathered, one of the key things Gustafson praised about unums was that he could do energy-efficient calculations thanks to the variable size. Seeing how there is this unum 2.0 thing with a fixed size that I completely missed, that kind of makes my previous post obsolete. As JB mentioned, it seems that Gustafson is continuing to fine-tune his unum…

In large-scale scientific computing and mobile, power consumption is a major concern; for our (current) purposes, however, it’s largely immaterial. Nobody buys a high-end consumer GPU worried about its impact on their power bill.

Sounds like you’d need new hardware AND software ( especially compilers ) too.

Which, for fields that are innovating rapidly, is perfectly reasonable: supercomputing, artificial intelligence, robotics, virtual reality, the internet of things, etc. Those fields are still exploring the possibilities, so moving to a new architecture is entirely possible. Certainly Gustafson was inspired by the pursuit of exascale computing to come up with a replacement for IEEE floats.