When I was a freshman (or so) in high school, our computer lab had just graduated from a time-share terminal (into the next-door university) to the Apple II. A kid (Ray Tobey) in the next grade up started coding a project to submit to a Byte Magazine game contest. The due date came and went, but he carried on in every moment of his free time. Long story short, this game became Skyfox, of which Woz said "consider this flight simulator as the finest Apple game ever done." From Ray, I learned the value of approximating trig functions with continued fractions using only integer math. Later, this became useful when I had to implement image rotation in a scan generator for a scanning electron microscope.
01HNNWZ0MV43FF 27 days ago [-]
Oh cool! I think I played that on my family's Apple II. I think it was mislabeled as "Star Fox" and probably pirated. Sorry Ray...
BearOso 27 days ago [-]
Multiplying without having a larger intermediate is much more complex than the article states. You have to use the distributive property of multiplication and split out the whole and fractional parts of each number; otherwise you're stuck with single-digit whole numbers or only multiplying fractions.
So you'd take
A.a * B.b and split it into A*B + A*b + a*B + a*b
Or
out = ((A >> fixedbits) * (B >> fixedbits) << fixedbits)
+ ((A >> fixedbits) * b)
+ ((B >> fixedbits) * a)
+ ((a * b) >> fixedbits);
If you can get away with a little less precision and smaller whole numbers, you can avoid some of the multiplications by pre-shifting each operand by half the fraction bits, which is quite common:
It's such a shame that multiplication in C (or most other languages, really) doesn't have its natural type (intM, intN) -> int{M+N}. Instead, you have to recover the higher half of the result either by doing additional narrow multiplication yourself, or by using some compiler intrinsic.
jnwatson 27 days ago [-]
GCC can frequently tell what you're trying to accomplish and emit the correct instructions. Still, I agree it would be ideal if this were explicit.
wk_end 27 days ago [-]
I always felt when learning about this stuff that people - pedagogically - make fixed point seem more complicated than it is.
Since this article is talking about more precisely positioning sprites in a 2D world, it could practically be a one-liner: "instead of tracking positions/velocities in pixels, track them in half pixels". Everything falls out of that intuition.
pistoleer 27 days ago [-]
I wonder how many people have reinvented the concept of fixed point when they calculated using "cents" instead of "dollars".
city41 27 days ago [-]
I'm the author of the blog post. I just used sprite positioning as a simple example. Things like collision detection and physics can't be done with half pixels.
wk_end 27 days ago [-]
Not sure what you mean - sure you can.
Trying to read between the lines here, if your objection is to half-pixels because they’re not precise enough for (good) physics, then I apologize for being unclear - I mean half-pixels, or quarter-pixels, or eighth-pixels, or whatever.
Another way of wording my comment is that I think it’s easier - especially for beginners - to think in terms of smaller units (represented as integers) than in terms of a new number format for representing fixed-size fractional parts of larger units. But the two concepts are ultimately the same.
city41 27 days ago [-]
But that's basically what fixed point is, no? Half pixels is fixed point with a single bit for decimals. Quarter pixels is two bits, and so on. I think the disadvantage is you now have to think in a strange unit that isn't intuitive. For my game I tend to think in screen sizes for things. Thinking in screen size*factor would be harder I think. Fixed point is basically just doing that for me and hiding the details really.
To be fair, rereading the post I realize I did make it sound like you would only need this for positioning sprites. I'll see about rewording it.
Or maybe we're both talking about the same thing and you're taking a different approach. That is fair too.
wk_end 27 days ago [-]
Yeah, we're technically talking about the same thing - just a different way of thinking about it.
When I was learning retro game dev (mostly Game Boy), I found fixed point very intimidating. Reading stuff like "the player will move at 1.5 pixels per frame, and to store the decimal point we'll use this special format where certain bits represent the fractional part and certain bits represent the integer part" scared the heck out of me when I was still, like, coming to grips with binary representations at all.
Whereas "the player will move at 3 half-pixels per frame" is just a really straightforward conceptualization. The data representation is the same, the code to convert from half-pixels to pixels is the same, but one way of understanding it feels very technical and abstract.
Especially when working in assembly language (like I was), where you don't really have any kind of typing mechanism, it never really made sense to build a fixed-point data type abstraction.
And, to be clear, I'm not trying to give you, specifically, any guff for this; it's as fine an article on fixed point as there is.
01HNNWZ0MV43FF 27 days ago [-]
They can't?
fidotron 27 days ago [-]
It's worth saying the original Playstation was entirely fixed point. You can go surprisingly far with it.
I spent so much of the early stage of my career doing early mobile work that I practically still think in fixed point, and I always have to adjust to floats. For example, fixed-point results can be compared exactly, while with floats that is not a great idea. TeX uses fixed point throughout because it was reproducible across machines in an era when floating point was not.
hansvm 27 days ago [-]
> fixed point results can be compared exactly
You gain the ability to get stable results across machines, but there still necessarily exists a loss of precision, and different implementations of the same algorithm will get different results.
When would you want to compare fixed-point results bitwise though?
__s 27 days ago [-]
I think fixed point would be used a lot more with proper support in programming languages
fidotron 27 days ago [-]
What you would need is for the language to track the expected range of the numbers. You often end up with multiple different multiply/divide implementations (shifting amounts before/after) based on whether you can safely guarantee you are within an expected range or not.
SideQuark 27 days ago [-]
I doubt it. It fails in far too many useful programming situations; it would cause more problems than floating point.
You cannot use it efficiently for nearly anything: finance software, science software, engineering software, high-quality graphics software, 3D software... pretty much anything that needs real dynamic range or the ability to reduce error while accumulating values.
This is exactly why floating point was invented and standardized. Fixed point is a failure for most any program, and works, with much effort, only in certain situations.
(I've written tons of fixed point code, numerical libraries across the spectrum from high performance, high quality, tunable quality, arbitrary precision libs, posits and unums, IEEE half-float software implementations, and more, so I do know what I'm talking about).
thequux 27 days ago [-]
You might be surprised at where fixed point is already used:
* Finance software: if you're using floats, you're doing something horribly wrong. All balances are measured in integer multiples of some quantum; depending on the system, that may be cents, it may be 0.1 cents, or it may be 0.01 cents. Gradations finer than that simply do not exist, and integer overflow is both more likely to be noticed and more easily explained as a bug than precision loss (and this matters when you have a regulator asking uncomfortable questions)
* CAD software: floating point may make sense for simulations, but at least for PCB design, layout is done in fixed point. You need consistent precision across the board, and avoiding edge cases in your geometry kernel from precision issues makes everything much easier. Besides, with a 1µm quantum, 32-bit numbers are sufficient for a board 4km on a side. If you need larger, I would love to see your fab.
* Robotics: maybe this one's just me, but expressing motion control algorithms in fixed point has saved me €1/part on more than a few occasions as a result of being able to use an MCU without hardware floating point. Compared to the €0.20/part saved by muntzing the rest of the circuit, the small amount of additional work was totally worth it.
Indeed, the tools for working with fixed point aren't great. C is a lost cause; the best you can do is name your variables things like velocity_12_4 and manually check that the precision lines up. Rust wasn't great when I tried, though const generics may have resulted in an improvement. C++ was, astonishingly enough, quite good; I made a header-only fixedpoint.h with a templated struct `fixed<type,size,precision>` and all inline operations. I get the impression that Ada would be even better, but I've yet to use it in anger.
edflsafoiewq 27 days ago [-]
The most common place is probably image/audio processing. Sample values are almost always quantized, even in intermediate stages. Decoding a JPEG, for example, is a bunch of fixed-point math.
SideQuark 26 days ago [-]
Professional tools even for audio use floating point engines (Ableton has done this for well over a decade [1]). While fixed point may be ok for input/output formats, it is certainly not suitable for complex audio processing needs, where errors accumulate far too fast for any fixed point system at equivalent bit depth.
And fixed point is an order of magnitude slower.
> Decoding a JPEG for example is a bunch of fixed point math
It can be done that way, but modern decoders use float for accuracy and speed when available. Search libjpeg-turbo [2] codebase for float and surprise yourself. Even the ancient libjpeg [3] uses float when possible.
Why not simply use fixed point? Because it's worse in nearly every way, and an artifact of computing from the 1970s-1980s when the pieces of JPEG were designed (the DCT, plus lots of other work culminating in the final JPEG spec in 1992). When JPEG was designed, computers generally didn't have floating-point hardware. The world of computing has changed a bit since then.
> You might be surprised at where fixed point is already used
Not really, I have worked on all those kinds of things professionally, in fixed point, floating point, and arbitrary precision where appropriate. You apparently have not.
> Finance software: if you're using floats, you're doing something horribly wrong. All balances are measured in integer multiples of some quantum; depending on the system, that may be cents, it may be 0.1 cents, or it may be 0.01 cents. Gradations finer than that simply do not exist,
I take it you have never actually worked in finance; I have. This is so ludicrously wrong that this is clear.
Your claim might work for a free app that only handles your checkbook. Actual finance software has to do things like deal with multiple currencies, high-precision compounded calculations, and exchange rates that go to high precision and can handle trillions (4T+ a day goes through exchange markets), and on and on. To show you how ludicrous your claim is, let's do a simple experiment. I'll take a 64-bit double, you take 64-bit fixed-point math, and let's compute compound interest (a tiny, tiny, trivial computation) tables. We'll make each input trivially simple (real code would handle vastly more cases). Here's your spec:
P = beginning principal in pennies, say loans from 0 to $100M (which is small, but I'll try to help your case, which will still fail...)
R = rate to 0.001, say mult by 1000 so it's an integer for you.
C = times compounded per year, 1-365 (again trivial, but this helps your cause)
Y = # of years to compound, 1-100 (again smaller than real code must handle, but it helps your case).
Your goal is to compute the amount the principal has grown to each compounding period, and at the end, return the amount rounded to the nearest penny (bankers rounding, or round up, or round down, whatever you can muster).
So let's see your function of form `long scaleFV(long P, long R, long C, long Y)` using fixed point.
A trivially simple version with doubles would look like
long FV2(long P, long R, long C, long Y)  // needs <math.h> for round()
{
    long N = C * Y;  // total number of compounding periods
    double p = P;
    double r = R / 1000.0;
    while (N-- > 0)
    {
        p += p * r / (100.0 * C);  // most trivial thing one can do
    }
    return (long)round(p);
}
Compare both of these to the truth using big float or similar. Trying to do the above, for any trivial fixed point version, will fail over 80% of the time (uniformly sampled inputs from the range above). You likely will never find a case where the fixed point succeeds and the trivial double version fails.
Try it. Show your code that uses the claim you so confidently (and incorrectly) claim above.
And remember this is a trivial, tiny part of the real needs for finance codebases. Once you realize how badly you fail at this simple task where the trivial floating point version works, then please stop claiming things you know nothing about.
> and this matters when you have a regulator asking uncomfortable questions
Oh, I take it you have had this happen? Or did you make this up based on zero experience?
> but at least for PCB design, layout is done in fixed point
In a few systems, like KiCAD, it is, but it's certainly not for high-end professional systems. Try laying out an ASIC with fixed point, where you might have features from the sub-nm level up to several cm, and you need to track error bounds. And once you need to simulate just about anything (SPICE, etc.), all your fixed point fails.
(Note I use KiCAD a lot for PCB design myself for gadgets I sell; it's good for a few things, but not anywhere as good as a high end PCB design and analysis tool you'd want for high freq stuff and getting past FCC requirements for emissions, which I have also done).
> and avoiding edge cases in your geometry kernel from precision issues makes everything much easier
Yes, easier but with larger error. There's a reason pro CAD kernels use floating point, not fixed point. Rotating things in fixed point by anything other than 90-degree increments has errors larger by orders of magnitude, and soon your squares are not squares, and your intersections end up with all sorts of bad behavior (see some neat discussion of this in Matt Pharr's Oscar-winning book PBRT).
And throughout all this, fixed-point is an order of magnitude slower when hardware float is available, such as places KiCAD runs.
> Robotics: maybe this one's just me, but expressing motion control algorithms in fixed point has saved me €1/part on more than a few occasions as a result of being able to use an MCU without hardware floating point. Compared to the €0.20/part saved by muntzing the rest of the circuit, the small amount of additional work was totally worth it.
It is just you. Yes, at some end-point controller you might use fixed point, but in the robotics work I've done I often also need things like inverse kinematics for motion planning (I'd hate to solve Jacobian inverse stuff in fixed point!), have done probabilistic SLAM flavors (which would also diverge badly in fixed point), used all flavors of Kalman and other filtering for sensor fusion, used KF and deep learning to get better estimates from 9DOF mag/gyro/accel sensors, and on and on.
So yes, it may be just you; I don't know anyone working professionally on robotics systems who does this (and I know a few dozen, having worked with them a loooong time writing, guess what, numerical code for and with them).
> I made a header-only fixedpoint.h ...
Same, with also parameter for underlying type so I can plug in int32 where available (e.g., ESP32) int16 where it is not (many microchip chips), or even put in yet another custom type like an int64 emulation built on two int32s.... I've got templated versions to do naive mults and divs (which it seems most people do), one that has correct last bit (takes slightly more bit twiddling, but useful if you want a little more precision). And versions to do faster divs for chips that have software emulated divs (because for fixed point, since at the end you're going to rescale the answer, you can make your div much faster than a naive div followed by a shift). So yes, I have been down all this.
Remember, please show your fixed point code that handles the trivial financial code task above, or stop claiming things you apparently have not done and do not understand.
RaftPeople 26 days ago [-]
When you say "finance" I assume you mean Wall Street type stuff?
For accounting and ERP systems, fixed point is pretty much the universal standard since I entered the industry in the 80's.
SideQuark 25 days ago [-]
> When you say "finance" I assume you mean Wall Street type stuff?
Nope, pretty much any common financial software. Excel is most likely the most used software for doing finance of any type - it's floating point. Quicken is the most common personal finance - floating point. I'd guess the majority of personal finance programs do stuff like mortgages and credit card calculations, which will hit the same compound interest failures I listed above if you try to do it in fixed point, so they'd all be floating point.
If ALL you do is addition, then fixed point could work. But it fails at every other basic accounting task, to the point that it would be ludicrous to write any modern or even toy program with it any more. Compound interest, mortgages, taxes, wealth planning: all floating point. The top 20 hits on GitHub for finance all use floating point for the math.
All the top hits for ERP on github - floating point. Top ERP commercial systems [1], list Microsoft Dynamics as #1. Looking at features it's most certainly floating point (it has calculated fields allowing arbitrary probabilities, for example). Second place on list is Syspro. Same thing - looking through their documentation they allow arbitrary spreadsheet like computations, even allowing machine learning stuff to be integrated for computations - this is certainly floating point. Third on list is QT9 ERP. Same result - they have modules called "Finance" that allow arbitrary calculations, which is certainly not done in fixed point. Multiple other modules look like they'd need floating point since fixed point would simply accumulate error too quickly (and be slow).
Alternatively if you have any of these programs, you could pull them apart pretty quickly with Ghidra and you'd likely find all floating point math in them.
While fixed point has uses, it is not used nearly as much any more as people want to believe, and it's certainly a terrible idea for any finance beyond addition of simple numbers, i.e., a checkbook (which is itself getting outdated). If you cannot even do something as simple as compound interest, then I'd hesitate to call it finance software.
In the 1980s, floating-point hardware was uncommon, and IEEE 754 only arrived in 1985. Computing and finance have moved so far beyond the 1980s that I doubt much modern greenfield accounting software is fixed point any more.
The databases use NUMERIC/DECIMAL data types almost exclusively. Once in a while you will find a floating point type but it's pretty rare.
The code they write is not generally available to the public so I'm not sure what they are doing inside their code. Some probably use java's BigDecimal, some probably wrote their own libraries to handle data types (e.g. SAP).
> If ALL you do is addition, then fixed point could work. But it fails at every other basic accounting task that it would be ludicrous to do any modern or even toy programs in it any more.
How about the basic task of storing 0.1 or 0.01, it seems pretty good at that and float (binary float) struggles.
Decimal float works, which is why IBM put DFP units on the Power and Z CPUs.
thequux 25 days ago [-]
Clearly you haven't looked at my CV. I spend about half my day reading this country's or that one's regulations to make sure that the small bit of the world's largest CSD (by at least one measure) that I work on follows the law, and I have the insatiable curiosity to read the rest of them as light bedtime reading. I know whereof I speak regarding financial systems. You don't.
Specifically, accounting law is old. Computers were used to automate what the bookkeepers did by hand, which was emphatically not floating point. At the end of an interest period, interest was computed and booked in rounded form. The next period, interest would be computed based on the number in the books, not the "floating point" number that was rounded from the last time interest was booked. More clearly: put 1c into a bank account earning 1% interest compounded daily. After 1000 years, that account will have precisely 1c in it, because there was never enough interest to earn at least one quantum. This is the difference between math and bookkeeping, and between the sort of math that passes for correct in an ERP system versus the sort of math needed by a bank.
Also, fixed point is not slow, at least not on the systems relevant to actual financial infrastructure. It's all on IBM Z, which has hardware instructions for fixed point decimal calculation because if it didn't, it would run like treacle.
jerf 27 days ago [-]
Is there even any performance benefit on modern CPUs? I tried to consult some real tables but I'm not experienced enough to be sure I'm reading them correctly. If I'm reading something like [1] properly, it looks like it is not a clear win on modern hardware to use fixed point & integer operations. It would depend on the ratio of addition/subtraction/multiplication to division.
(Obviously one must factor out a lot of local considerations, which modern CPUs are full unto overflowing with; I'm kind of looking at a very, very broad average performance question across code bases doing enough different math operations to average out, not whether one particular loop can run theoretically run faster or slower with one or the other.)
It depends a lot on CPU architecture. Floating point units may be tied to each core, or they may be shared, so it may further depend on other concurrent workloads.
There are also SIMD instructions. Modern CPUs have built-in instructions for handling multiple ints or floats as a vector. If you can get your fixed-point values to fit into 8-bit or 16-bit fields instead of 32, then the same-sized vector units can handle more values per instruction.
taeric 27 days ago [-]
My gut would be that you also get some benefit by some auxiliary choices. With a lot of precomputed constants, you can probably avoid a lot of known multiplications. That said, yeah, if you have to start doing a lot of fixed point multiplications, you could eat any savings you had.
ack_complete 27 days ago [-]
Address calculations are a place where fixed point can still have an advantage -- it's often less latency and fewer ops to step a fixed-point accumulator and shift it into an address offset on the integer units than to step a floating point accumulator, convert it to integer, then move that over from the FP/vector units to the integer units.
capitainenemo 27 days ago [-]
Hedgewars uses fixed point due to inconsistencies between floating-point implementations breaking deterministic lockstep. 0 A.D. and Spring RTS had similar issues, although I think both use streflop now.
mmaniac 27 days ago [-]
This sort of thing is ordinary for consoles without floating point numbers. It's easy to do in software with integers, but sometimes hardware acceleration will use it too.
The SNES and GBA both supported affine transformations, where the elements of the multiplication matrix are fixed point numbers. The Playstation's geometry coprocessor (GTE) used fixed point matrices with quite low 16 bit precision. An emulator feature called PGXP is able to perform these calculations at higher precision.
01HNNWZ0MV43FF 27 days ago [-]
And just to head off the discussion that happens every single thread:
- Yes, the PS1 had "jittery" vertices because 16 bits is not enough precision
- No, it was not because of using integers, you can use integers (AKA fixed-point) to do 3D just fine. If it had been 16.16 (32 bits total) it would probably look fine.
- No, this isn't the cause of the texture warping, that's because unlike the N64 it only supported affine texture mapping, not perspective-correct texture mapping. The PS1 saw every 3D triangle as just a 2D triangle, and texture mapping in 2D differs from texture mapping in 3D
- Yes, the texture warping is why many of the best-looking PS1 games were basically built on a grid, or found other ways to use many small triangles instead of a few large ones.
vardump 27 days ago [-]
On the PC side, most developers stopped predominantly using fixed point for high performance code somewhere in the Pentium 1-3 era. For 486 class systems it was still pretty useful.
Other than on retro systems, fixed point is still useful in smaller microcontrollers.
VyseofArcadia 27 days ago [-]
A buddy of mine and I are working on a weekend project together. We recently realized that we don't need all that much precision, just a little, and switching from doubles or floats to 16-bit fixed point in our main data structure actually makes it small enough to fit an instance in a typical cache line (< 64 bytes).
Completely unnecessary for our target platform but deeply satisfying.
vardump 27 days ago [-]
For performance sensitive code memory bandwidth is very often the limiting factor, thus compressing values tends to make a lot of sense. The number of CPU cores is increasing much faster than memory bandwidth.
So not necessarily completely unnecessary.
VyseofArcadia 27 days ago [-]
I expect it will have some effect given that we anticipate having ~1000 instances of that data structure alive at worst case, at least dozens at any given time.
pjmlp 27 days ago [-]
Michael Abrash's books contain information on these kinds of optimizations; their timeframe sits exactly on this transition point.
twoodfin 27 days ago [-]
A few years before that era, I was having a lot of fun with
On the Amiga and Atari, based on the Motorola 68000 like the NeoGeo, all 3D games used fixed-point arithmetic.
At that time, such games were written in assembler, and you had to be very careful to place the instructions for scaling and descaling in the right places, not only to get the final result in the right units (i.e., screen coordinates), but also in intermediate calculations to preserve precision.
fizzynut 27 days ago [-]
Is it possible to get rid of all the macros TO_FIXED, FROM_FIXED, mult, etc and replace them with a class with the correct constructors / operator overloads?
Then your code doesn't ever need to be aware of the special fixed point math and horrible syntax everywhere and everything just works?
msk-lywenn 27 days ago [-]
Yes it can. I’ve seen it done properly only once though. You still have to pay attention to avoid overflows
jnwatson 27 days ago [-]
FYI, the multiply assert isn't guaranteed to work and can be compiled out by a sufficiently smart compiler. Overflow of signed values is UB; the compiler can assume UB never happens, and may therefore treat the expression in the if as always false.
teo_zero 27 days ago [-]
When I see a snippet like this
int temp = a * b;
ngassert(
abs(temp) >= abs(a),
"overflow!"
);
I wonder what stops the compiler from optimizing away the assert, as it's clearly based on an undefined behavior.
throwawayk7h 27 days ago [-]
Fixed point should really be used way more often than it is. It's much safer, for one thing, and less mysterious. But because C lacks first-class support for fixed point, the whole world has become flopsy.
pjmlp 27 days ago [-]
Besides all the given examples, this was also quite common in J2ME games, given the limitations of the environment, and hardware differences across phones.
djmips 27 days ago [-]
Nit to the author: you say "decimal point" and "decimal digits," but these are actually a binary point and binary digits.
I spent so much of the early stage of my career doing early mobile stuff I practically still think in fixed point, and always have to adjust to floats, for example, fixed point results can be compared exactly while with floats that is not a great idea. TeX uses fixed point entirely because it was reproducible across machines in an era where floating point was not.
You gain the ability to get stable results across machines, but there still necessarily exists a loss of precision, and different implementations of the same algorithm will get different results.
When would you want to compare fixed-point results bitwise though?
Cannot use it efficiently for nearly anything: finance software, science software, engineering software, high quality graphics software... 3d software, pretty much anything that has any range needed or ability to lower errors while doing accumulation of information.
This is exactly why floating point was invented and standardized - fixed point is a failure for most any program, and only can work with much effort only for certain situations.
(I've written tons of fixed point code, numerical libraries across the spectrum from high performance, high quality, tunable quality, arbitrary precision libs, posits and unums, IEEE half-float software implementations, and more, so I do know what I'm talking about).
Indeed, the tools for working with fixed point aren't great. C is a lost cause; the best you can do is name your variables things like velocity_12_4 and manually check that the precision lines up. Rust wasn't great when I tried, though const generics may have resulted in an improvement. C++ was, astonishingly enough, quite good; I made a header-only fixedpoint.h with a templated struct `fixed<type,size,precision>` and all inline operations. I get the impression that Ada would be even better, but I've yet to use it in anger.
And fixed point is an order of magnitude slower.
> Decoding a JPEG for example is a bunch of fixed point math
It can be done that way, but modern decoders use float for accuracy and speed when available. Search libjpeg-turbo [2] codebase for float and surprise yourself. Even the ancient libjpeg [3] uses float when possible.
Why not simply use fixed-point? Because it's worse in nearly every way, and an artifact of computing from the 1970s-1980s when the JPEG pieces were designed (DCT, lots of work culminating in the final JPEG spec in 1992). When JPEG was designed, computers generally didn't have floating-point hardware. The world of computing has changed a bit since then.
[1] https://cdn-resources.ableton.com/80bA26cPQ1hEJDFjpUKntxfqdm...
[2] https://github.com/libjpeg-turbo/libjpeg-turbo
[3] https://www.ijg.org/files/
Not really, I have worked on all those kinds of things professionally, in fixed point, floating point, and arbitrary precision where appropriate. You apparently have not.
> Finance software: if you're using floats, you're doing something horribly wrong. All balances are measured in integer multiples of some quantum; depending on the system, that may be cents, it may be 0.1 cents, or it may be 0.01 cents. Gradations finer than that simply do not exist,
I take it you have never actually worked in finance; I have. The claim is so ludicrously wrong that this is obvious.
Your claim might work for a free app that only handles your checkbook. Actual finance software has to do things like deal with multiple currencies, high-precision compounded calculations, and exchange rates that go to high precision and can handle trillions (4T+ a day goes through exchange markets), and on and on. To show you how ludicrous your claim is, let's do a simple experiment. I'll take a 64-bit double, you take 64-bit fixed-point math, and let's compute compound interest (a tiny, tiny, trivial computation) tables. We'll make each input trivially simple (real code would handle vastly more cases). Here's your spec:
P = beginning principal in pennies, say loans from 0 to $100M (which is small, but I'll try to help your case, which will still fail...)
R = rate to 0.001, say multiplied by 1000 so it's an integer for you.
C = times compounded per year, 1-365 (again trivial, but this helps your cause).
Y = number of years to compound, 1-100 (again smaller than real code must handle, but it helps your case).
Your goal is to compute the amount the principal has grown to each compounding period, and at the end, return the amount rounded to the nearest penny (bankers rounding, or round up, or round down, whatever you can muster).
So let's see your function of form `long scaleFV(long P, long R, long C, long Y)` using fixed point.
A trivially simple version with doubles would look like
Compare both of these to the truth using big float or similar. Trying to do the above with any trivial fixed-point version will fail over 80% of the time (uniformly sampled inputs from the range above). You will likely never find a case where the fixed point succeeds and the trivial double version fails. Try it. Show your code that backs the claim you so confidently (and incorrectly) make above.
And remember this is a trivial, tiny part of the real needs for finance codebases. Once you realize how badly you fail at this simple task where the trivial floating point version works, then please stop claiming things you know nothing about.
> and this matters when you have a regulator asking uncomfortable questions
Oh, I take it you have had this happen? Or did you make this up based on zero experience?
> but at least for PCB design, layout is done in fixed point
In a few systems, like KiCAD, it is, but it certainly isn't for high-end professional systems. Try laying out an ASIC with fixed point, where you might have features at the sub-nm level up to several cm, and you need to track error bounds. And once you need to simulate just about anything (SPICE, etc.), all your fixed point fails.
(Note I use KiCAD a lot for PCB design myself for gadgets I sell; it's good for a few things, but not anywhere as good as a high end PCB design and analysis tool you'd want for high freq stuff and getting past FCC requirements for emissions, which I have also done).
> and avoiding edge cases in your geometry kernel from precision issues makes everything much easier
Yes, easier, but with larger error. There's a reason pro CAD kernels use floating point, not fixed point. Rotating things in fixed point by anything other than 90-degree increments produces errors that are orders of magnitude larger, and soon your squares are not squares and your intersections end up with all sorts of bad behavior (see some neat discussion of this in Matt Pharr's Oscar-winning book PBRT).
And throughout all this, fixed-point is an order of magnitude slower when hardware float is available, such as places KiCAD runs.
> Robotics: maybe this one's just me, but expressing motion control algorithms in fixed point has saved me €1/part on more than a few occasions as a result of being able to use an MCU without hardware floating point. Compared to the €0.20/part saved by muntzing the rest of the circuit, the small amount of additional work was totally worth it.
It is just you. Yes, at some end-point controller you might use fixed point, but the robotics stuff I've done I often also need stuff like inverse kinematics for motion planning (I'd hate to solve Jacobian inverse stuff in fixed point!), done stuff like probabilistic SLAM flavors (also would diverge so badly as fixed point...), use all flavors of Kalman and other filtering to do sensor fusion, used KF and deep learning stuff to get better estimates from 9DOF mag/gyro/accel sensors, and on and on.
So yes, it may be just you, but I don't know anyone working professionally on robotics systems (and I know a few dozen, having worked with them a loooong time, writing, guess what, numerical code for and with them).
> I made a header-only fixedpoint.h ...
Same, but also with a parameter for the underlying type, so I can plug in int32 where available (e.g., ESP32), int16 where it is not (many Microchip parts), or even a custom type like an int64 emulation built on two int32s. I've got templated versions that do naive mults and divs (which it seems most people do), and one with a correct last bit (takes slightly more bit twiddling, but useful if you want a little more precision). And versions that do faster divs for chips with software-emulated division (because for fixed point, since you're going to rescale the answer at the end, you can make your div much faster than a naive div followed by a shift). So yes, I have been down all this.
Remember, please show your fixed point code that handles the trivial financial code task above, or stop claiming things you apparently have not done and do not understand.
For accounting and ERP systems, fixed point is pretty much the universal standard since I entered the industry in the 80's.
Nope, pretty much any common financial software. Excel is most likely the most used software for doing finance of any type - it's floating point. Quicken is the most common personal finance - floating point. I'd guess the majority of personal finance programs do stuff like mortgages and credit card calculations, which will hit the same compound interest failures I listed above if you try to do it in fixed point, so they'd all be floating point.
If ALL you do is addition, then fixed point could work. But it fails at so many other basic accounting tasks that it would be ludicrous to write any modern or even toy program in it any more. Compound interest, mortgages, taxes, wealth planning - all floating point. Top 20 hits on github for finance - all use floating point for the math.
All the top hits for ERP on github - floating point. Top ERP commercial systems [1], list Microsoft Dynamics as #1. Looking at features it's most certainly floating point (it has calculated fields allowing arbitrary probabilities, for example). Second place on list is Syspro. Same thing - looking through their documentation they allow arbitrary spreadsheet like computations, even allowing machine learning stuff to be integrated for computations - this is certainly floating point. Third on list is QT9 ERP. Same result - they have modules called "Finance" that allow arbitrary calculations, which is certainly not done in fixed point. Multiple other modules look like they'd need floating point since fixed point would simply accumulate error too quickly (and be slow).
Alternatively if you have any of these programs, you could pull them apart pretty quickly with Ghidra and you'd likely find all floating point math in them.
While fixed-point has uses, it is not used nearly as much any more as people want to believe, and it's certainly a terrible idea for any finance beyond addition of simple numbers, i.e., a checkbook (which itself is getting outdated). If you cannot even do something as simple as compound interest, then I'd hesitate to call it finance software.
In the 1980s, floating-point hardware was uncommon and IEEE 754 was brand new (it arrived in 1985). Computing and finance have moved so far beyond the 1980s that I doubt much modern greenfield accounting software is fixed point any more.
[1] https://www.forbes.com/advisor/business/software/best-erp-sy...
The code they write is not generally available to the public so I'm not sure what they are doing inside their code. Some probably use java's BigDecimal, some probably wrote their own libraries to handle data types (e.g. SAP).
> If ALL you do is addition, then fixed point could work. But it fails at every other basic accounting task that it would be ludicrous to do any modern or even toy programs in it any more.
How about the basic task of storing 0.1 or 0.01? Fixed point seems pretty good at that, while (binary) float struggles.
Decimal float works too, which is why IBM put DFP units on the POWER and Z CPUs.
Specifically, accounting law is old. Computers were used to automate what the bookkeepers did by hand, which was emphatically not floating point. At the end of an interest period, interest was computed and booked in rounded form. The next period, interest would be computed based on the number in the books, not the "floating point" number that was rounded from the last time interest was booked. More clearly: put 1c into a bank account earning 1% interest compounded daily. After 1000 years, that account will have precisely 1c in it, because there was never enough interest to earn at least one quantum. This is the difference between math and bookkeeping, and between the sort of math that passes for correct in an ERP system versus the sort of math needed by a bank.
Also, fixed point is not slow, at least not on the systems relevant to actual financial infrastructure. It's all on IBM Z, which has hardware instructions for fixed point decimal calculation because if it didn't, it would run like treacle.
(Obviously one must factor out a lot of local considerations, which modern CPUs are full unto overflowing with; I'm kind of looking at a very, very broad average performance question across code bases doing enough different math operations to average out, not whether one particular loop can run theoretically run faster or slower with one or the other.)
[1]: https://stackoverflow.com/questions/2550281/floating-point-v...
There's also SIMD instructions. Modern CPUs have built-in instructions for handling multiple ints or floats as a vector. If you can get your fixed-point values to fit into 8-bit or 16-bit fields instead of 32, then the same-sized vector units can handle more values per instruction.
The SNES and GBA both supported affine transformations, where the elements of the multiplication matrix are fixed point numbers. The Playstation's geometry coprocessor (GTE) used fixed point matrices with quite low 16 bit precision. An emulator feature called PGXP is able to perform these calculations at higher precision.
- Yes, the PS1 had "jittery" vertices because 16 bits is not enough precision
- No, it was not because of using integers, you can use integers (AKA fixed-point) to do 3D just fine. If it had been 16.16 (32 bits total) it would probably look fine.
- No, this isn't the cause of the texture warping, that's because unlike the N64 it only supported affine texture mapping, not perspective-correct texture mapping. The PS1 saw every 3D triangle as just a 2D triangle, and texture mapping in 2D differs from texture mapping in 3D
- Yes, the texture warping is why many of the best-looking PS1 games were basically built on a grid or found other ways to use a large number of small triangles instead of a small number of large triangles.
Other than on retro systems, fixed point is still useful in smaller microcontrollers.
Completely unnecessary for our target platform but deeply satisfying.
So not necessarily completely unnecessary.
https://en.wikipedia.org/wiki/Fractint
At that time, such games were written in assembler, and you had to be very careful to place the instructions for scaling and descaling in the right places, not only to get the final result in the right units (i.e., screen coordinates), but also in intermediate calculations to preserve precision.
Then your code doesn't ever need to be aware of the special fixed point math and horrible syntax everywhere and everything just works?