Hello @orestiskon,

this is a personal answer of mine; I think @Cairyn has already given you a nice answer, but I would like to clarify some things. And to be clear: your question is formally out of the scope of support, so for further answers you will have to rely on the community.

##### What is floating point precision?

*e to the pi Minus pi, CC-BY-NC 2.5 XKCD*

I cannot unpack this in all detail and am going for an intuitive understanding. For more details I would recommend the excellent Wikipedia article on Floating-point arithmetic.

So, you have probably seen popular examples of floating point precision errors like `print(1.1 + 2.2)`, which will print `3.3000000000000003`, accompanied by devastating statements like *"the vast majority of real numbers cannot be represented in IEEE 754"*. While this is all true, it can be a bit misleading in my opinion.

So, let us try it out ourselves in Python, in order to figure out what that mysterious error is. We can run this code here:

```
# Forces Python to print numbers with up to 51 decimal places, as
# Python would otherwise round for display.
PrintNumber = lambda n: print("{:.51f}".format(n))
n = 0.000000000000000444089209850062616169452667236328125
PrintNumber(4 + n)
```

Which will print the following:

```
4.000000000000000000000000000000000000000000000000000
```

Yikes, that is not correct: Python seems to have completely ignored that we added *n* to four and just returned the number four. So, did we find that pesky floating point error? *n* seems to be some kind of limit which cannot be represented, right? Before we book this as solved, let us try two more calculations:

```
>>> PrintNumber(2 + n)
2.000000000000000444089209850062616169452667236328125
>>> PrintNumber(2 + n + n)
2.000000000000000888178419700125232338905334472656250
```

That is confusing: for the number two, the calculation is correct. And when we then add *n* to that result again, the new result is still correct. So, there seems to be more to floating point precision than just a fixed threshold. To understand what is happening, we need to understand at least roughly how floating point numbers are represented.

Unlike the whole numbers, the real numbers form an uncountable set. While there are infinitely many whole numbers, we can still start counting towards infinity, i.e., count *0, 1, 2, 3, ...*. For the real numbers, which we are trying to represent with floating point numbers, this cannot be done: when we try to start counting from 0, we cannot even name the next element, since there is always a smaller one; e.g., 0.1 is not the next element, because 0.01 is smaller, which is not the next element, because 0.001 is even smaller, and so on. So, we need to pick a certain finite precision with which to represent real numbers on a computer, since we cannot handle infinite precision there.

Floating point numbers can be represented in many ways, but the dominant form is the IEEE 754 standard. These representations come in different bit lengths: 16 bit (half), 32 bit (single), 64 bit (double), and 128 bit (quadruple) precision. Cinema 4D, Python, and most other software predominantly use 64 bit, i.e., double precision, for floating point representation. These representations are composed of a sign bit, an exponent, and a mantissa; for 64 bit their sizes are:

sign: 1 bit

exponent: 11 bit as an unsigned integer

mantissa: 52 explicitly stored bit, yielding 53 bit of precision due to an implicit leading bit
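
To make this tangible, here is a small sketch (the function name is my own) which decomposes a Python float into exactly those IEEE 754 fields using the standard `struct` module:

```
import struct

def double_bits(x):
    """Splits a Python float into the sign, unbiased exponent, and the 52
    explicitly stored mantissa bits of its IEEE 754 double representation."""
    # Reinterpret the eight bytes of the double as a 64 bit unsigned integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                         # 1 sign bit
    exponent = ((bits >> 52) & 0x7FF) - 1023  # 11 exponent bits, bias 1023
    mantissa = bits & ((1 << 52) - 1)         # 52 stored mantissa bits
    return sign, exponent, mantissa

# 3.0 is stored as 1.5 * 2^1: sign 0, exponent 1, mantissa 0.5 -> 2^51.
print(double_bits(3.0))  # (0, 1, 2251799813685248)
```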

Now we will skip some nasty technical details, but we can realize a fundamental thing:

There are always two powers of two for a given real number. One which is the closest power of two which is smaller than the given number and one that is the closest power of two that is bigger.

So, for the number 3, the closest power of two which is smaller would be 2¹, i.e., 2, and the closest power of two which is bigger would be 2², i.e., 4. The information between which powers of two a floating point number lies is stored in the exponent and forms an interval. For the example of 3, the smaller power of two was 2¹ and the bigger one 2², so the interval is [2, 4]. The mantissa is an unsigned integer with a very high precision (52 bits, i.e., 2⁵² steps) which divides that interval into equally spaced representable values. From this follows that the precision of each interval is always the same, 2⁵² steps, but the length of the interval grows exponentially with the distance of its lower boundary from zero:

```
2⁵² mantissa steps = 4503599627370496
Precision for [2¹, 2²]:
    interval length = 2² - 2¹ = 2
    maximum error = 2 / 4503599627370496 = 4.4408920985006261616945266723633e-16
Precision for [2², 2³]:
    interval length = 2³ - 2² = 4
    maximum error = 4 / 4503599627370496 = 8.8817841970012523233890533447266e-16
Precision for [2³, 2⁴]:
    interval length = 2⁴ - 2³ = 8
    maximum error = 8 / 4503599627370496 = 1.7763568394002504646778106689453e-15
...
```
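
We do not have to compute these step sizes by hand. Since Python 3.9, `math.ulp` returns the distance from a float to the next larger representable one, which reproduces the values above:

```
import math

# math.ulp(x) is the gap between x and the next larger representable
# float, i.e., the step size of the interval x lies in.
for e in (1, 2, 3):
    print(f"[2^{e}, 2^{e + 1}]: step size = {math.ulp(float(2 ** e))}")
```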

Wikipedia has a nice visualization of that exponential distribution of precision for this form of floating point representation:

*Visualization of floating point precision loss, CC-SA 4.0 Joeleoj123*

From this we can come to a few conclusions:

- We can explain why `4 + n` in our example from above did not work and `2 + n` did: *n* is exactly the step size of the *[2, 4]* interval, but it is below the step size of the next interval, *[4, 8]*, so the result was rounded down to the closest representable value, *4.0*. IEEE 754 rounds to the closest representable value.
- IEEE 754 can only represent numbers in the interval *[2, 4]* which are the sum of *2* and a multiple of that step size *n*.
- For IEEE 754 doubles, there are *4,503,599,627,370,496* values in the interval *[2⁰, 2¹]*, a range of *1*, with a maximum error of *2.220446049250313e-16*.
- For IEEE 754 doubles, there are also *4,503,599,627,370,496* values in the interval *[2¹⁰²², 2¹⁰²³]*, a range of *4.49423284e307*, with the mind-boggling maximum error of *9.97920155e291*. For comparison: there are approximately *1e82* atoms in the observable universe.
- Especially that last number should make clear that a floating point precision error is not inherently a small value and that many intervals have an error larger than one, which affects our ability to represent integers with floats:
  - Integers from −2⁵³ to 2⁵³ can be represented exactly.
  - Integers between 2⁵³ and 2⁵⁴ round to a multiple of 2.
  - Integers between 2⁵⁴ and 2⁵⁵ round to a multiple of 4.
  - Integers between 2⁵⁵ and 2⁵⁶ round to a multiple of 8.
  - ...
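
That integer behaviour is easy to verify in Python itself; a quick sketch:

```
# Every integer up to 2**53 survives the round trip to float exactly ...
assert float(2 ** 53 - 1) == 2 ** 53 - 1
assert float(2 ** 53) == 2 ** 53

# ... but 2**53 + 1 cannot be represented and rounds to 2**53 ...
print(float(2 ** 53 + 1) == float(2 ** 53))  # True

# ... and between 2**54 and 2**55 all floats are multiples of 4.
print(float(2 ** 54 + 1) % 4)  # 0.0
```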

##### Yikes, how do we deal with that?

There is no definitive answer to this, as already pointed out by @Cairyn. If there were a "Computers hate this: Five easy tricks to avoid floating point errors!", it would be integrated into hardware or compilers. @Cairyn named a few tricks which more or less work in some situations, and there are also more advanced techniques. Some languages also have built-in tricks up their sleeves, like for example Python with `float.as_integer_ratio`, `math.fsum`, and the type `decimal.Decimal`. But the fancy techniques and language tricks do not help much in your case, since the language tricks won't work with `c4d.Vector` and `c4d.Matrix`, and the fancy techniques usually aim more towards comparing and solving things.
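
Just to illustrate what those built-ins do, detached from your concrete case, a small sketch:

```
import math
from decimal import Decimal

values = [0.1] * 10

# Plain summation accumulates a rounding error with each addition ...
print(sum(values))        # 0.9999999999999999
# ... while math.fsum keeps track of the lost low-order bits.
print(math.fsum(values))  # 1.0

# float.as_integer_ratio exposes the exact binary fraction a float stores.
print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968)

# decimal.Decimal computes with exact decimal digits instead.
print(Decimal("1.1") + Decimal("2.2"))  # 3.3
```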

It would have been better if you had shown something more concrete, but I assume you basically have a script which does something like the following:

```
import c4d

def main():
    obj = op.GetObject()
    mg = obj.GetMg()
    mg.off += c4d.Vector(1.1, 2.2, 3.3)
    obj.SetMg(mg)
```

So, each iteration of this script will pile up a bit of error, because the components of the vector we add are not exactly representable as 64 bit floats in the first place, and while the script runs, we will traverse through different intervals of precision. When we do this long enough, we will theoretically land at some time *t* at an offset where the error is larger than the value we add. But when we assume a starting point of the object which is somewhere in a cube with a side length of 100,000 units at the origin, then the maximum combined error for moving 100 units per frame for 1,000,000 frames will be roughly 0.0000146 units. Which follows from:

```
100,000 units lies within [2¹⁶, 2¹⁷] -> max error per addition: 1.4551915228366851806640625e-11
1,000,000 * 1.4551915228366851806640625e-11 ≈ 0.0000146 units
```
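
Since Python 3.9, `math.ulp` lets us confirm that step size directly:

```
import math

# 100,000 lies in [2**16, 2**17]; the step size of that interval is the
# same for every float in it, including 100,000 itself.
print(math.ulp(float(2 ** 16)))  # 1.4551915228366852e-11
print(math.ulp(100_000.0))       # 1.4551915228366852e-11
```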

So, 64 bit floats are relatively robust when it comes to entertainment oriented CGI and its precision needs for visually representing something, as we usually have both sane frame counts and sane world units (which is not always given in games, and why they can struggle with this). You will still run into floating point problems when you, for example, want to test if two vectors are parallel or when you have to solve complex equations. But if you want to improve your script anyway, then the best approach is to replace iteration with interpolation. Your script is also problematic in general, because it couples the world state to the frame rate of the document, i.e., the position of your object at *T=1.0* in a document with 25 fps will differ from the one in a document with 50 fps. An interpolation based approach will not have that problem. How you interpolate can vary; here are two very simple examples:

```
import c4d
import math

# The point the object moves away from.
ID_POINT_START = (c4d.ID_USERDATA, 1)
# The point the object moves towards.
ID_POINT_END = (c4d.ID_USERDATA, 2)
# The time span in seconds it takes to move from start to end.
ID_DURATION = (c4d.ID_USERDATA, 3)

def main():
    """Interpolates linearly between two points.

    This will "snap back" after ID_DURATION has passed.
    """
    # Get the object.
    obj = op.GetObject()
    # Current document time in seconds.
    t = doc.GetTime().Get()
    # The normalized progress [0, 1] within the current cycle.
    dt = math.fmod(t, op[ID_DURATION]) * (1. / op[ID_DURATION])
    # Interpolate the current position and write the value.
    p = c4d.utils.MixVec(op[ID_POINT_START], op[ID_POINT_END], dt)
    obj[c4d.ID_BASEOBJECT_ABS_POSITION] = p
```

```
import c4d

# The point the object moves away from.
ID_POINT_START = (c4d.ID_USERDATA, 1)
# The travel per second of the object as a vector.
ID_TRAVEL = (c4d.ID_USERDATA, 2)

def main():
    """Interpolates linearly with a starting point and a velocity vector.

    This will travel towards infinity.
    """
    # Get the object.
    obj = op.GetObject()
    # Current document time in seconds.
    t = doc.GetTime().Get()
    # Interpolate the current position and write the value.
    p = op[ID_POINT_START] + op[ID_TRAVEL] * t
    obj[c4d.ID_BASEOBJECT_ABS_POSITION] = p
```

These two will also be subject to floating point errors, but you will not carry over the error of previous calculations, since you do not depend on the state of the last frame. These are just two very simple examples; many things can be restated as interpolations.
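
The difference can be demonstrated without Cinema 4D; here is a pure Python sketch of the same idea, with `mix` as a scalar stand-in for `c4d.utils.MixVec`:

```
def mix(a, b, t):
    """Linear interpolation between a and b, a scalar stand-in for MixVec."""
    return a + (b - a) * t

steps = 1_000_000

# Iterative: each frame depends on the previous one, so errors pile up.
pos_iter = 0.0
for _ in range(steps):
    pos_iter += 0.1

# Interpolated: each frame is computed from the time alone.
pos_lerp = mix(0.0, 100_000.0, 1.0)

print(pos_lerp)              # 100000.0
print(pos_iter == pos_lerp)  # False, the iterated value has drifted
```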

Cheers,

Ferdinand