@r_gigante Okay, thank you for the reply.

# SOLVED Mirroring with Matching or Different Axes

Hi,

I have slightly updated my previous example for the case when the two frames are in a simple rotational relation around one of their standard basis vectors. As already stated, a general solution for this problem is much harder to accomplish and also not really a *'math'* problem, but more a conceptual and algorithmic problem.

There are also other ways to calculate the delta between your two frames in the non-general case. These have different advantages and disadvantages compared to the solution I provided, but they also require certain guarantees about the source and target objects and their frames to work properly (for example, that their vertices occupy the same points in world space).

Cheers,

zipit

@zipit That is very kind of you to revisit this topic, thank you! Also, very cool gizmos

I have considered allowing the user to choose how the axes are different themselves as you have in your example, but I really want to see if calculating the delta is possible first. The reason is that there could be *many* left & right object pairs to be mirrored that have different axis orientations. Can you think of any way to do this?

For example, this is what I have tried:

- *L_Cube*'s local rotation is (45°, 0°, 0°). Save this matrix, reset the rotation to (0°, 0°, 0°) locally, and get the global rotation (-90°, 180°, 0°). We save this matrix and reset back to (45°, 0°, 0°).
- *R_Cube*'s local rotation is (0°, 45°, 0°). Save this matrix, reset the rotation to (0°, 0°, 0°) locally, and get the global rotation (90°, 0°, 0°). We save this matrix and reset back to (0°, 45°, 0°).
- Mirror the values using your code.
- How can we then apply the global rotations we got when the objects were reset, (-90°, 180°, 0°) and (90°, 0°, 0°), as corrections respectively?
- In other words, how do we determine *inverted_result_axis* and *target_adjustment_rotation* from your code based on these values?

Here is what I have tried:

```
import c4d
from c4d import utils

l_diff_x = utils.Rad(-90 + 90)  # difference between x-axes
l_diff_y = utils.Rad(180 - 0)   # difference between y-axes
l_diff_z = utils.Rad(0 - 0)     # difference between z-axes
r_diff_x = utils.Rad(90 - 90)   # difference between x-axes
r_diff_y = utils.Rad(0 - 180)   # difference between y-axes
r_diff_z = utils.Rad(0 - 0)     # difference between z-axes
# Compose the three per-axis rotations; assigning them one after
# another would just overwrite the previous value.
l_correction = (utils.MatrixRotX(l_diff_x) *
                utils.MatrixRotY(l_diff_y) *
                utils.MatrixRotZ(l_diff_z))
r_correction = (utils.MatrixRotX(r_diff_x) *
                utils.MatrixRotY(r_diff_y) *
                utils.MatrixRotZ(r_diff_z))
l_cube.SetMg(l_reflection * l_correction)
r_cube.SetMg(r_reflection * r_correction)
```

This makes sense to me, but it doesn't work. There is something missing... perhaps putting the corrections into the objects' frames, or inverting them. I don't know; I have tried both to no avail.

Thank you!

Hi,

there are some fundamental problems with your code, but, trying to understand the intention behind it, you also seem to have overlooked the major prerequisite of your approach: it would require both objects to be topologically aligned. That was not the case in any of your example files.

Imagine an object *"source"* that you have just duplicated (*"target"*), so that *target* "sits in the same place" as *source*. If you now rotated the frame of *target* with Cinema's axis-mode thingy, its points would occupy the same coordinates in world space as before, but their local coordinates would be different. You could then simply compute the transform between the two frames as `correction = ~source.GetMg() * target.GetMg()`.

If, however, *target* itself had also been rotated (as was the case in your files), then you cannot do this anymore, because two sources of information have been mixed: the orientation of the frame with respect to its vertices and the rotation of the frame with respect to *source*. We do not have any clue how to untangle that (or, more precisely, it is not so easy).
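The aligned case can be checked without Cinema at all. In this pure-Python sketch (the helpers `mat_mul`, `rot_z`, and `inverse` are my own, standing in for `c4d.Matrix` arithmetic), *target*'s global matrix is the source frame composed with an unknown rotation, and inverting the source frame recovers exactly that rotation:

```python
import math

def mat_mul(a, b):
    # Multiply two 3x3 matrices given as lists of rows.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(t):
    # Rotation matrix about the z-axis by angle t (radians).
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def inverse(r):
    # For a pure rotation matrix, the inverse is the transpose.
    return [list(row) for row in zip(*r)]

source_mg = rot_z(math.radians(30))    # frame of "source"
delta = rot_z(math.radians(45))        # the unknown frame rotation
target_mg = mat_mul(source_mg, delta)  # frame of "target"

# The c4d expression `correction = ~source.GetMg() * target.GetMg()`:
correction = mat_mul(inverse(source_mg), target_mg)
# correction is now rot_z(45 deg), i.e. exactly `delta`.
```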

If you do not want to dial in an angle, but also cannot guarantee that the objects are topologically aligned, you could compute the correction transform by letting the user choose a *from-to* axis pair. In pseudo code (i.e. I have written this on my iPad):

```
import math
import c4d

frm_source, frm_target = source.GetMg(), target.GetMg()
# Rotate the source x-axis onto the target y-axis.
if user_choice == "x_source to y_target":
    # The two axes in question.
    a, b = frm_source.v1, frm_target.v2
    # The normal to both axes, which is the axis of rotation. We take
    # the cross product and normalize it (normalization is technically
    # not necessary for unit frame axes, but better safe than sorry ;)
    nrm = (a % b).GetNormalized()
    # The angle between both axes (dot product of the unit vectors).
    theta = math.acos(a.GetNormalized() * b.GetNormalized())
    # We could construct the transform/matrix ourselves, but
    # why should we when Cinema has quaternions for us.
    quat = c4d.Quaternion()
    quat.SetAxis(nrm, theta)
    # Get the rotation matrix for that quaternion.
    transform = quat.GetMatrix()
```
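For readers who want to verify the *from-to* idea outside of Cinema, here is a minimal pure-Python sketch of the same construction, using Rodrigues' rotation formula in place of `c4d.Quaternion` (all helper names are my own):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def rotate(v, axis, theta):
    # Rodrigues' formula: rotate v around the unit vector `axis`
    # by angle theta.
    c, s = math.cos(theta), math.sin(theta)
    kxv = cross(axis, v)
    kdv = dot(axis, v)
    return tuple(v[i] * c + kxv[i] * s + axis[i] * kdv * (1 - c)
                 for i in range(3))

a = normalize((1.0, 0.0, 0.0))   # source x-axis
b = normalize((0.0, 1.0, 1.0))   # some target axis
nrm = normalize(cross(a, b))     # axis of rotation
theta = math.acos(dot(a, b))     # angle between the axes
rotated = rotate(a, nrm, theta)  # now coincides with b
```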

Cheers,

zipit

@zipit Hello, thank you again for your help!

You're right: the objects wouldn't be topologically aligned.

I will have a look into the math you provided here, but I don't understand what the other user choice options would be or what determines an object as being "x_source" or "y_target." I couldn't get it working.

In the meantime, I'm going to look into the solutions you proposed in your testing-the-script.c4d file again. Thank you

@zipit I am finally making progress, thanks to your help! I have implemented the `inverted_result_axis`, `target_adjustment_axis`, and `target_adjustment_rotation`, and it's working. My concern now is that it is confusing for the user to figure out how to get the desired output. I don't understand it all myself.

Could you please help me to understand what *Inverted Result Axis* means? In your code it says:

```
#inverted_result_axis (int): The axis to adjust in a reflected frame to make it conform with Cinema's left-handed matrices.
```

What would cause the need to change the *Inverted Result Axis* value? If there's a way to determine this value based on reflection axis, I'd rather not surface this as an option to the user. I noticed when the axes were the same, on opposite sides of the ZY plane, *Inverted Result Axis* was X, and in my examples where *Target Adjustment Axis* was Y, the *Inverted Result Axis* was Y.

Finally, if I'm able to reduce these options with the *from-to* axis pair option, I'd be very interested if you would explain. As mentioned above, the pseudo code did not make enough sense to me to build out the other `user_choice` options. Could you please explain the *from-to* axis pair options and how this would work with your testing-the-script example?

Hi,

`inverted_result_axis` is similar to the xy, xz, and yz options in Cinema's tool, and related to what we discussed regarding left-handed and right-handed matrices. Reflecting a left-handed matrix (i.e. a Cinema matrix) will always give you a right-handed matrix. To make that result conform with Cinema's matrix orientation again, you have to flip one axis of the result.

Actually, you do not have to, because, as I have shown in my first script and mentioned in the post before, you can feed a right-handed frame into a `c4d.Matrix` constructor (or modify an existing matrix in such a way). Cinema will then just silently flip some axis in your matrix to make it conformant again (which is both a terrible workflow and terrible API design, IMHO).
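Whether a frame is left- or right-handed can be read off the sign of its determinant, which makes the "flip one axis" rule easy to verify in plain Python (no `c4d` needed; `det3` is my own helper):

```python
def det3(m):
    # Determinant of a 3x3 matrix given as three row vectors.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# A frame stored as its three axis vectors v1, v2, v3.
frame = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # det3(frame) == 1

# Reflect the frame across the ZY plane by negating all x components.
# The sign of the determinant flips: the handedness has changed.
reflected = [[-v[0], v[1], v[2]] for v in frame]  # det3(reflected) == -1

# Negate one whole axis (here v1) to restore the original handedness.
fixed = [[-c for c in reflected[0]]] + reflected[1:]  # det3(fixed) == 1
```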

And I agree, the whole process is rather bloated regarding its options. You could technically remove the flipping option and either flip a fixed axis or leave the choice to Cinema, if you do not care about the orientation of the object. The *from-to* approach would not reduce the number of options, but would replace the degree field with another drop-down selection menu. The idea behind this approach is to let the user select an axis in the source and then an axis in the target to which the source axis should be rotated; this implies the transform between both frames.

The only way to cut down on options is, as is almost always the case, to make your code smarter, i.e. to go the route of what I referred to as the hard way. One way to do this could be to choose or compute a characteristic vector from the internal point data (i.e. the attached vertices) of both objects, and then construct a quaternion for each object from that vector and a common arbitrary vector (it does not really matter which). With that you could rotate the objects into and out of an identical neutral orientation (for lack of a better description). But that is only a rough outline; I would actually have to try this myself, and I am not even sure it will work.
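Purely to illustrate the rough outline above (which zipit explicitly flags as untested), here is a minimal sketch assuming one possible choice of characteristic vector, namely the normalized sum of the vertex positions. All names are hypothetical, and the degenerate case of the vector being parallel to the reference is deliberately not handled:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def characteristic_vector(points):
    # One of many possible deterministic choices: the normalized sum
    # of all vertex positions relative to the frame's origin.
    return normalize([sum(p[i] for p in points) for i in range(3)])

def axis_angle_to(v, reference=(0.0, 1.0, 0.0)):
    # Axis and angle that rotate v onto a common, arbitrary reference
    # vector; feeding these into a quaternion per object would move
    # both objects into the same neutral orientation. The degenerate
    # case (v parallel to reference) is ignored here.
    axis = normalize(cross(v, reference))
    theta = math.acos(max(-1.0, min(1.0, dot(normalize(v), reference))))
    return axis, theta
```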

Cheers,

zipit

@zipit **Thank you for all of the replies and explanations!**

I think I'm going to have to go the options way. The idea I had was the one I explained above: getting the axes' differences and applying them to the `target_adjustment_rotation`. I might be able to make it work...let's see.

The Quaternion method you described sounded promising but I wouldn't know how to do it based on your explanation.

Can I connect with you somehow off of the forum? I'd like to send you something as a token of my gratitude for your help.

Hi,

I am happy that I could help. It is very kind of you that you want to express your gratitude, but not necessary.

Happy coding and rendering,

zipit

@zipit I also want to say that the amount of time you contribute here to help out developers is very generous of you. You are doing an amazing job. I would hope that Maxon would actually pay you some retainer fee for your time or at least provide you with a free subscription for all the help you have given everyone here.