@ferdinand
Thanks, that is exactly correct. In fact I did not need to construct an orthonormal vector; all one needs to do is set the object's y-axis equal to the camera's y-axis, and get the x-axis by cross product.
I think I correctly handle the pathological situation after looking at the limiting cases.
The modified function is below.
Randy
import c4d

EPSILON = 1E-5  # The floating point precision we are going to assume, i.e., 0.00001

# This is based on the python Look at Camera example:
# https://github.com/PluginCafe/cinema4d_py_sdk_extended/tree/master/plugins/py-look_at_camera_r13

def GetLookAtTransform(host: c4d.Matrix, target: c4d.Matrix, reverseZ=True) -> c4d.Matrix:
    """Returns a transform which orients its z/k/v3 component from #host to #target.
    """
    # Get the position of both transforms.
    p: c4d.Vector = host.off
    q: c4d.Vector = target.off

    # The normalized offset vector between 'host' (the object to be reoriented) and 'target'
    # (the camera) will become the z-axis of the modified frame for the object.
    #
    # If reverseZ is True, the new z-axis points from camera toward object; if False, the reverse.
    # I turn reverseZ on by default, as my initial application is text splines, which are meant
    # to be viewed looking down the object z-axis.
    #
    # In the original implementation
    # (https://github.com/PluginCafe/cinema4d_py_sdk_extended/tree/master/plugins/py-look_at_camera_r13)
    # the modified y-axis is computed using the global y-axis, and this does not consistently
    # keep the text upright in the view of the camera. Instead, simply take the object y-axis
    # to be the same as the camera y-axis.
    #
    # In the pathological case of the new object z-axis being parallel to the camera y-axis:
    #   if reverseZ: set object z = camera y,  object y = -camera z
    #   else:        set object z = -camera y, object y = -camera z
    if reverseZ:
        z: c4d.Vector = ~(p - q)
        if 1. - abs(z * target.v2) > EPSILON:
            y = target.v2
        else:
            z = target.v2
            y = -target.v3
    else:
        z: c4d.Vector = ~(q - p)
        if 1. - abs(z * target.v2) > EPSILON:
            y = target.v2
        else:
            z = -target.v2
            y = -target.v3

    # Get x using the cross product.
    x: c4d.Vector = ~(y % z)

    # Return the frame (x, y, z) plus the offset of #host as the look-at transform.
    return c4d.Matrix(off=p, v1=x, v2=y, v3=z)
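For anyone who wants to try it without the plugin, here is a rough sketch of how the function might be driven from a Python tag's main(), assuming GetLookAtTransform is defined in the same tag script; the fallback to the editor camera is my addition, not something from the SDK example:

def main() -> None:
    # 'op' is the Python tag; the object carrying the tag is the one to reorient.
    obj = op.GetObject()
    doc = obj.GetDocument()
    if doc is None:
        return
    bd = doc.GetRenderBaseDraw()
    if bd is None:
        return
    # GetSceneCamera() returns None when the default editor camera is active.
    cam = bd.GetSceneCamera(doc) or bd.GetEditorCamera()
    if cam is None:
        return
    # Build the look-at frame and write it back as the object's global matrix.
    obj.SetMg(GetLookAtTransform(obj.GetMg(), cam.GetMg(), reverseZ=True))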
I have to add an update to this. While I marked this as 'solved', initially I was not looking carefully at the behavior in the view.
While the text stayed 'more or less' vertical with respect to the camera orientation, sometimes it was far off, depending on the specific camera angle.
I looked at the 'Look at Camera' python example, and the approach there is to set z as the normalized displacement from host to target, set 'up' as the global y-axis, then generate normalized x and y by taking cross products.
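Spelled out as a standalone snippet, that construction is roughly the following (my paraphrase of the idea, not the example's exact code, and with a function name of my own):

import c4d

def frame_from_global_up(host: c4d.Matrix, target: c4d.Matrix) -> c4d.Matrix:
    # z points from host toward target; the global y-axis serves as the 'up' hint.
    z = ~(target.off - host.off)
    up = c4d.Vector(0, 1, 0)
    x = ~(up % z)  # cross product: x is perpendicular to both 'up' and z
    y = ~(z % x)   # second cross product completes the orthonormal frame
    # Note: no guard here for z (anti)parallel to 'up'; the SDK example adds an EPSILON check.
    return c4d.Matrix(off=host.off, v1=x, v2=y, v3=z)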
I made a modified plugin, and changed the math as follows:
def GetLookAtTransform(host: c4d.Matrix, target: c4d.Matrix) -> c4d.Matrix:
    """Returns a transform which orients its z/k/v3 component from #host to #target.
    """
    # Get the position of both transforms.
    p: c4d.Vector = host.off
    q: c4d.Vector = target.off

    # The normalized delta vector will become the z-axis of the frame.
    # z: c4d.Vector = ~(q - p)
    # I reversed this as my application is a text spline, where you look down the negative
    # z-axis of the object in its default orientation.
    z: c4d.Vector = ~(p - q)

    # We compute an up-vector which is not (anti)parallel to #z.
    # up: c4d.Vector = c4d.Vector(0, 1, 0) \
    #     if 1. - abs(z * c4d.Vector(0, 1, 0)) > EPSILON else \
    #     c4d.Vector(EPSILON, 1, 0)
    #
    # Instead, take the camera y-axis and orthonormalize it with respect to our z
    # (a Gram-Schmidt step: remove the component of the camera y-axis along z, then normalize).
    y = ~(target.v2 - (target.v2 * z) * z)

    # x: c4d.Vector = ~(up % z)
    # Get x using the cross product.
    x: c4d.Vector = ~(y % z)
    # y: c4d.Vector = ~(z % x)

    # Return the frame (x, y, z) plus the offset of #host as the look-at transform.
    return c4d.Matrix(off=p, v1=x, v2=y, v3=z)
This now keeps the text orientation nailed, aligned with the 'up' axis of the camera.
I have not handled the pathological case of the initial z-axis lying directly along the camera y-axis; I will take a look at that.
Sorry if this is solved somewhere else and I did not see it. If not, I will try to submit the plugin.
Thanks,
Ah Ok, thanks Ferdinand.
At least I was partly on base.
@cairyn
Thanks very much, your solution and Ferdinand's below both work.
Ferdinand, thanks for the detailed reply; it is working correctly now. I did not realize the importance of the priority setting.
I may have been tricked by the BaseDraw not being defined yet; perhaps that is why my initial assignment of the camera as target did not 'take' and I needed to do it again.
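In case it helps anyone who lands here later, this is a minimal sketch of how I understand the priority can also be set from script, via the tag's EXPRESSION_PRIORITY parameter; treating the 'Camera Dependent' flag as the relevant bit is my assumption about what Ferdinand was referring to:

import c4d

# 'constrainttag' is the constraint tag created in my code further down the thread.
prio = constrainttag[c4d.EXPRESSION_PRIORITY]  # a c4d.PriorityData instance
if prio is not None:
    # 'Camera Dependent' re-evaluates the expression when the camera changes (my understanding).
    prio.SetPriorityValue(c4d.PRIORITYVALUE_CAMERADEPENDENT, True)
    constrainttag[c4d.EXPRESSION_PRIORITY] = prio
    c4d.EventAdd()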
About the 'optional stuff' - just misinterpretation on my part. I see 'op' in code examples like the one you pointed me to,
https://plugincafe.maxon.net/topic/14117/gimbal-lock-safe-target-expression/2
which seems to come from a global context, so I assumed it represented a selected object. I also see Optional being used to wrap return values, which I gather implements something akin to optional variables in Swift. So I assumed 'op' was shorthand for an optional value.
I have been coding long enough that I can conflate and confabulate with the best of them. ;-(
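For anyone else tripping over the same terminology: Optional[...] is just the standard Python typing hint meaning 'this value may be None', and is unrelated to the predefined 'op' variable. A tiny illustration (the helper name is mine, purely for the example):

import c4d
from typing import Optional

def get_scene_camera(doc: c4d.documents.BaseDocument) -> Optional[c4d.BaseObject]:
    """Returns the scene camera, or None if there is no view or no camera to query."""
    bd = doc.GetRenderBaseDraw()
    return bd.GetSceneCamera(doc) if bd else None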
Cairyn's solution above also works, by the way.
Thanks!! I will mark as solved
Ferdinand thanks for the explanation and suggestions.
I tried the constraint tag approach first. (I understand and prefer the 'programmatic approach' you also shared, but don't understand the 'Optional' stuff at this point, which I assume involves interactive object selection??)
When I paste the code below into the console, I get the text with its associated constraint, but no tracking behavior (I can orbit around the text).
I am puzzled that when I examine the constraint tag in the editor, the target is not initially set for either aim or the up axis. If I simply run the assignments again (constrainttag[20001] = camera, etc.) they immediately show up in the editor, but there is still no tracking behavior.
I am clearly missing one or more important points.
Thanks for any advice, Randy
textheight = 10.

extrude = c4d.BaseObject(5116)
extrude[c4d.EXTRUDEOBJECT_EXTRUSIONOFFSET] = textheight/2.
extrude.SetBit(c4d.BIT_ACTIVE)

text = c4d.BaseObject(5178)
text[c4d.PRIM_TEXT_TEXT] = 'Hello'
text[c4d.PRIM_TEXT_ALIGN] = 1
text[c4d.PRIM_TEXT_HEIGHT] = textheight
text.InsertUnder(extrude)
doc.InsertObject(extrude)

camera = doc.GetRenderBaseDraw().GetSceneCamera(doc)

constrainttag = c4d.BaseTag(1019364)
# set objects for aim and up axis
constrainttag[20001] = camera
constrainttag[40001] = camera
constrainttag[c4d.ID_CA_CONSTRAINT_TAG_UP] = 1
constrainttag[c4d.ID_CA_CONSTRAINT_TAG_AIM] = 1
# aim -Z
constrainttag[20004] = 5
# up axis +Y
constrainttag[40004] = 1
# up axis when aiming along
constrainttag[40005] = 5
text.InsertTag(constrainttag)
Hi,
I am generating a scene with a python script, and want to add 3D text to label some positions.
I want the labels to maintain a readable orientation when I change the camera position, so I added the 'Look at Camera' tag to the text. This 'works', but the text is always reversed, i.e. the initial view of the text is from the back, and the tag then serves to maintain the reversed view of the text for any camera orientation.
When I create text in the editor, the default orientation puts the text in the XY plane, reading left-to-right along X when looking down the Z axis, so I made sure the camera was looking down Z when the scene was built, but the text is still reversed. Further experimentation leads me to believe that the initial camera position has no effect on the text orientation. (There is also a 'Reverse' property on the text object, but it does not seem to have any effect.)
I am appending the code I am using below.
Thanks in advance,
extrude = c4d.BaseObject(5116)
extrude[c4d.EXTRUDEOBJECT_EXTRUSIONOFFSET] = textheight/2.
extrude.SetBit(c4d.BIT_ACTIVE)

text = c4d.BaseObject(5178)
text[c4d.PRIM_TEXT_TEXT] = 'my label'
## center
text[c4d.PRIM_TEXT_ALIGN] = 1
text[c4d.PRIM_TEXT_HEIGHT] = textheight
# this experiment had no effect
#text[c4d.PRIM_REVERSE] = 1

ttag = c4d.TextureTag()
ttag.SetMaterial(sceneMaterials['H'])
text.InsertTag(ttag)

looktag = c4d.BaseTag(1001001)
text.InsertTag(looktag)

text.SetBit(c4d.BIT_ACTIVE)
text.InsertUnder(extrude)

## this gets target position in scene
pos = numpy.array(mol['atomsInOrder'][theIDX][1])
# convert left-handed coords
pos[2] = -pos[2]
width = text.GetRad()[1]
# compute displacement to put center of text box at CA position
disp = pos - numpy.array((0., textheight/2., 0.))
#
extrude.SetAbsPos(disp.tolist())
doc.InsertObject(extrude)
That is great! I did not imagine that working so simply, that's a wonderful feature.
Regarding the 'broken' part: I did 'something' and the spheres stopped appearing in the viewport, only the 'tracers'. I cannot reproduce that; I must have screwed something up. All is right with the world now.
In case it helps others, the demo code below now does everything I currently need. It programmatically changes the position of the emitter, and, following this discussion (https://forums.cgsociety.org/t/c4d-animation-via-python/1546556), it also changes the color over the course of each 10-frame segment.
Thanks so much for your help, I am really sold on 4D.
import c4d
import math
import random

mat = c4d.BaseMaterial(c4d.Mmaterial)
mat.SetName('emitter sphere')
mat[c4d.MATERIAL_COLOR_COLOR] = c4d.Vector(0.8, 0.0, 0.0)

## Get RGB tracks for continuous color update
redtrack = c4d.CTrack(mat, c4d.DescID(c4d.DescLevel(c4d.MATERIAL_COLOR_COLOR, c4d.DTYPE_COLOR, 0),
                                      c4d.DescLevel(c4d.VECTOR_X, c4d.DTYPE_REAL, 0)))
mat.InsertTrackSorted(redtrack)
greentrack = c4d.CTrack(mat, c4d.DescID(c4d.DescLevel(c4d.MATERIAL_COLOR_COLOR, c4d.DTYPE_COLOR, 0),
                                        c4d.DescLevel(c4d.VECTOR_Y, c4d.DTYPE_REAL, 0)))
mat.InsertTrackSorted(greentrack)
bluetrack = c4d.CTrack(mat, c4d.DescID(c4d.DescLevel(c4d.MATERIAL_COLOR_COLOR, c4d.DTYPE_COLOR, 0),
                                       c4d.DescLevel(c4d.VECTOR_Z, c4d.DTYPE_REAL, 0)))
mat.InsertTrackSorted(bluetrack)

doc.InsertMaterial(mat)

sph = c4d.BaseObject(5160)
rad = sph.GetRad()
particleRad = 2.0
scale = particleRad/rad[0]
sph.SetAbsScale((scale, scale, scale))
ttag = c4d.TextureTag()
ttag.SetMaterial(mat)
sph.InsertTag(ttag)
sph.SetBit(c4d.BIT_ACTIVE)

emitter = c4d.BaseObject(5109)
emitter.SetBit(c4d.BIT_ACTIVE)
doc.InsertObject(emitter)
sph.InsertUnder(emitter)

# emit particles at rate 500
emitter[c4d.PARTICLEOBJECT_BIRTHEDITOR] = 500
emitter[c4d.PARTICLEOBJECT_BIRTHRAYTRACER] = 500
emitter[c4d.PARTICLEOBJECT_RENDERINSTANCES] = 500
emitter[c4d.PARTICLEOBJECT_SIZEX] = 0.2
emitter[c4d.PARTICLEOBJECT_SIZEY] = 0.2
emitter[c4d.PARTICLEOBJECT_TYPE] = c4d.PARTICLEOBJECT_TYPE_PYRAMID
emitter[c4d.PARTICLEOBJECT_ANGLEH] = 2 * math.pi
emitter[c4d.PARTICLEOBJECT_ANGLEV] = math.pi
emitter[c4d.PARTICLEOBJECT_SHOWOBJECTS] = True

fps = 24
emitter[c4d.PARTICLEOBJECT_START] = c4d.BaseTime(0, fps)
emitter[c4d.PARTICLEOBJECT_STOP] = c4d.BaseTime(500, fps)
emitter[c4d.PARTICLEOBJECT_LIFETIME] = c4d.BaseTime(5, fps)

## Animate 500 frames, new position every ten frames,
## transition to next color (cycle red->green->blue)
## First set key frames for color change
nextRGB = [1., 0., 0.]
redcurve = redtrack.GetCurve()
greencurve = greentrack.GetCurve()
bluecurve = bluetrack.GetCurve()
for segment in range(50):
    frame = 50*segment
    redkey = redcurve.AddKey(c4d.BaseTime(frame, fps))['key']
    redkey.SetValue(redcurve, nextRGB[0])
    greenkey = greencurve.AddKey(c4d.BaseTime(frame, fps))['key']
    greenkey.SetValue(greencurve, nextRGB[1])
    bluekey = bluecurve.AddKey(c4d.BaseTime(frame, fps))['key']
    bluekey.SetValue(bluecurve, nextRGB[2])
    #
    # rotate RGB values
    nextRGB.append(nextRGB.pop(0))

## run animation
frame = 0
pos = c4d.Vector(0, 0, 0)
emitter.SetAbsPos(pos)
doc.SetTime(c4d.BaseTime(frame, fps))
c4d.CallCommand(12410)
for segment in range(50):
    frame = 10 * segment
    mat.Update(True, True)
    sph.Message(c4d.MSG_UPDATE)
    for k in range(3):
        pos[k] = -30 + 60*random.random()
    emitter.SetAbsPos(pos)
    doc.SetTime(c4d.BaseTime(frame, fps))
    c4d.CallCommand(12410)  # record
@ferdinand Thanks so much for the detailed reply. At this point I do not need the particles to interact with any forces, this is for display only. I will investigate your links when I want to do something more complex, which I am sure I will.
I have gotten things partly working at this point. I don't know if it is appropriate owing to its length, but I am posting demo code below. It just creates a particle emitter and programmatically moves it around.
Unfortunately, when I render the animation, by default only a fraction of the objects show up, because I don't know the property name corresponding to 'Birthrate Renderer'. I can manually set this in the dialog and all is well, but I cannot figure out what the correct symbol is. PARTICLEOBJECT_RENDERINSTANCES does not seem to do it, and if I try to set PARTICLEOBJECT_BIRTHRAYTRACER it seems to break things.
Can you direct me to the correct attribute name? Thanks! Randy
import c4d
import math
import random

mat = c4d.BaseMaterial(c4d.Mmaterial)
mat.SetName('emitter sphere')
mat[c4d.MATERIAL_COLOR_COLOR] = c4d.Vector(0.8, 0.0, 0.0)
doc.InsertMaterial(mat)

sph = c4d.BaseObject(5160)
rad = sph.GetRad()
particleRad = 2.0
scale = particleRad/rad[0]
sph.SetAbsScale((scale, scale, scale))
ttag = c4d.TextureTag()
ttag.SetMaterial(mat)
sph.InsertTag(ttag)
sph.SetBit(c4d.BIT_ACTIVE)

emitter = c4d.BaseObject(5109)
emitter.SetBit(c4d.BIT_ACTIVE)
doc.InsertObject(emitter)
sph.InsertUnder(emitter)

emitter[c4d.PARTICLEOBJECT_BIRTHEDITOR] = 500
emitter[c4d.PARTICLEOBJECT_RENDERINSTANCES] = 500
emitter[c4d.PARTICLEOBJECT_SIZEX] = 0.2
emitter[c4d.PARTICLEOBJECT_SIZEY] = 0.2
emitter[c4d.PARTICLEOBJECT_TYPE] = c4d.PARTICLEOBJECT_TYPE_PYRAMID
emitter[c4d.PARTICLEOBJECT_ANGLEH] = 2 * math.pi
emitter[c4d.PARTICLEOBJECT_ANGLEV] = math.pi
emitter[c4d.PARTICLEOBJECT_SHOWOBJECTS] = True

fps = 24
emitter[c4d.PARTICLEOBJECT_START] = c4d.BaseTime(0, fps)
emitter[c4d.PARTICLEOBJECT_STOP] = c4d.BaseTime(500, fps)
emitter[c4d.PARTICLEOBJECT_LIFETIME] = c4d.BaseTime(5, fps)

## Animate 500 frames, new position every ten frames
frame = 0
pos = c4d.Vector(0, 0, 0)
emitter.SetAbsPos(pos)
doc.SetTime(c4d.BaseTime(frame, fps))
c4d.CallCommand(12410)
for segment in range(50):
    frame = 10 * segment
    for k in range(3):
        pos[k] = -30 + 60*random.random()
    emitter.SetAbsPos(pos)
    doc.SetTime(c4d.BaseTime(frame, fps))
    c4d.CallCommand(12410)  # record
I was looking for examples of creating particle systems using the python SDK. The articles I find are about accessing particle information after the system is created.
I need to create the systems programmatically and adjust properties (emitter rate and color) dynamically. Are there any examples available?
Thanks!