@ferdinand Thank you, your example worked great. Now I understand how to use those symbols. Lowering the angle from 5 degrees to 1 solved the issue. Thanks!
Example for anyone curious:
spline = c4d.SplineObject(total_points, c4d.SPLINETYPE_CUBIC)
spline[c4d.SPLINEOBJECT_INTERPOLATION] = c4d.SPLINEOBJECT_INTERPOLATION_ADAPTIVE
spline[c4d.SPLINEOBJECT_ANGLE] = c4d.utils.DegToRad(1.)
Basically, what I'm looking to do is increase the number of intermediate points. It seems like the default is to use "Adaptive" with an angle of 5 degrees. However, I would prefer for it to be lower (such as one degree), but I don't see any documentation on the SplineObject page on how to do this with the Python generator. Any ideas?
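For intuition, "Adaptive" mode can be thought of as subdividing the curve until the bend at each intermediate point drops below the angle threshold, which is why lowering the angle produces more intermediate points. Below is a toy sketch of that idea in plain Python. This is not the actual c4d algorithm; `adaptive_sample` and all of its parameters are made up purely for illustration:

```python
import math

def adaptive_sample(f, t0, t1, max_angle_deg, depth=0, max_depth=12):
    """Recursively sample a 2D curve f(t) -> (x, y) until the bend at the
    midpoint of each span is below max_angle_deg degrees.
    Toy model of angle-based adaptive interpolation, not c4d's implementation."""
    def turn_angle(a, m, b):
        # Angle between the chords a->m and m->b, in degrees.
        v1 = (m[0] - a[0], m[1] - a[1])
        v2 = (b[0] - m[0], b[1] - m[1])
        n1 = math.hypot(*v1)
        n2 = math.hypot(*v2)
        if n1 == 0.0 or n2 == 0.0:
            return 0.0
        c = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        return math.degrees(math.acos(max(-1.0, min(1.0, c))))

    tm = (t0 + t1) / 2.0
    a, m, b = f(t0), f(tm), f(t1)
    if depth >= max_depth or turn_angle(a, m, b) <= max_angle_deg:
        return [a, b]
    left = adaptive_sample(f, t0, tm, max_angle_deg, depth + 1, max_depth)
    right = adaptive_sample(f, tm, t1, max_angle_deg, depth + 1, max_depth)
    return left + right[1:]  # drop the duplicated midpoint
```

Sampling a quarter circle with a 1-degree threshold yields noticeably more points than a 5-degree threshold, mirroring the behavior of the SPLINEOBJECT_ANGLE setting.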
I found this page here, but I don't understand how to use that with the SplineObject.
I am using a Python generator to create a spline, and I set the interpolation to cubic. However, when the points of the spline are spread out enough, even with cubic interpolation it looks more like piecewise linear segments when I get closer to it. Here is a video showing what I'm observing:
As you can see, it is certainly smoother than when I set the interpolation to linear, but even when it is cubic, when I zoom in enough, it still looks somewhat jagged, enough to be noticeable. At the end of the video I circle the cursor around the joints on the cubic spline.
Is there some way I can make the cubic interpolation smoother? I would prefer that the cubic spline didn't look like a series of connected straight line segments.
@ferdinand Your flat UV projection solution worked great! Although I had to change the y to z and subtract the z range mappings from 1 in order to get it to work correctly. I will include a download in case anybody wants to see the details.
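For reference, the adjustment described above (using z instead of y, and subtracting the z mapping from 1) amounts to a simple linear remap. A sketch in plain Python; the function name and the bounds arguments are placeholders, not part of any c4d API:

```python
def flat_uv(px, pz, min_x, max_x, min_z, max_z):
    """Top-down flat projection of a point (px, pz) into UV space:
    x maps to u, z maps to v, and v is flipped (subtracted from 1)."""
    u = (px - min_x) / (max_x - min_x)
    v = 1.0 - (pz - min_z) / (max_z - min_z)
    return u, v
```

With bounds 0..10 on both axes, the corner (0, 0) lands at UV (0, 1) and the corner (10, 10) at UV (1, 0), which is the flip that made the projection line up.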
Now I just have a few more questions:
But it works when I render it:
Is there any way to get the Python shader to work in the viewport (not be completely black)? Or is it just too slow for that?
The Python shader is very slow to render. If I were to write the shader in C++ and compile it, would you expect a significant speedup? The code would be nearly identical to the Python "complexShader" code I presented above, but in C++, using std::complex instead of Python's cmath module. Would this render noticeably quicker than doing it in Python?
If I wrote a plugin in C++ to generate the geometry instead of doing it in Python, would you expect a significant speedup? I'm thinking so just because, from personal experience, I know nested for loops in Python are quite slow. The code I have written above slows down fast as the "resolution" variable increases. But I'm curious to hear your thoughts.
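One way to quantify that scaling before committing to C++ is to time the pure-Python inner loop in isolation, outside Cinema 4D. A standard-library-only sketch; `modulus_grid` and `cost` are illustrative names, and this measures only the Python-side math, not anything c4d does with the result:

```python
import cmath
import timeit

def modulus_grid(resolution):
    """|exp(x + iy)| sampled on a resolution x resolution grid,
    mirroring the nested-loop structure of a generator's main()."""
    grid = []
    for j in range(resolution):
        row = []
        y = j / resolution
        for i in range(resolution):
            x = i / resolution
            row.append(abs(cmath.exp(complex(x, y))))
        grid.append(row)
    return grid

def cost(resolution, repeats=3):
    """Best-of-n wall-clock seconds for one full grid evaluation."""
    return min(timeit.repeat(lambda: modulus_grid(resolution),
                             number=1, repeat=repeats))
```

Since the loop is O(resolution^2), doubling the resolution roughly quadruples the time, which matches the observation that the generator "slows down fast" as the resolution variable increases.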
It seems like there must be more to it or I'm doing something wrong. I added the following right above the return node line of code in the Python generator:
uvwTag = node.MakeVariableTag(c4d.Tuvw, poly_count)
for ii in range(poly_count):
    q = ii / poly_count
    uvwTag.SetSlow(ii, c4d.Vector(q, q, 0), c4d.Vector(q, q, 0), c4d.Vector(q, q, 0), c4d.Vector(q, q, 0))
But I'm still getting the red error color. I also have the Python plugin print "hi" when it errors, and it is printing "hi". I haven't done the math to correctly set the values for the UV coordinates yet (that "q" number is just a placeholder/test to see if it errors, and it does).
Regardless of what I set the UVW coordinates to be at each point, I shouldn't be getting the error color. So do you know what I'm missing?
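For what it's worth, the per-corner UV values themselves can be worked out in plain Python, independent of the tag API. A sketch for an assumed regular grid of quads; `grid_poly_uvs` is a made-up helper, and the a/b/c/d corner ordering is an assumption that would need to match the polygon's vertex order:

```python
def grid_poly_uvs(grid_w, grid_h):
    """UV quadruples for each quad of a grid_w x grid_h grid of quads.

    Returns one 4-tuple of (u, v) pairs per polygon, in the order
    a=bottom-left, b=bottom-right, c=top-right, d=top-left, with
    u and v running 0..1 across the whole grid."""
    uvs = []
    for y in range(grid_h):
        for x in range(grid_w):
            u0, u1 = x / grid_w, (x + 1) / grid_w
            v0, v1 = y / grid_h, (y + 1) / grid_h
            uvs.append(((u0, v0), (u1, v0), (u1, v1), (u0, v1)))
    return uvs
```

The key difference from the placeholder loop above is that each polygon gets four distinct corner UVs rather than the same value for all four corners, which would otherwise collapse the mapping to a point per polygon.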
@m_magalhaes Thank you, that Python fresnel example was extremely helpful. I got very close with the following:
import math
import cmath
import c4d
from c4d import plugins, bitmaps, utils

# Warning: please obtain your own plugin ID from http://www.plugincafe.com (I didn't)
PLUGIN_ID = 1000001  # placeholder from the 1000001-1000010 test range

class complexShader(plugins.ShaderData):
    # If a Python exception occurs during the calculation of a pixel,
    # that pixel is colorized in red for debugging purposes.

    def Output(self, sh, cd):
        if cd.vd:  # shader is computed in 3D space
            pi = math.pi
            u = cd.p.x
            v = cd.p.y
            tt = cd.t  # /28.0
            osc = math.sin(2 * pi * tt)
            min_y = -2 * pi
            max_y = 2 * pi
            min_x = -2 * pi
            max_x = 2 * pi
            # To view correctly when applied to a plane in c4d, have the x axis
            # pointing right, the z axis pointing up, and the y axis pointing
            # at the camera.
            x = c4d.utils.RangeMap(u, 0, 1, min_x, max_x, clampval=True)
            y = c4d.utils.RangeMap(1 - v, 0, 1, min_y, max_y, clampval=True)
            z = x + y * 1j
            out = cmath.exp(z)
            # Wrap at pi to match Mathematica's color mapping
            # (Mathematica: -pi = cyan, 0 = red, pi = cyan).
            angle = cmath.phase(out) / pi % 2.0
            hue = c4d.utils.RangeMap(angle, 0.0, 2.0, 0, 1, clampval=True)
            colorHSV = c4d.Vector(hue, 1.0, 1.0)
            colorRGB = c4d.utils.HSVToRGB(colorHSV)
            return colorRGB
        else:  # shader is computed in 2D space
            return c4d.Vector(0.0)  # 2D case not handled in this excerpt

    def FreeRender(self, sh):
        # Free any resources used for the precalculated data from InitRender().
        pass

if __name__ == "__main__":
    IDS_COMPLEX_SHADER = 10001  # string resource, must be manually defined
    plugins.RegisterShaderPlugin(PLUGIN_ID, plugins.GeLoadString(IDS_COMPLEX_SHADER), 0, complexShader, "", 0)
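Since the phase-to-hue mapping in that excerpt is plain complex arithmetic, it can be sanity-checked outside Cinema 4D. In this self-contained sketch, `range_map` reimplements the linear remap that `c4d.utils.RangeMap` performs for this use, and `phase_hue` is an illustrative name:

```python
import cmath
import math

def range_map(value, in_min, in_max, out_min, out_max, clamp=True):
    """Linear remap of value from [in_min, in_max] to [out_min, out_max]."""
    t = (value - in_min) / (in_max - in_min)
    if clamp:
        t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

def phase_hue(x, y):
    """Hue in [0, 1) encoding the argument of exp(x + iy).

    Wrapped at pi so that -pi and pi both land on cyan and 0 lands on
    red, matching Mathematica's default domain coloring."""
    out = cmath.exp(complex(x, y))
    angle = cmath.phase(out) / math.pi % 2.0
    return range_map(angle, 0.0, 2.0, 0.0, 1.0)
```

For example, a point on the positive real axis (phase 0) maps to hue 0 (red), and a purely imaginary result (phase pi/2) maps to hue 0.25.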
This works on a plane. However, when I apply the same material to the Python generator, it returns the error color (red):
I'm assuming this is because the geometry generated in the Python generator doesn't have UVs. Is that correct? If so, what is the proper way to add them? If not, why does it work on the plane but not the Python generator (which contains the same code as my first post).
Here's the sample .c4d file and the .pyp plugin in case anybody wants to test it for themselves. I was testing this on R19:
I'm struggling to understand exactly what U and V are in the Formula Effector. The documentation states:
This is where you enter the formula for all effects apart from Manual. u and v are parameters that run from 0 to 1 along the horizontal and vertical axes respectively.
But this definition doesn't make it clear how that's different from the x or y axes. If I apply a Formula Effector to a sphere, here are some results I get:
When I just use "U" in the formula:
When I just use "V" in the formula:
I don't see the relationship between "the horizontal and vertical axes" in this case.
Does somebody have a more precise definition of what u and v represent, specifically in "spherical" mode? What do they range from (is it 0 to 1)?
@m_magalhaes This helps a lot, but unfortunately I do need this to work in the render, and ideally it should be accessible via other render engines (such as Redshift). If it is only accessible through the standard renderer, that is OK too. So it looks like your Python tag solution won't work, since that is only visible in the viewport. It seems like option 1 that you listed ("Create your own shader that will be able to translate the pixel position to the color you need. It will work for both viewport and render") is what I will need to do. The question is: is there a way to make the shader determine its color based on Python code?
Once again, the main thing to realize here is that the function that I'm plotting can depend on time (even though it doesn't in the simplified example I gave above), and the color will also change with time. So whatever code is generating the surface needs to also generate the color since the y height represents the modulus of f(x,z) (absolute value (radius) of the complex number) and the hue represents the argument of f(x,z) (angle of the complex number).
It's OK if I have to copy and paste the code into two different places. Whatever generates the geometry needs the modulus of f(x,z) and whatever generates the color needs the argument of f(x,z).
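To keep the two copies consistent, both quantities can be derived from one shared function. A minimal sketch in plain Python, where `f` is a made-up placeholder for the actual time-dependent complex function:

```python
import cmath

def f(x, z, t):
    """Placeholder time-dependent complex function f(x, z) at time t."""
    return cmath.exp(complex(x, z) * (1.0 + 0.1 * t))

def height(x, z, t):
    """The y height of the surface: the modulus |f(x, z)|."""
    return abs(f(x, z, t))

def hue_angle(x, z, t):
    """The argument of f(x, z) in radians, which drives the hue."""
    return cmath.phase(f(x, z, t))
```

The geometry generator would call `height` and the color code would call `hue_angle`; since both go through the same `f`, the surface and the coloring stay in sync as t changes.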
In my example, using the exponential function made it overly simple. Here's how a more complicated function could potentially look as time passes (sorry it's laggy):
Is this possible at all? I'm willing to dive into C++ if necessary, but Python is definitely ideal.
@x_nerve This is really helpful, but I don't understand how to do Solution 1. Do you have an example of how to do this? All I need to do is see working Python code.
I have seen a few examples of how to set the vertex colors of geometry using Python, but the problem is that they all depend on the geometry already existing, being editable, and having a material pointing to a vertex color tag that is already on the object.
What I don't understand is how I would do this if the geometry is being created within a Python generator. If I add a material to the Python generator, I can't point the Vertex Map Shader to a vertex color tag because I can't add a vertex color tag to the Python generator without making it editable (which I don't want to do). I can use c4d.VertexColorTag to add it to the geometry, but it doesn't show up in the object manager, so I can't point the material to it.
Really, all I want is a way to color geometry that is generated within a Python generator based on the vertices specified. Does anybody have a simplified example of how I can do this?
For example, how would I have a Python generator create a cube and set a random color at each of its 8 vertices, in such a way that a material with a Vertex Map Shader can display them visibly? The key here is that the Python generator is not made editable, and the geometry of the cube itself is created within the Python generator.
So basically this:
entirely from within a Python generator. How can I do that?