Hello there,
I've taken my first leap into the C++ SDK and experienced some early triumphs and defeats. I'm working slowly on a shader starting from the Xbitmapdistortion SDK example. I have the SDK documentation open in about 20 tabs but still feel unsure about how I'm approaching what I'd like to accomplish.
I'm hoping to write a bare-bones Ptex shader. I don't need to deal with most of the issues that surround Ptex, as my only goal is to get a shader that I can bake into UDIMs. I figured I'd start the effort by generating per-polygon UV coordinates. It took me a while, but I came up with a working solution, which exists mostly in the Output method of my young shader:
```cpp
Vector PtexData::Output(BaseShader* chn, ChannelData* cd)
{
    // It took me a while to realize I needed this check here :)
    if (cd->vd == nullptr)
        return Vector(0.0);

    RayPolyWeight weight;
    RayHitID hit = cd->vd->lhit;

    // I'm not currently using the faceid, but I will need it to index into the
    // sub-images of the Ptex file; is this a bad idea? Is there another way?
    Int32 faceid = hit.GetPolygon();

    cd->vd->GetWeights(hit, cd->vd->p, &weight);
    Vector coord = Vector(weight.wa + weight.wb, weight.wb + weight.wc, 0.0);
    return coord;
}
```
So first off, I suppose I'm wondering: have I done anything horribly wrong or inefficient in the above code? That leads me to my next question: what is the best practice for loading an external texture (in this case Ptex)? I was not planning to build a Ptex file handler but instead to 'brute force' this: load the required Ptex images into a PtexCache in my InitRender() and use Ptex's getPixel() method to 'sample' directly using the coordinates I've generated. I have several concerns with this approach (do I need to consider MIP levels, even if my only goal is to bake the shader?). Unfortunately, I don't yet know enough to tell whether those concerns are valid.
Does anyone have a suggestion for an acceptable (not necessarily best) approach for doing this?
Finally, could anyone recommend additional avenues for getting up to speed on modern C++ more generally (this is the first time I've touched C++ since 2001) and the Cinema 4D SDK in particular?
I've managed to stumble my way through to a working (debug) prototype of a Ptex shader. It "works", but I've been developing it with a very specific purpose in mind: converting the Moana dataset Ptex files to UDIMs. As a result, and due in large part to my inexperience with C++ and the SDK, this prototype is missing pretty much all of the features it would require to be a production-ready shader. I'm not doing proper error checking, and it's quite possible I've written a memory leak or two in here.
Most telling: the plug-in crashes when I switch the build to release. Debug works, but I am unfortunately clueless when it comes to debugging.
This Ptex shader doesn't work under several conditions that would need to be addressed if I were working on this for other people to use in production.
With all of those caveats out of the way, I've uploaded the source to Google Drive in the hopes that folks will point out the idiotic things I've done so that I can improve, and also so that someone more capable might pick this up and carry it through to a real shader.
P.S. For anyone working on Windows who doesn't yet know: I highly recommend picking up vcpkg. If I'd had to build Ptex from scratch, I don't think I'd even have gotten this far.
@bentraje I'm not sure that this will address your desired use case, but if I ever need to apply the same deformer to objects in different hierarchies I'll turn to the Surface deformer.
@kbar said in Ptex Shader Progress:
I had planned on doing this again and converting the OBJ files to C4D files in the process, to make them smaller and faster to load. Which is what you are doing with Redshift proxies I believe?
Yes, the Redshift Proxies work as a sort of reference, so the main c4d file would be quite small. In addition, the display of each proxy can be simplified to reduce or eliminate the viewport being bogged down with drawing faces. Of course, when everything is just a bounding box the scene doesn't look like much before rendering!
The other benefit of the proxy is specific to Redshift: the mesh will not need to be processed before the BVH is constructed, so rendering can begin sooner.
The only thing I can think of would be to Clone a light onto a Matrix and apply the bend to the Matrix object. The resulting positions and rotations of the Matrix should follow the bend but you will not get any deformation of the lights. They will still be flat lights but they should be distributed along the bend.
@bentraje it does, but it essentially behaves as an instance of another deformer. So the deformation is defined in one hierarchy location but can also be re-applied in any arbitrary location.
Like I said, that might not be what you're after, but it's helped me in certain situations.
@bentraje Apologies, I was perhaps a bit misleading in my description of the Surface Deformer. It actually behaves more like an 'instance' of the entire deformer stack that's affecting another object.
c4d194_deformer_affect_outside_hierarchy_v002.c4d
@kbar I was able to load OBJs from the Moana dataset in R20 using Python, and the ids did align with the Ptex face indices. My method was fairly naïve: loop from face 0 up to the Ptex file's faceCount and assume they align.
Unfortunately, I was using OpenImageIO bindings that I built myself, and some of the Ptex files were coming in horribly corrupted. I posted about it to the oiio mailing list but got no responses. I found that solid-color tiles might be the cause of OIIO's corruption of the Ptex files and posted an example here: https://drive.google.com/open?id=1z4N3xru0_TPRxVRDoCaQ7ODAUliJRfjM
In the above case the corrupted tiles should be solid white.
Eventually I gave up on solving this through Python + OpenImageIO. I'm trying to pick C++ back up to tackle Ptex, but I haven't touched it in 20 years and it's slow going. I finally managed to get Visual Studio to build the SDK examples.
At any rate, I don't think there's an issue with the OBJ polygon ids in R20.
Here are my results after 35 minutes of processing:
Much better than the 12 hours and 0 results I got from my previous attempt.
It looks like there are 5,183,087 segments. I have them split across 519 spline objects. The scene's viewport navigation is still fairly responsive, about 25fps when all the splines are visible. Higher if I zoom into a section.
I'd still like to improve the method if anyone can provide more information on how I might use SetAllPoints for the splines here.
I'm curious whether it would be possible to create a system whereby an external python script could modify the preferences of a running instance of Cinema 4D.
My first thought was to generate some code in the external script and pass it to Cinema 4D for execution, but that seems (A) potentially unsafe and (B) more complex than it needs to be.
My second thought was to set a system environment variable from the external script and read it from Cinema 4D. In my case it could be something simple like setting EXTERNAL_STATE = 1 under certain circumstances. This would require Cinema 4D to watch for changes to the environment variable, and I'm not sure that's even possible.
The third approach might be for the external script to create a temporary file to represent one state and then remove it to represent another. Alternatively, it could write data into that file for Cinema 4D to read. Either way, this would still require some way for Cinema 4D to watch for changes.
I suppose I'm curious whether there are any known approaches to this sort of thing, or which of my three potential approaches might be most reasonable. In the second and third cases I'm also curious whether anyone knows of an elegant/efficient way to watch for a change to an environment variable or file.
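For the third approach, here's a rough sketch of how the watching side could work inside Cinema 4D, using a MessageData plug-in with a timer. The plug-in ID and file path below are placeholders, and I haven't tested this beyond reading the documentation:

```python
import os
import c4d

PLUGIN_ID = 1000001  # placeholder test ID; real plug-ins need a registered ID
WATCH_FILE = "/tmp/c4d_external_state"  # placeholder path written by the external script

class StateWatcher(c4d.plugins.MessageData):
    """Polls a file's modification time once per second."""

    def __init__(self):
        self._last_mtime = None

    def GetTimer(self):
        return 1000  # ask for a MSG_TIMER core message every 1000 ms

    def CoreMessage(self, id, bc):
        if id == c4d.MSG_TIMER:
            mtime = os.path.getmtime(WATCH_FILE) if os.path.exists(WATCH_FILE) else None
            if mtime != self._last_mtime:
                self._last_mtime = mtime
                print("external state changed")  # react to the change here
        return True

if __name__ == "__main__":
    c4d.plugins.RegisterMessagePlugin(PLUGIN_ID, "StateWatcher", 0, StateWatcher())
```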
Thinking on it a bit more, I suppose the most elegant solution would be to send messages directly to Cinema 4D that it could catch and handle in a plug-in. Is that possible?
Thanks!
I was just brushing up on the Undo system for a simple Python script and it occurred to me that a context manager like this:
```python
with doc.UndoBlock():
    doc.AddUndo(...)
    # code that needs to be undone
```
might be a nice little quality-of-life improvement. I suspect this would be a break from the C++ API, so perhaps it's not in the cards, but I thought it was worth bringing up. I would propose that the current methods remain valid but that a context manager be added as well.
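In the meantime, something close can be built in user code with contextlib. A minimal sketch (undo_block is my own helper name, not an existing API):

```python
import c4d
from contextlib import contextmanager

@contextmanager
def undo_block(doc):
    # Pairs StartUndo/EndUndo even if the body raises an exception.
    doc.StartUndo()
    try:
        yield doc
    finally:
        doc.EndUndo()

# Usage, assuming `op` is an object about to be modified:
# with undo_block(doc):
#     doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
#     ...  # code that needs to be undone
```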
@ferdinand it looks like this hasn't changed in S24. Is there an expected timeline for a fix to these bindings?
Thanks again for the update. GPUtil came up as an option when I was researching external modules but it only works with NVIDIA hardware. That might not be so bad for now especially as the Python check for GPU hardware vendor in Cinema 4D does seem to be working.
I'll mark this as solved for now. I look forward to the API updates!
@zipit thank you so much! c4d.GeGetCinemaInfo() makes sense. Unfortunately, the current Python documentation is a bit misleading: c4d.GeGetSystemInfo() is not listed as deprecated, and the description for c4d.GeGetCinemaInfo() makes it sound like it's exclusively for determining whether the current C4D session is NFR.
Thank you very much for looking into c4d.storage.GeGetMemoryStat(). I'm excited to work on my little OpenGL control panel and stretch my GUI knowledge further.
I wanted to try making a small Python control panel to monitor viewport OpenGL memory usage. I was also going to explore methods to force a reduction in VRAM usage.
However, I've noticed several pieces that seem to be missing or non-functional in the Python API and was hoping I could get some info on them.
First, it seems that several of the flags returned by c4d.GeGetSystemInfo() are missing from the Python API. The only ones that appear to exist are c4d.SYSTEMINFO_NOGUI and c4d.SYSTEMINFO_OSX; I was hoping that c4d.SYSTEMINFO_OPENGL would be in there.
Second, and more importantly, I can't seem to get any information that's useful in my case from c4d.storage.GeGetMemoryStat(). The BaseContainer returned seems to only have values for c4d.C4D_MEMORY_STAT_MEMORY_INUSE and c4d.C4D_MEMORY_STAT_MEMORY_PEAK. All of the other keys return 0, or the keys don't exist in the BaseContainer at all. The latter is the case for c4d.C4D_MEMORY_STAT_OPENGL_USED and c4d.C4D_MEMORY_STAT_OPENGL_ALLOCATED, which are the ones I was hoping for.
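For reference, here's the small probe script I've been using to check what is actually exposed (it only relies on the symbols mentioned above):

```python
import c4d
from c4d import storage

# Which of the exposed SYSTEMINFO flags are set for this session?
info = c4d.GeGetSystemInfo()
print("NOGUI:", bool(info & c4d.SYSTEMINFO_NOGUI))
print("OSX:  ", bool(info & c4d.SYSTEMINFO_OSX))

# Dump every key/value pair that GeGetMemoryStat actually fills in.
stat = c4d.BaseContainer()
storage.GeGetMemoryStat(stat)
for key, value in stat:
    print(key, value)
```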
@zipit said in Generating Splines from JSON Data:
Aside from instantiating the SplineObject in the first place, you would also have the problem that Cinema's splines are not static, i.e. are being dynamically cached. That would mean all this point data had to be reprocessed each time the cache for this spline is being built.
Perhaps this is a reason it would be good to build splines made of fewer segments/points? I will not be modifying the curves after they're built the first time. Are you saying that the cache is built for the SplineObject even outside of the call to SplineObject.Message(c4d.MSG_UPDATE)? That's not the way it seems to behave based on my (limited) observations.
This is more an academic point due to the mentioned problems, but setting each point individually seems very inefficient; you should push all points in at once, using PointObject.SetAllPoints.
I'd like to use this method, but I haven't found any examples of using it on a SplineObject. My concern is that I would have to call this once per SplineObject (as opposed to per segment), which would entail keeping a 10,000-element Python list, where each entry is a 5-element Python list of c4d.Vector, alive until the SplineObject's points are ready to be set. I might be able to use one of the methods from itertools to do this relatively quickly, but I'm just not sure it'll actually be an improvement. I suppose it would be if the cache is in fact being built whenever a point is added.
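To make the question concrete, here's roughly what I imagine a per-chunk version would look like. This is an untested sketch; build_spline and its arguments are my own naming, and I'm assuming SetAllPoints takes one flat list of c4d.Vector:

```python
import c4d

def build_spline(curves, flip_mat):
    # `curves`: a chunk of curves, each a list of [x, y, z] arrays from the JSON.
    # Flatten everything into a single list of c4d.Vector up front.
    points = [flip_mat * c4d.Vector(*(float(p) for p in pt))
              for curve in curves for pt in curve]

    spline = c4d.SplineObject(len(points), c4d.SPLINETYPE_BSPLINE)
    spline.ResizeObject(len(points), len(curves))

    # Declare the segment lengths, then push all points in one call
    # instead of one SetPoint call per point.
    for i, curve in enumerate(curves):
        spline.SetSegment(i, len(curve), False)
    spline.SetAllPoints(points)
    spline.Message(c4d.MSG_UPDATE)
    return spline
```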
The only way to go would be to reduce the dataset in dimensionality, which for a curve would mean doing some curve fitting. Cinema has a curve fitting function in its API, it's even accessible in Python in the c4d.utils module, but I somehow doubt that it is up to the task. You will probably have to use numpy and scipy for that.
Each curve is already very simple: just 5 points that Disney expects to be interpolated (I'm using B-Spline in this case). Perhaps lowering the number of intermediate points would still be beneficial here, especially as these splines are very distant from the majority of the scene (they occupy a small section of the frame).
I've gone ahead and modified my code to chunk the curve data so that each SplineObject ends up with only 10,000 segments. Processing 100,000 curves now takes only 42s, down from the prior 10m13s. That's a roughly linear scaling from the 5s for 10,000 segments I'd previously recorded... so I think I'm happy for now?
```python
point_count = 0
segment_count = 0
spline_count = 1
num = 10000

spline = srcSpline.GetClone()
spline.SetName("{0}_{1:0>4d}".format(name, spline_count))

curves = ijson.items(json_file, 'item', buf_size=32*1024)
for i, curve in enumerate(curves):
    index = i % num
    if index >= (num - 1):
        # Flush the current chunk into the document and start a new spline.
        spline.InsertTag(texture_tag.GetClone())
        spline.InsertTag(object_tag.GetClone())
        spline.SetLayerObject(layer)
        spline.Message(c4d.MSG_UPDATE)
        doc.InsertObject(spline, group)
        point_count = 0
        segment_count = 0
        spline_count += 1
        spline = srcSpline.GetClone()
        spline.SetName("{0}_{1:0>4d}".format(name, spline_count))
        if spline_count > 10:
            break
    point_count += len(curve)
    segment_count += 1
    spline.ResizeObject(point_count, segment_count)
    for id, point in enumerate(reversed(curve)):
        spline.SetSegment(segment_count - 1, len(curve), False)
        spline.SetPoint(point_count - id - 1, flipMat * c4d.Vector(*(float(p) for p in point)))
```
Hello!
I'm revisiting the Moana Island data set and I'm making great progress; I've got almost all of the assets converted into Redshift Proxies.
The biggest problem I'm currently facing is a 3GB JSON file that defines renderable curves on the largest mountain asset. I don't know exactly how many curves are defined in this file, but based on the curve count and data size of the other JSON files I think it's roughly 5.2 million curves. Each point of a curve is an array with 3 items, each curve is an array of N points, and the curves are stored inside a top-level array.
The built-in json module must load the entire file into memory before operating on it, and I've experienced extremely poor behavior on any JSON file over 500MB with it. I am instead parsing the files with ijson, which allows for iterative reading of JSON as well as a much faster C backend based on YAJL.
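For anyone unfamiliar with ijson, the core pattern looks like this (the file name is just an example):

```python
import ijson  # third-party: pip install ijson

with open("curves.json", "rb") as json_file:  # example file name
    # Yields each element of the top-level array one at a time,
    # without ever holding the full 3GB document in memory.
    for curve in ijson.items(json_file, "item"):
        print(len(curve))  # number of points in this curve
```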
Using ijson I was able to read an 11GB file that stored transform matrices for instanced assets on the beach. However, even with ijson I cannot seem to build a spline from the curves in this 3GB file (I gave up after letting the script run for 12 hours). I suspect it has more to do with the way I'm building the spline object than with parsing the data. So I have some questions: Is there a performance penalty for building a single spline with millions of segments? Should I instead build millions of splines with a single segment each? Or would it be better to split the difference and build thousands of splines with 10,000 segments each?
I've done a little performance testing with my current code and right now it takes 10 minutes 13 seconds to build a single spline out of the first 100,000 curves in the file. However, if I build just the first 10,000 curves it only takes 5 seconds.
I'm leaning heavily toward chunking the splines into 10,000-segment batches, but I want to first see if my code could be further optimized. Here is the relevant portion:
```python
curves = ijson.items(json_file, 'item', buf_size=32*1024)
# curves is a generator object that returns the points for each segment successively
for i, curve in enumerate(curves):
    # for performance testing I'm limiting the number of segments parsed and created
    if i > num:
        break
    point_count += len(curve)  # tracking the total number of points
    segment_count += 1         # tracking the number of segments
    spline.ResizeObject(point_count, segment_count)  # resizing the spline
    for id, point in enumerate(reversed(curve)):
        spline.SetSegment(segment_count - 1, len(curve), False)
        spline.SetPoint(point_count - id - 1, flipMat * c4d.Vector(*(float(p) for p in point)))
spline.Message(c4d.MSG_UPDATE)
```
I never would've found the command id so thanks a ton for sharing that. Marking as solved.