On 01/04/2015 at 15:46, xxxxxxxx wrote:
Cinema 4D Version: 14
Platform: Mac ;
Language(s) : C++ ;
Hi, I'm having an issue with what seems to me to be a really complex situation; I'm hoping I'm just overthinking it, though. After looking through the SDK I'm having no luck finding anything, although I might just be blind.
I want to find a way to get the rendered output color for a single ray, i.e. render a single point and get its output color instead of rendering the entire scene and getting an image. It seems that with rays and VolumeData you can sample a point on a single object, but I want to be able to do this anywhere in a scene. I'm not really sure where to get started, and any hints in a good direction would be appreciated!
On 02/04/2015 at 07:03, xxxxxxxx wrote:
Can you tell us a little bit more about what you want to do? The VolumeData class is only available inside the rendering pipeline, so you can only use it with a VideoPostData, MaterialData or ShaderData plugin.
But of course you could simply use RenderDocument() to render the (current) document. You can customize the render settings and enable the "Render Region" to only render the parts of the image that you need.
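In case it helps, here is a rough sketch of that approach. It is untested and written from memory against the R14-era API, so treat every ID as an assumption to verify: the `RDATA_RENDERREGION_*` container IDs live in `drendersettings.h`, and I believe the right/bottom values are measured inward from the right/bottom image edges.

```
// Sketch: render only a small region of the active document, then read
// back a single pixel from the resulting bitmap.
BaseDocument* doc = GetActiveDocument();
RenderData* rd = doc->GetActiveRenderData();
BaseContainer rdata = rd->GetData();

rdata.SetBool(RDATA_RENDERREGION, TRUE);       // enable "Render Region"
rdata.SetLong(RDATA_RENDERREGION_LEFT, 100);   // region bounds in pixels
rdata.SetLong(RDATA_RENDERREGION_TOP, 100);
rdata.SetLong(RDATA_RENDERREGION_RIGHT, 100);  // from the right edge, if I recall correctly
rdata.SetLong(RDATA_RENDERREGION_BOTTOM, 100); // from the bottom edge

AutoAlloc<BaseBitmap> bmp;
if (!bmp || bmp->Init(rdata.GetLong(RDATA_XRES), rdata.GetLong(RDATA_YRES)) != IMAGERESULT_OK)
    return FALSE;

if (RenderDocument(doc, rdata, NULL, NULL, bmp, RENDERFLAGS_EXTERNAL, NULL) == RENDERRESULT_OK)
{
    UWORD r, g, b;
    bmp->GetPixel(110, 110, &r, &g, &b);       // sample a pixel inside the region
}
```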
On 02/04/2015 at 14:22, xxxxxxxx wrote:
Basically, what I want to do is get the color of any given point in a scene. I've messed around with RenderDocument(), but I don't think it's going to work because of the resolution of the resulting picture: the pixels wouldn't be precise enough if I were sampling a series of points that are close to each other.
I hope that makes more sense,
On 02/04/2015 at 15:08, xxxxxxxx wrote:
Because the color of any given point in the scene depends on the rendering (lights, textures, shadows, etc.), you will literally need to render that pixel fully. But then you have to determine what the pixel covers. While the renderer can do subpixel rendering, I think it can only return a 1x1 pixel image at minimum.
Unless you just want the texture color. That then involves raycasting and UVs.
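For the texture-color route, the core of the raycasting step is a ray-triangle intersection that also hands you barycentric coordinates, which you then use to interpolate the triangle's vertex UVs (conceptually similar to what GeRayCollider gives you). A self-contained sketch, independent of the C4D API; `Vec3`, `IntersectTriangle`, and `InterpolateUV` are my own illustrative helpers, not SDK names:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct UV   { double u, v; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore: intersect ray (orig + t*dir) with triangle (p0, p1, p2).
// On a hit, returns true and writes barycentric coordinates (u, v) so that
// hitPoint = (1-u-v)*p0 + u*p1 + v*p2.
bool IntersectTriangle(Vec3 orig, Vec3 dir, Vec3 p0, Vec3 p1, Vec3 p2,
                       double& t, double& u, double& v)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    Vec3 pvec = cross(dir, e2);
    double det = dot(e1, pvec);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
    double invDet = 1.0 / det;
    Vec3 tvec = sub(orig, p0);
    u = dot(tvec, pvec) * invDet;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 qvec = cross(tvec, e1);
    v = dot(dir, qvec) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, qvec) * invDet;
    return t > EPS;
}

// Blend the three per-vertex UVs at the barycentric hit coordinates.
UV InterpolateUV(UV a, UV b, UV c, double u, double v)
{
    double w = 1.0 - u - v;
    return { w*a.u + u*b.u + v*c.u, w*a.v + u*b.v + v*c.v };
}
```

With the interpolated UV in hand, you would look up the texture at that coordinate; per-polygon vertex indices and the UVW tag lookup are left out here.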
On 03/04/2015 at 08:45, xxxxxxxx wrote:
The first method sounds like what I want. Do you have any advice on how to approach it?
The second method might work as well; I had considered it. But to do it, wouldn't I have to find out which object is at the location I want, then use a GeRayCollider to find the point on the object, and then use VolumeData and the UVs to get the color? I didn't think that process would be clean enough to work, so I stopped pursuing it.
On 03/04/2015 at 18:06, xxxxxxxx wrote:
Considering that a single render pixel would almost certainly cover many points on an object, a trick might be to clone the render camera and zoom linearly toward the point of interest (keeping the view plane oriented toward that point) so that the point of interest is isolated as much as possible, with some error/extraneous area in the results. You reduce the error area while increasing the precision of the result. The problem is determining the best zoom amount so that you get the quickest render while not being overly 'myopic' (too much zoom might require sampling an area larger than one pixel to get the result). You are really going to have to decide what a 'point' on the object is (surface area versus render area); that is the crux of your problem. The idealized 'point' is an infinitely small mathematical nothing, and you have to raise it to a practical 'point' that can be defined within the bounds of your expected results.
On 14/04/2015 at 12:16, xxxxxxxx wrote:
Thanks for the help! I got a rudimentary version kind of working, but I don't think this method will fulfill my needs, and the render overhead is too high, so I'm going to move on to other projects for the time being.