Thank you for reaching out to us. You clearly tried to be precise in your question, and we appreciate that, but there are still some ambiguities we must resolve first.
> Compute distance from a point to a mesh

> [...] which is hit by the blue line from the camera [...]
You first talk about points and meshes, then about points and lines, and your screenshot shows a bunch of null objects and joints. Intersection testing (sets of) line segments and (sets of) triangles requires different approaches.
In general, one should also add that intersection testing two lines in 3D (or similar cases such as a line segment and a ray) is very unlikely to yield an exact intersection, due to the vastness of 3D space and floating-point precision. What one usually does instead is define a tolerance: one searches for the closest point pair on the two line segments, tests whether the distance of that pair is below the tolerance, and then considers the two lines to intersect (although they technically do not).
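The tolerance-based segment/segment test described above can be sketched as follows. This is a minimal NumPy sketch of the standard clamped-parameter approach; the function names are mine and this is not part of our API:

```python
import numpy as np

def closest_points_on_segments(p1, q1, p2, q2, eps=1e-12):
    """Closest point pair between segments [p1, q1] and [p2, q2].

    Returns (c1, c2, distance), where c1/c2 are the closest points on
    the first/second segment. Handles degenerate (point-like) segments.
    """
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a = np.dot(d1, d1)            # squared length of segment 1
    e = np.dot(d2, d2)            # squared length of segment 2
    f = np.dot(d2, r)

    if a <= eps and e <= eps:     # both segments degenerate to points
        s = t = 0.0
    elif a <= eps:                # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = np.dot(d1, r)
        if e <= eps:              # second segment is a point
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = np.dot(d1, d2)
            denom = a * e - b * b  # ~0 when the segments are parallel
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            # Clamp t to the second segment and recompute s accordingly.
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)

    c1, c2 = p1 + s * d1, p2 + t * d2
    return c1, c2, float(np.linalg.norm(c1 - c2))

def segments_intersect(p1, q1, p2, q2, tol=1e-6):
    """Treat the segments as intersecting when the closest pair is within tol."""
    return closest_points_on_segments(p1, q1, p2, q2)[2] <= tol
```

For two segments that visually cross, the returned distance is (numerically close to) zero, and the tolerance test reports an intersection; for skew segments it reports the actual gap instead.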
Connected to that is your fuzziness regarding what you want to compute: an intersection or a projection? Your thread title implies that you are interested in projections (i.e., computing the closest point p on some geometry G for a query point q). However, later you switch to terminology that implies ray casting/intersection testing, as you give a ray direction and use words such as "along". But ray casting will not "compute [the] distance from a point to a mesh"; it yields the intersection point(s) (when there is a hit) of the geometry and the ray. The line segment formed by the query point and an intersection point has a length, but it would be pure coincidence if that length were equal to the shortest distance (the projection) between the query point and the geometry.
When you are interested in the shortest distance between a query point q and some geometry G, ray casting is not the correct method; you must project q onto G. Your example mentions some radius to look in, which I interpret as a sort of search window for the ray casting. But that is not only error-prone and computationally expensive, as you might have to ray-cast millions of times to fill that window, it is also unnecessary: you can just project the point.
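Projecting a query point onto a triangle mesh amounts to projecting it onto each triangle and taking the minimum distance. A minimal NumPy sketch of the standard Voronoi-region approach (function names are illustrative, not part of our API):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Project query point p onto triangle (a, b, c); returns the closest point."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                     # vertex region a
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                     # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab             # edge region ab
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                     # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac             # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # face interior

def distance_point_to_mesh(p, triangles):
    """Shortest distance from p to a triangle soup: min over per-triangle projections."""
    return min(float(np.linalg.norm(p - closest_point_on_triangle(p, *tri)))
               for tri in triangles)
```

A brute-force minimum over all triangles is fine for small meshes; for large ones you would put the triangles into an acceleration structure (e.g., a BVH) first, but the per-triangle projection stays the same.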
This is all slightly out of the scope of support, as these are general techniques rather than things specific to our APIs.
- GeRayCollider is a ray-triangle intersection testing helper class with some light optimization built into it. You cannot carry out ray/line-segment intersections with it, and if you want to intersection test multiple objects, you have to test them one by one.
- VolumeData::TraceGeometry is bound to the rendering process.
- There is also the class ViewportSelect, with which you can "pick" objects in a viewport.

Added to that, there is no real geometry in your screenshot: there are joints, null objects, and a camera object, none of which provides any explicit discrete geometry such as polygons or line segments. You can of course "write stuff around that" and fill in the data these objects use for their visualizations, but you cannot just intersection test a joint object or a null object that is displayed as a circle.
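To illustrate what a ray-triangle intersection test such as the one GeRayCollider carries out boils down to, here is a minimal Möller–Trumbore sketch in plain NumPy. This is an illustrative stand-in, not the actual GeRayCollider implementation; the function name is mine:

```python
import numpy as np

def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test.

    Returns the hit distance t along the ray (orig + t * direction), or
    None when the ray misses the triangle or points away from it.
    """
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t >= 0.0 else None     # hit must lie in front of the origin
```

Note that the result is the parametric hit distance along the ray, which again is not the same thing as the shortest distance from the ray origin to the mesh.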