R12 - plugin conversion



  • THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED

    On 15/11/2010 at 09:53, xxxxxxxx wrote:

    User Information:
    Cinema 4D Version:   12
    Platform:   Windows; Mac OSX;
    Language(s):   C++;

    ---------
    I have a problem porting my plugin from R11.5 to R12.
    In the Read procedure I have to load an array of vectors.
    The problem appears when loading an R11.5 scene in R12. Now I have to use ReadSVector (because ReadVector reads an LVector), but this doesn't seem to work! The stream is aligned until I call this function, but when I load the first vector the value isn't correct, and C4D then shows an error message while loading the scene!
    It seems that the R12 ReadSVector isn't compatible with vectors written by the R11.5 WriteVector?

    Thanks in advance

    Lorenzo




    On 18/11/2010 at 06:45, xxxxxxxx wrote:

    I did some more tests and finally found a way to solve the problem, but I need to understand why it works this way.
    The code is this:

    In the ::Write procedure on R11.5 and earlier:

    hf->WriteLong(v); // v is LONG - hf is a HyperFile
    for (j = 0; j < v; j++)
    {
         hf->WriteVector(defpos[j]); // defpos = array of Vector (~SVector in R11)
    }
    v = defarea.GetCount();
    hf->WriteLong(v);

    In the ::Read procedure on R12 - WRONG way:

    hf->ReadLong(&n); // n is LONG
    for (j = 0; j < n; j++)
    {
         hf->ReadSVector(&ve); // ve is SVector
         defpos.Push(ve.ToLV()); // defpos = array of Vector (~LVector in R12)
    }
    hf->ReadLong(&n);

    In the ::Read procedure on R12 - RIGHT way:

    hf->ReadLong(&n); // n is LONG
    for (i = 0; i < n; i++)
    {
         hf->ReadVector(&ve); // ve is Vector
         defpos.Push(ve); // defpos = array of Vector (~LVector in R12)
    }
    hf->ReadLong(&n);

    I don't understand: if I write in R11.5 with the WriteVector method, which saves a struct like SVector, why must I then read in R12 with ReadVector, which reads an LVector struct instead?




    On 18/11/2010 at 07:23, xxxxxxxx wrote:

    Hi Lorenzo,

    yes, you are absolutely right. That doesn't make much sense indeed. I would also expect ReadSVector to be the equivalent operation for reading pre-R12 vectors written with WriteVector. Actually, when I converted my project to R12 I simply assumed that was the case (nothing else would really make sense).

    Could someone comment on this officially? And would this behavior also apply to ReadReal and ReadSReal accordingly?

    I hope not; otherwise this would mean rewriting, contacting all my customers, and also doing unnecessary conversions to SVector after reading into Vectors. Oh, come on...




    On 18/11/2010 at 07:28, xxxxxxxx wrote:

    Please post the code line of your array construction.

    cheers,
    Matthias




    On 18/11/2010 at 07:42, xxxxxxxx wrote:

    My array declaration is this:

    GeDynamicArray<Vector> defpos;

    in both versions, but in R12 it is clearly double precision.

    Thanks




    On 18/11/2010 at 07:52, xxxxxxxx wrote:

    Originally posted by xxxxxxxx

    My array declaration is this:

    GeDynamicArray<Vector> defpos;

    in both versions, but in R12 it is clearly double precision.

    Thanks

    Ah, I thought as much :) Vector is defined as LVector in R12. If you don't want to convert to double precision, you have to construct your array with SVector and use ReadSVector().

    cheers,
    Matthias




    On 18/11/2010 at 08:12, xxxxxxxx wrote:

    thank god. :)




    On 18/11/2010 at 08:16, xxxxxxxx wrote:

    Sorry, but I don't understand!
    The problem is that in R12, to read a Vector struct from an R11.5 scene (which is like SVector in R12), I used an SVector that I then convert to an LVector using ve.ToLV().

    See the example posted above (the WRONG WAY code).

    But this doesn't work!
    I hope I am being clear.




    On 18/11/2010 at 08:20, xxxxxxxx wrote:

    You need to use GeDynamicArray<SVector> to get the same as in R11.5. Oh wait, now I see what you mean. Right, it actually should still be correct to use ReadSVector when the data was written with WriteVector in R11.5 (which you say doesn't work, right?)




    On 18/11/2010 at 08:25, xxxxxxxx wrote:

    Ok, I will do some tests.

    cheers,
    Matthias




    On 18/11/2010 at 08:29, xxxxxxxx wrote:

    Originally posted by xxxxxxxx

    You need to use GeDynamicArray<SVector> to get the same as in R11.5.
    Oh wait, now I see what you mean. Right, it actually should still be correct to use ReadSVector when the data was written with WriteVector in R11.5 (which you say doesn't work, right?)

    Yes, the problem is that on reading I don't use the array Vector directly; I use a temporary variable formatted like an SVector to conform to the R11 struct!




    On 18/11/2010 at 08:33, xxxxxxxx wrote:

    Exactly as Matthias explains.  If you want R11<->R12 compatibility with binary files, then you will need to remember type sizes.  I had fun dealing with Real, since it is equivalent to SReal in R12.  But Real in R12 is LReal (as can be seen if you hover over a variable in VC++).  Same situation as Vector.

    This is why I am an advocate for types by size (but we can keep our general types too!), unlike 'long' changing its size depending upon the bit-width.  I want these standard types instead:

    BYTE/UBYTE: 8-bits
    WORD/UWORD: 16-bits
    LONG/ULONG: 32-bits
    LLONG/ULLONG: 64-bits
    DLONG/UDLONG: 128-bits
    etc.

    The problem is that no one codes to a standard like this.  They have WriteLong(LONG lv), and LONG can be 32, 64, 128, or however many bits depending upon various configurations.  Bad in my book.  Great when you never read and write files (maybe).  A painful minefield otherwise - whenever you need backward compatibility or to support both 32-bit and 64-bit systems, for instance.

    In other words, instead of changing the definition of a type name by system (like how int can be any one of many sizes depending upon OS and bit-width - now that is a minefield, and I never use int any longer), make type names that encode bit-size once and for all.  We are never going to have 3-bit or 54-bit types.  Bit-widths are always a power of 2, starting at 8 (8, 16, 32, 64, 128, 256, etc.).  My mantra: make it explicit.  int isn't explicit - it is overly fluid.

    I will disembark from my soap-box now. :D




    On 02/12/2010 at 22:21, xxxxxxxx wrote:

    Amen, Brother Robert :).

    I have actually run across SDK-type example code for reading/writing a proprietary 3D file format (examples provided by the designer/author of the format) that had 'int' slathered all through it (in the structures being written/read). Needless to say, I sent him an e-mail on the subject :). Of course, this format also used 16-bit word values to store the "number of polygons" and "number of vertices", so it wasn't exactly a modern, forward-thinking format.

