Solved: Plugin Update, use "disklevel" or a new PluginID?

Hi, can you please explain to me in a nutshell
how this disklevel works, for instance in RegisterObjectPlugin?

So let's say I have a plugin and the user saves a scene with it.

Then I release an update in which I have added a few things; maybe the description has also changed and maybe some IDs have changed.

The user now installs the new plugin version.
When the user loads the saved scene with the new plugin version, the behavior of the plugin is different, because some IDs have changed, for instance.

Can I use the disklevel argument in the RegisterObjectPlugin method and increase the number?
Or is it better to use a new PluginID for a bigger update of the plugin?

I tried this disklevel: in the old version it was not set in the RegisterObjectPlugin method, so it is 0.
In the update I set it to 1.
I also placed the updated plugin folder into the plugins folder; it has a different folder name but the same PluginID.
So I thought that when the scene was saved with disklevel = 0, Cinema 4D would load the old plugin.
But it loads the new plugin.
Or am I completely misunderstanding something here?

Cheers,
Tom

Hello @ThomasB,

Thank you for reaching out to us. The concept of disk levels is tied to (manually) serializing scene data with c4d.storage.HyperFile; a disk level is metadata for differentiating between data written by different plugin versions when reading.

:information_source: This also means that when, for example, your ObjectData plugin stores all its data in its data container, i.e., as parameters reflected in the c4d.BaseContainer returned by BaseList2D.GetData, the concept of a disk level is meaningless to you.
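
As a minimal sketch of that container-only case (the parameter ID and the plugin class are hypothetical): everything lives in the node's data container, Cinema 4D (de-)serializes it automatically, and neither Read/Write overrides nor a disk level are required.

import c4d

# Hypothetical parameter ID, in practice defined in the .res/.h files of the plugin.
ID_FOO_STRENGTH: int = 1000

class FooData(c4d.plugins.ObjectData):
    """Keeps all of its state as parameters in the node's data container."""

    def Init(self, node: c4d.GeListNode, isCloneInit: bool = False) -> bool:
        # Parameters written to the data container are serialized by
        # Cinema 4D itself; no disk level is involved.
        node[ID_FOO_STRENGTH] = 0.5
        return True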

Disk levels only become relevant when a c4d.plugins.NodeData derived plugin hook overrides NodeData.Read, .Write, and .CopyTo (you must always implement all three of them). In NodeData.Read you are then passed the disk level of the node to be loaded. Disk levels are not a mechanism to run multiple plugin versions in parallel and 'load an older plugin', as your posting seems to imply, but a mechanism that enables newer versions of a plugin to deal with data serialized by an older version in a different format.
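
In the Python API this takes roughly the following shape; FooData, the plugin ID, and the resource name are placeholders. The level argument of NodeData.Read receives the disk level a node was written with, while the current disk level of a plugin is declared at registration time:

import c4d

class FooData(c4d.plugins.ObjectData):
    """Sketch of the three methods which must always be implemented together."""

    def Read(self, node: c4d.GeListNode, hf: c4d.storage.HyperFile, level: int) -> bool:
        # 'level' is the disk level the node was written with.
        return True

    def Write(self, node: c4d.GeListNode, hf: c4d.storage.HyperFile) -> bool:
        # Serializes the custom (non-container) state to the HyperFile.
        return True

    def CopyTo(self, dest: "FooData", snode: c4d.GeListNode, dnode: c4d.GeListNode,
               flags: int, trn: c4d.AliasTrans) -> bool:
        # Copies the custom state over to 'dest' when the node is cloned.
        return True

# 1000001 is a placeholder; real plugin IDs must be obtained from developers.maxon.net.
# The 'disklevel' passed here is what future plugin versions will receive in
# NodeData.Read for nodes written by this version.
c4d.plugins.RegisterObjectPlugin(id=1000001, str="Foo", g=FooData, description="ofoo",
                                 info=c4d.OBJECT_GENERATOR, icon=None, disklevel=1)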

So, let us assume we have, for example, two versions V1 and V2 of a plugin of type Foo, both saving custom data with a Read and a Write method. In a simplified analogy, for the same scene and node, we can imagine:

  • Foo.Write[V1] - Writes {"data": (0, 1, 2)}.
  • Foo.Write[V2] - Writes {"data_collection": [(True, 0), (True, 1), (True, 2)]}.

I.e., V2 saves more data, and data in a different form, than V1. In practice this would be HyperFile data, but imagining things as dict data makes for a much more tangible example. Disk levels then allow a V2 plugin version to also deal with data written by a V1 version. But the disk level is just metadata; it does not do anything on its own, and the V2 Foo.Read would have to actively support both V1 and V2 data, as pseudo code:

def Read(self, data: dict, diskLevel: int) -> None:
    """V1 Foo.Read.

    Reads only V1 data. The data form is:

        {"data": (0, 1, 2)}
    """
    if diskLevel != 1:
        raise IOError("Node data is of unsupported version.")
    
    points: list[int] | None = data.get("data", None)
    if points is None:
        raise IOError("Node contains malformed data.")
    
    # Let the read data contribute to the plugin state.
    self._points: list[int] = points

def Read(self, data: dict, diskLevel: int) -> None:
    """V2 Foo.Read.

    Has been fashioned to read both V1 and V2 data. The two hypothetical forms are:

        V1: {"data": (0, 1, 2)}
        V2: {"data_collection": [(True, 0), (True, 1), (True, 2)]}

    In practice this would be data in the HyperFile format; this example just
    illustrates the principle.
    """
    if diskLevel not in (1, 2):
        raise IOError("Node data is of unsupported version.")
    
    points: list[int] = []
    states: list[bool] = []

    # Deal with V1 data.
    if diskLevel == 1:
        points = data.get("data", None)
        if points is None:
            raise IOError("Node contains malformed data.")
        states = [True for _ in points]
    # Deal with V2 data.
    elif diskLevel == 2:
        collection: list[tuple[bool, int]] | None = data.get("data_collection", None)
        if collection is None:
            raise IOError("Node contains malformed data.")
        
        states = [n[0] for n in collection]
        points = [n[1] for n in collection]

    # Let the read data contribute to the plugin state.
    self._states: list[bool] = states
    self._points: list[int] = points
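
Translated into the actual Python API, where data is written to and read from a c4d.storage.HyperFile, the V2 plugin could look roughly like the sketch below. The concrete stream layout, an element count followed by the value fields, is made up for this example:

import c4d

class FooData(c4d.plugins.ObjectData):
    """V2 of the plugin, reading both V1 and V2 streams from a HyperFile."""

    def __init__(self) -> None:
        self._points: list[int] = []
        self._states: list[bool] = []

    def Write(self, node: c4d.GeListNode, hf: c4d.storage.HyperFile) -> bool:
        # V2 layout: the element count, then one (bool, int) pair per element.
        if not hf.WriteInt32(len(self._points)):
            return False
        for state, point in zip(self._states, self._points):
            if not (hf.WriteBool(state) and hf.WriteInt32(point)):
                return False
        return True

    def Read(self, node: c4d.GeListNode, hf: c4d.storage.HyperFile, level: int) -> bool:
        count: int | None = hf.ReadInt32()
        if count is None:
            return False

        points: list[int] = []
        states: list[bool] = []

        # V1 layout: just 'count' integers, the states default to True.
        if level == 1:
            for _ in range(count):
                point: int | None = hf.ReadInt32()
                if point is None:
                    return False
                points.append(point)
            states = [True] * count
        # V2 layout: 'count' (bool, int) pairs.
        elif level == 2:
            for _ in range(count):
                state: bool | None = hf.ReadBool()
                value: int | None = hf.ReadInt32()
                if state is None or value is None:
                    return False
                states.append(state)
                points.append(value)
        else:
            return False

        self._states = states
        self._points = points
        return True

    def CopyTo(self, dest: "FooData", snode: c4d.GeListNode, dnode: c4d.GeListNode,
               flags: int, trn: c4d.AliasTrans) -> bool:
        dest._states = list(self._states)
        dest._points = list(self._points)
        return True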

In the front end, this then bubbles up in the form of c4d.C4DAtom.Read and .Write, but these just call the respective NodeData implementations in addition to (de-)serializing the data container itself.
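
As a minimal sketch of that front end, assuming a file path and identification value of our choosing, one could manually (de-)serialize a single node like this:

import c4d

def save_node(op: c4d.BaseObject, path: str) -> bool:
    """Writes a single node to a HyperFile on disk."""
    hf: c4d.storage.HyperFile = c4d.storage.HyperFile()
    if not hf.Open(0, path, c4d.FILEOPEN_WRITE, c4d.FILEDIALOG_NONE):
        return False
    result: bool = op.Write(hf)  # Will invoke NodeData.Write internally.
    hf.Close()
    return result

def load_node(op: c4d.BaseObject, path: str, ident: int, level: int) -> bool:
    """Reads the data for an existing node back from a HyperFile on disk."""
    hf: c4d.storage.HyperFile = c4d.storage.HyperFile()
    if not hf.Open(0, path, c4d.FILEOPEN_READ, c4d.FILEDIALOG_NONE):
        return False
    result: bool = op.Read(hf, ident, level)  # Will invoke NodeData.Read internally.
    hf.Close()
    return result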

Cheers,
Ferdinand

MAXON SDK Specialist
developers.maxon.net


@ferdinand
thank you very much

Cheers
Tom