UNSOLVED maxon::Url 206 partial content

I am trying to download a file and maxon::Url::OpenInputStream() returns an error with the message "partial get response (206) missing".

Does maxon::Url support partial content?

The same URL, when pasted into a web browser, downloads the file just fine. So it seems to me that maxon::Url is not yet feature-complete.

maxon::Url webFile("https://myurltoafile"_s);
iferr (maxon::InputStreamRef inputStream = webFile.OpenInputStream())
{
  LogMessage(err.GetMessage(), MAXON_SOURCE_LOCATION);
  return;
}

Hello @kbar,

Thank you for reaching out to us. Without the concrete URL, it is quite hard to evaluate your problem. But I had a peek at the code where your error is being raised, and this looks like a status-code standardization problem of the kind that has come up before for the Url class.

The error "partial get response (206) missing" is raised when a response header contains the Content-Range field but the status code is not 206 (I agree, the error message is a bit misleading, as it suggests the opposite). So, without a concrete URL to test against, I would assume that your URL returns a status code other than 206 but still uses the Content-Range field, which our Url implementation then detects and responds to with its somewhat cryptic error message.

We had a similar topic regarding error messages a while ago, with the central question being: which status codes are allowed when? In that case, some websites sent content even though they returned status code 404. There is unfortunately no binding standard for how status codes are used in practice. Browsers, especially the large ones, have become exceptionally good at making the best out of such ambiguous responses. Our Url type is much more bare-bones in this regard. I could try to get this fixed; however, I doubt I will find a developer motivated to do so, because of the unbounded nature of the problem. To avoid ambiguity: I will bring this up internally, but I am not optimistic about us fixing this.

You yourself could try to:

  1. Analyze how the browser requests the data. You are not setting any request headers in your example; setting the right ones could make the server behave differently (the Plugin Café thread linked above also contains examples for that).
  2. Read out the response headers of your request, so that you can see what is actually going on.
  3. Try using maxon::FileUtilities::ReadFileToMemory(webFile, data) instead of UrlInterface::OpenInputStream().
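For reference regarding the first two points, a well-behaved partial-content exchange looks roughly like this (a schematic example; hostname, path, and sizes are made up):

```
GET /file.bin HTTP/1.1
Host: example.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/146515
Content-Length: 1024
```

If your server instead answers such a request with, say, 200 plus a Content-Range header, you would be in exactly the ambiguous situation described above.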

Cheers,
Ferdinand

Hi @ferdinand,

Thanks for looking into it.

I can't share a URL publicly, since it is a file on Cloudflare. I could send one directly via email with a longer expiry time for people to test. But if there is no desire to fix the Maxon API, then it may not be worth it? And since I also need it to work from R20 onwards, a fix wouldn't help me in this case anyway.

I will try your suggestion of maxon::FileUtilities::ReadFileToMemory and see if that helps, but it feels like I may have to remove all my code that uses the Maxon API and go back to libCurl instead. The maxon API works fine for AWS S3, but seems unreliable for other providers.

Cheers,
Kent

Just had a look at maxon::FileUtilities::ReadFileToMemory, and unfortunately I can't use it, since I am downloading large files and need to download them in chunks to disk to show progress information. It looks like ReadFileToMemory downloads the whole file in one go, so it is only useful for small files. So I will have to go back to libCurl.

Hey @kbar,

well, it could be that we are overlooking something, and I can only rule that out by debugging against your URL to see what Cinema 4D is doing (and what data the server is sending). I could have a go with a 206-status URL of our own, but my suspicion is that the problem would not appear in that context. So what we need is a URL that does not return 206 but does carry that partial-content field. But there could be more involved, redirects and so on, so in the end a URL that is proven to fail is the only way to go.

R20 is indeed a tough requirement. As a minor suggestion: I am not sure how liberal you are about integrating and maintaining external dependencies, especially when you want to support such a wide range of Cinema 4D versions, but you could also use the Python framework to do the downloading. Python's urllib has been mature for a long time, and you should be able to handle any URL on any system with it.

Cheers,
Ferdinand

Hi @ferdinand,

I will see if I can get a URL to you next week to debug on your end.

I went with C++ because that suited me better for the project. I compile libCurl already for every version of C4D from R20 to S26 so it is not a problem to keep using it. It works well. But I try to use the maxon API when I can if it works for my needs.

I know Python has all of this nailed down. But I do all my development in C++ for a variety of reasons, so I am sticking with that for now. I deal with gigabytes of data transfer daily, and there is a lot of complexity there that I know how to handle in C++; in Python I have only scratched the surface as far as C4D integration goes. So threading and fine-grained control over memory were always a worry if I were to jump ship.

Hey @kbar,

sure, no pressure; that all sounds reasonable, I was just offering options. And to be clear: I am not optimistic that this is user-fixable. But you never know for sure until you have tried.

Cheers,
Ferdinand