BaseSound and sound in general [SOLVED]

On 26/11/2015 at 04:30, xxxxxxxx wrote:

User Information:
Cinema 4D Version:   17 
Platform:   Windows  ;   
Language(s) :     C++  ;


I can't find any good example on BaseSound, or how to employ that class, or how MoGraph Sound and Loudspeaker are using it, or how to play a sound.

1. The help insists that only WAV files can be loaded. However, I have already loaded an MP3 successfully into a MoGraph Sound object, and it plays fine. (The help also insists that the Frequency Graph doesn't update dynamically, but it does.)

2. BaseSound is directly derived from C4DAtom so it's not a base class for the OM objects. I assume it's stored in a Container? How do I extract the BaseSound? Is that even the correct class to pull from a Sound?

3. GetSampleEx() promises to get me a sample. One of its parameters is "i" - not a BaseTime, but an i. I suppose the samples are stored in a huge one-dimensional array and this is an index into it, so if I want a sample at a certain time, I need to convert a frame (or a BaseTime) into the range of i's that were sampled during that frame... how?

4. The Init() function raises two more questions. There's a parameter sample_cnt which determines the number of samples... per what? Second, frame, overall runtime of the sound? So if the sound is three seconds long and I have 25 fps, and I want ten samples per frame, that's 3*25*10 = 750 samples?

And then there is the frequency parameter. So a BaseSound represents only one single frequency out of the whole spectrum? At what sample density? If I want a sample for 10-12 kHz, what do I use for this parameter? How do I sample different widths of the spectrum, like one sample for 10-12 kHz and another for 8-15 kHz?

And how does the MoGraph Sound handle that? Sound has a frequency graph and influences the clones in its cloner by frequency. So it has, like, one BaseSound for each frequency? Or maybe it's not working with BaseSound at all...

Sound is a three-dimensional thing with discretization along every axis. Time is one axis (samples per second being its discretization), frequency the second, and the third (result) axis is the intensity of that frequency at that time. BaseSound doesn't seem to represent that at all.

I'm afraid I don't get the sound functionality. Does anybody have actual code for me?

On 27/11/2015 at 04:12, xxxxxxxx wrote:


As you can see in the description of the BaseSound class, it has member functions to read it from and write it to a HyperFile. This means that a BaseSound is typically stored as a member variable of the host object and not in the object's BaseContainer. The Loudspeaker object, for example, does not use BaseSound at all, and the MoGraph Sound effector does not provide any access to its internally used data.

To play a sound, the GePlaySnd class is used. On the class' description page you will find an example of how to use it.

The "i" argument of GetSampleEx() is the index of the requested sample. You can get the total number of samples and the sample rate of the loaded sound file from the GeSndInfo structure returned by GetSoundInfo(). So the sample index for any given time is the sample rate multiplied by the time in seconds.
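This index arithmetic can be sketched in plain C++. Note that the function names below are mine, chosen for illustration; they are not SDK calls, and the sample rate is assumed to come from the GeSndInfo data mentioned above:

```cpp
#include <cstdint>

// Illustrative helper (not an SDK function): map a time in seconds to
// the index "i" of the sample taken at (or just before) that time.
int64_t SampleIndexForTime(double seconds, double sampleRate)
{
    return static_cast<int64_t>(seconds * sampleRate);
}

// The samples covering a given animation frame run from that frame's
// start index up to (but not including) the next frame's start index.
void SampleRangeForFrame(int frame, double fps, double sampleRate,
                         int64_t& first, int64_t& last)
{
    first = SampleIndexForTime(frame / fps, sampleRate);
    last  = SampleIndexForTime((frame + 1) / fps, sampleRate) - 1;
}
```

For example, at 25 fps and a 44100 Hz sample rate, frame 1 covers samples 1764 through 3527, and each index in that range can be passed as "i" to GetSampleEx().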

The sample count argument of Init() is the total sample count: the length in seconds multiplied by the sample frequency/rate (the number of samples per second, in Hz). Conversely, the total length of the sound in seconds is the total sample count divided by the given sample frequency (the second parameter).
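As a minimal sketch of that arithmetic (again, the function names are mine, for illustration only):

```cpp
#include <cstdint>

// Total sample count as expected by BaseSound::Init(): the length in
// seconds multiplied by the sample rate in Hz.
int64_t TotalSamples(double lengthSeconds, double sampleRate)
{
    return static_cast<int64_t>(lengthSeconds * sampleRate);
}

// Conversely, the length in seconds implied by a count/rate pair.
double LengthSeconds(int64_t sampleCnt, double sampleRate)
{
    return static_cast<double>(sampleCnt) / sampleRate;
}
```

So a three-second sound at 44100 Hz needs sample_cnt = 3 * 44100 = 132300, independent of the project's frame rate; the "25 fps, ten samples per frame" calculation from the question does not enter into it.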

To get sound frequencies and the intensity of each frequency you would have to apply a Fourier transform to the sample data. The SDK currently does not appear to include any tools for that; you would have to look into Fast Fourier Transform (FFT) algorithms or libraries.
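To make the idea concrete, here is a naive O(n^2) discrete Fourier transform that turns a block of samples into per-frequency magnitudes. It is only a sketch of the technique, not the effector's actual implementation; real code would use an FFT library such as FFTW or KissFFT:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Naive DFT: magnitude of each frequency bin for a block of samples.
// Bin k corresponds to the frequency k * sampleRate / n (in Hz), so a
// band such as 10-12 kHz is simply the set of bins inside that range.
std::vector<double> DftMagnitudes(const std::vector<double>& samples)
{
    const std::size_t n = samples.size();
    const double pi = std::acos(-1.0);
    std::vector<double> mags(n / 2 + 1); // real input: bins 0..n/2 suffice
    for (std::size_t k = 0; k < mags.size(); ++k)
    {
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t)
        {
            const double phase = -2.0 * pi * double(k) * double(t) / double(n);
            sum += samples[t] * std::complex<double>(std::cos(phase), std::sin(phase));
        }
        mags[k] = std::abs(sum) / double(n); // normalized magnitude of bin k
    }
    return mags;
}
```

Feeding it a pure sine tone yields a single dominant bin at the tone's frequency; summing the magnitudes of the bins inside a band gives the kind of per-band intensity the Sound effector appears to apply to its clones.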

Best wishes,

On 27/11/2015 at 15:07, xxxxxxxx wrote:

Thanks, I guess I'm beginning to understand the sound object now. (I realize that my sound is actually not in the Loudspeaker itself but in the sound track of the Loudspeaker.) So the BaseSound actually contains only the hull curve, and therefore a two-dimensional representation that is not broken down by frequencies. That is then also used to draw the curve through GetBitmap(), leading to the graphic representation I see in the timeline for the sound track, and in the powerslider when I activate the soundwave display. I suppose the Sound effector threw me off there, into believing it should already be frequency-split data.

I'm probably not mistaken in assuming that the Sound effector performs the FFT internally to apply a frequency intensity value to each clone. Of course, if I can't access that data directly, the only way to sample it (by frequency) would be to record each clone's changing value over time.

I could extract the effector's sound file by file name from the MGSOUNDEFFECTOR_FILE value, then use that to load the same file into a BaseSound of my own, and work from there.

Thanks again, I will consider my options for the use case I have in mind.