>> It seems that if shared memory is used, communication time grows
>> (almost) linearly with the amount of data. I must admit that I
>> believed this time was fairly constant.
>
> This is not possible. We have to copy data into shared memory,
> which, by definition, means that we have to copy each byte from
> process-specific memory to shared memory. At the root of this is the
> memcpy() function, which, although it is usually highly tuned and
> not like the simplistic pseudocode shown below, can be abstracted as
> the following operation:
>
> for (i = 0; i < size; ++i) {
>     *(dest++) = *(src++);
> }
>
> Which is, by definition, O(n).
>
> Make sense?
>
OK, my misconception about shared memory communication time stemmed
from the assumption that the OS provides some efficient mechanism to
share memory without copying. Obviously this is not the case here,
and the copying cannot be avoided. Thanks for the explanations.