The issue you are seeing is known as head loss. The loss is due to the delay in establishing multicast "connectivity" from the receiving machine back to the source machine. This delay depends on the OS and on the intermediate routers/switches, and it cannot be resolved at the application/LBM level.
There are various ways to work around head loss, all done on the source/sending application side:
1. Using late join. This in turn depends on the configured retention size/age.
2. Sending dummy messages for a few seconds (tune the duration by trial and error to cover the time taken for the multicast join). The receiving application must be able to recognize and discard these dummy messages.
3. Using MIM for the first few messages and then continuing with regular streaming. MIM is itself multicast, so you may end up seeing head loss on the MIM channel too.
4. Source application sleep. The sending thread sleeps for a few seconds (again tuned by trial and error) before starting to stream. The example program that ships with the installation uses this method.
The most commonly used methods are 'sending application sleep' and 'sending dummy messages'.
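The dummy-message approach needs a convention the receiver can recognize so primers are discarded rather than processed. A minimal sketch of one such convention follows; the type byte and helper names are invented for illustration and are not part of the UM/LBM API:

```python
# Sketch of a framing convention for "dummy" primer messages.
# The type byte and helper names are illustrative, not LBM API.

PRIMER = b"\x00"   # sent for the first few seconds to absorb head loss
DATA = b"\x01"     # real application payload

def frame(msg_type: bytes, payload: bytes) -> bytes:
    """Prepend a one-byte type so the receiver can tell primers apart."""
    return msg_type + payload

def deliver(wire_msg: bytes):
    """Receiver side: drop primers, return real payloads."""
    msg_type, payload = wire_msg[:1], wire_msg[1:]
    if msg_type == PRIMER:
        return None          # silently discard the dummy message
    return payload

# Sender: stream primers during the join window, then real data.
primers = [frame(PRIMER, b"") for _ in range(3)]
real = frame(DATA, b"tick:ABCD:10.5")

received = [deliver(m) for m in primers + [real]]
```

The content of the dummy messages does not matter; what matters is that the receiver can distinguish them cheaply, and that they are sent long enough to span the join delay.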
Sending dummy messages for a few seconds: what sort of messages would I need to send here? I thought "head loss" was the issue (although I didn't know the term!)... so we had the app subscribe to data on each defined lbtrm transport, but this did not seem to resolve the "head loss".
So for example, if I have all symbols beginning with A on mcast socket 126.96.36.199:20070, I was hoping that if I primed the MIM with a subscription to, say, APPL, then topic ABCD would not have to worry about the "head loss" issue, but that seemed not to be the case. I suppose I need to know at what level LBM shares lbtrm joins.
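On the question of where joins are shared: with LBT-RM the multicast join happens per transport session (a multicast group:port pair), not per topic, so topics mapped to the same session reuse one join; any remaining per-topic loss is more likely topic resolution than the IGMP join itself. A toy model of that layering (class, method names, and the multicast address are invented for illustration, not LBM API):

```python
# Toy model of LBT-RM join sharing: the multicast join is paid once
# per transport session (group:port); topics are layered on top of it.
# Class and method names are invented for illustration.

class TransportSession:
    def __init__(self, group: str, port: int):
        self.group, self.port = group, port
        self.joined = False
        self.topics = set()

    def subscribe(self, topic: str) -> bool:
        """Return True if this subscription triggered a new multicast join."""
        self.topics.add(topic)
        if not self.joined:
            self.joined = True   # first topic on the session pays the join cost
            return True
        return False             # later topics reuse the existing join

session = TransportSession("239.1.2.3", 20070)  # hypothetical group:port
first_join = session.subscribe("APPL")   # priming subscription joins the group
second_join = session.subscribe("ABCD")  # shares the already-established join
```

If priming one topic did not help the others, it is worth verifying that the topics really resolve to the same transport session, and whether the loss you see is join delay or per-topic resolution delay.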
Is UMP an option?
I guess not. There are also events/callbacks for EOS/BOS and for topic-resolution status. Have you looked at those?
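One way to use a BOS-style event on the receiver is to treat it as the signal that the transport stream is actually up before trusting delivery. A minimal state-machine sketch follows; the event names echo LBM's BOS/EOS notions, but the class itself is invented and is not the LBM callback API:

```python
# Sketch of a receiver that gates delivery on a BOS-style event.
# Event names echo LBM's BOS/EOS callbacks; the class is invented.

class GatedReceiver:
    def __init__(self):
        self.stream_open = False
        self.delivered = []

    def on_event(self, event: str, payload: bytes = b""):
        if event == "BOS":            # beginning of transport stream
            self.stream_open = True
        elif event == "EOS":          # end of stream: stop delivering
            self.stream_open = False
        elif event == "DATA" and self.stream_open:
            self.delivered.append(payload)
        # DATA arriving before BOS is dropped -- analogous to head loss

rcv = GatedReceiver()
rcv.on_event("DATA", b"lost")    # arrives before BOS: dropped
rcv.on_event("BOS")
rcv.on_event("DATA", b"kept")
rcv.on_event("EOS")
```

Watching for BOS does not prevent head loss, but it tells the sender-side workarounds above when the join window has closed, which can shorten the trial-and-error tuning.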