The title of this post probably doesn't make much sense on its own; this use case was hard to give a name. So I'll describe it in full here.
Imagine the flow is triggered by an event of some sort. A single message arrives. That message then needs to be split, in order, into 60 separate smaller messages. Then, once per second, one of those 60 messages must be released to the next endpoint, an outbound channel adapter, where the flow terminates.
Simple scenario solution
The above can be done by using a gateway to publish event messages to the first channel. A splitter then splits the large message into a series of small messages, which are sent to a bounded queue channel. Finally, a poller on the next endpoint retrieves one message every second. If I'm correct, that should cover the simple scenario.
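In code terms, the simple pipeline could look something like the following plain-Java sketch. To be clear, `SimpleFlow`, `split`, and `pollOnce` are invented names for illustration, not Spring Integration APIs; a real flow would wire up a gateway, a splitter, a `QueueChannel` with capacity, and a poller instead.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative sketch of the simple scenario: splitter -> bounded queue -> poller.
public class SimpleFlow {
    static final int CAPACITY = 60;
    final Queue<String> channel = new ArrayDeque<>(CAPACITY);

    // "Splitter": break the large message into ordered smaller messages.
    void split(String largeMessage, int parts) {
        for (int i = 1; i <= parts; i++) {
            channel.add(largeMessage + "-part" + i); // ordered FIFO insert
        }
    }

    // "Poller": on each one-second tick, release exactly one message downstream.
    String pollOnce() {
        return channel.poll(); // null once the queue is drained
    }

    public static void main(String[] args) {
        SimpleFlow flow = new SimpleFlow();
        flow.split("M1", 3);
        List<String> released = new ArrayList<>();
        String m;
        while ((m = flow.pollOnce()) != null) released.add(m);
        System.out.println(released); // [M1-part1, M1-part2, M1-part3]
    }
}
```

The one-second cadence would come from the poller's fixed-rate trigger rather than anything in the code above; here each `pollOnce()` call simply stands in for one tick.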
Unfortunately the simple scenario is a little too simple, and real life is not like that. We have several event messages arriving continuously, in real time. And once each of these larger messages has been split into smaller messages, the smaller messages from all the larger messages must be ordered in an interleaved yet FIFO fashion. Here is an example.
A large message M1 gets split into A1, A2, A3
A large message M2 gets split into B1, B2, B3
A large message M3 gets split into C1, C2, C3
The outgoing ordering of the smaller messages to the next endpoint, one batch per second, must be:
Second 1: A1, B1, C1
Second 2: A2, B2, C2
Second 3: A3, B3, C3
That is what I mean by interleaving smaller messages originating from multiple different larger messages.
Complex scenario solution
For the complex scenario, extending the simple scenario solution to multiple messages simply doesn't work. The bounded queue channel may need to be arbitrarily large given a high volume of inbound messages into the flow, and an unbounded queue channel carries at least a theoretical risk of running out of memory. Also, a single queue wouldn't allow us to interleave as shown above.
The only solution I've been able to come up with is for the first endpoint (E1) to split the large message into a FIFO queue and then relay that queue to the next endpoint (E2), which maintains a set of such queues. Every second, E2 pulls the head of every registered FIFO queue in the set and sends those messages to the next endpoint (E3), the outbound channel adapter where the flow terminates. This last step could be E2 publishing to a gateway that E3 is listening on, or E3 could potentially poll E2.
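A minimal plain-Java sketch of the E2 idea, assuming E1 hands over each split message as a whole FIFO queue (the names `InterleavingEndpoint`, `register`, and `tick` are mine, not Spring Integration types):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

// Sketch of E2: holds one FIFO queue per split large message and, on each
// one-second tick, emits the head of every registered queue downstream to E3.
public class InterleavingEndpoint {
    private final Set<Queue<String>> queues = new LinkedHashSet<>();

    // E1 relays a freshly split large message here as a whole FIFO queue.
    public void register(Queue<String> queue) {
        queues.add(queue);
    }

    // Called once per second; returns the batch sent to E3 on this tick.
    public List<String> tick() {
        List<String> batch = new ArrayList<>();
        Iterator<Queue<String>> it = queues.iterator();
        while (it.hasNext()) {
            Queue<String> q = it.next();
            String head = q.poll();
            if (head != null) batch.add(head);
            if (q.isEmpty()) it.remove(); // drop exhausted queues
        }
        return batch;
    }

    public static void main(String[] args) {
        InterleavingEndpoint e2 = new InterleavingEndpoint();
        e2.register(new ArrayDeque<>(List.of("A1", "A2", "A3")));
        e2.register(new ArrayDeque<>(List.of("B1", "B2", "B3")));
        e2.register(new ArrayDeque<>(List.of("C1", "C2", "C3")));
        System.out.println(e2.tick()); // [A1, B1, C1]
        System.out.println(e2.tick()); // [A2, B2, C2]
        System.out.println(e2.tick()); // [A3, B3, C3]
    }
}
```

In a real flow, `tick()` would be driven by a scheduled trigger, and new queues could be registered concurrently while ticks are running, so some synchronization (or a concurrent collection) would be needed.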
At this point one could ask the question I asked myself: why is E2, being an endpoint, doing the job of a channel? I have thought about writing a custom channel to do E2's job in as generic a way as possible — essentially maintaining a series of queues, taking the head off each queue, and sending those messages to the next endpoint. But before I do this I wanted to ask the experts here. I've also considered a traditional resequencer and splitter, but I can't see how they would help in this case.
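For completeness, a rough sketch of what such a generic custom channel might look like. Everything here is hypothetical: the `send`/`receive` shapes only loosely mimic a channel contract (a real version would implement Spring Integration's `MessageChannel`/`PollableChannel` interfaces and take the correlation key from a message header), and `receive()` hands messages back one at a time, round-robin across the per-message queues, so a fixed-rate poller would release one message per tick rather than one per queue.

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical "interleaving channel": send() files each small message under
// its parent message's correlation key; receive() returns messages round-robin
// across all keys, preserving FIFO order within each key.
public class InterleavingChannel {
    private final Map<String, Queue<String>> queues = new LinkedHashMap<>();

    public void send(String correlationKey, String payload) {
        queues.computeIfAbsent(correlationKey, k -> new ArrayDeque<>()).add(payload);
    }

    // Returns the next message in round-robin order, or null when empty.
    public String receive() {
        Iterator<Map.Entry<String, Queue<String>>> it = queues.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Queue<String>> e = it.next();
            String head = e.getValue().poll();
            if (head == null) { it.remove(); continue; } // drop exhausted key
            // Rotate: move this key to the back so the next receive()
            // visits a different large message's queue first.
            it.remove();
            if (!e.getValue().isEmpty()) queues.put(e.getKey(), e.getValue());
            return head;
        }
        return null;
    }

    public static void main(String[] args) {
        InterleavingChannel ch = new InterleavingChannel();
        ch.send("M1", "A1"); ch.send("M1", "A2");
        ch.send("M2", "B1"); ch.send("M2", "B2");
        // Round-robin, FIFO within each key:
        System.out.println(ch.receive()); // A1
        System.out.println(ch.receive()); // B1
        System.out.println(ch.receive()); // A2
        System.out.println(ch.receive()); // B2
    }
}
```

As with the E2 sketch, a production version would also need thread safety, since senders and the poller would hit the channel concurrently.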
If anything is unclear or if you have any ideas please let me know.