Can anyone suggest a mechanism by which we might configure a Spring Integration channel to implement comparator-based ordering (à la PriorityChannel) without limiting that queue to being in-memory? (We also need to ensure that messages added to this queue are never lost before being delivered to and processed by a recipient, presumably by building on a highly available AMQP or JMS implementation, so the in-memory java.util.concurrent.PriorityBlockingQueue-based implementation of PriorityChannel will not suffice.)
Thanks for any suggestions!
If your JMS provider supports JMSPriority (providers are not obliged to act on it), the JMS-backed channel should work for you out of the box: it sets the JMSPriority from the MessageHeaders.PRIORITY header of the Spring Integration message when a message is sent to the queue/topic backing the channel.
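For example, a minimal sketch of such a JMS-backed channel (the queue name and connection-factory bean name below are illustrative assumptions, not from the thread):

```xml
<!-- Sketch only: queue name and connection-factory bean name are assumptions. -->
<int-jms:channel id="priorityBackedChannel"
                 queue-name="si.priority.queue"
                 connection-factory="connectionFactory"/>
```

Messages sent to `priorityBackedChannel` with a `priority` header should then carry that value as JMSPriority, provided the broker honors it.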
AMQP does not have a concept of message priority.
You can simulate it by having n AMQP-backed channels and routing to them (e.g. HIGH, MED, LOW). It's only a simulation because it doesn't prevent LOW or MED messages from being processed before HIGH ones, but as long as the ratios are reasonable (few HIGH, some MED, many LOW), the simulation is reasonable. You can further tweak things by altering the number of consumers on each channel (more consumers on the HIGH-priority channel, etc.).
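A sketch of that simulation in XML config (the channel ids, header name, mappings, and consumer counts here are illustrative assumptions):

```xml
<!-- Route on a custom header; all names below are illustrative. -->
<int:header-value-router input-channel="inbound" header-name="priorityBand"
                         default-output-channel="low">
    <int:mapping value="HIGH" channel="high"/>
    <int:mapping value="MED"  channel="med"/>
    <int:mapping value="LOW"  channel="low"/>
</int:header-value-router>

<!-- More concurrent consumers drain the HIGH channel faster. -->
<int-amqp:channel id="high" connection-factory="rabbitConnectionFactory"
                  concurrent-consumers="5"/>
<int-amqp:channel id="med"  connection-factory="rabbitConnectionFactory"
                  concurrent-consumers="2"/>
<int-amqp:channel id="low"  connection-factory="rabbitConnectionFactory"
                  concurrent-consumers="1"/>
```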
No static set of priorities can provide the capabilities of a custom comparator. For example, one category of messages we'll be delivering through this channel has a time component. Messages of that category with identical topics are prioritized such that newer messages have higher priority than older ones. I have not looked in detail at the QoS options available... is it somehow possible to gain such functionality with a combination of prioritization and QoS settings?
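For illustration, the newest-first-per-topic ordering described above could be expressed as a plain Comparator over an in-memory queue (the `TickerMessage` type and its field names are hypothetical, just to make the example self-contained):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class NewestFirstDemo {

    // Hypothetical message type: a topic plus an event timestamp.
    static final class TickerMessage {
        final String topic;
        final long timestampMillis;
        TickerMessage(String topic, long timestampMillis) {
            this.topic = topic;
            this.timestampMillis = timestampMillis;
        }
    }

    // Newer messages sort ahead of older ones; ties broken by topic for determinism.
    static final Comparator<TickerMessage> NEWEST_FIRST =
            Comparator.comparingLong((TickerMessage m) -> m.timestampMillis).reversed()
                      .thenComparing(m -> m.topic);

    public static void main(String[] args) {
        PriorityQueue<TickerMessage> queue = new PriorityQueue<>(NEWEST_FIRST);
        queue.add(new TickerMessage("ACME", 1_000L));
        queue.add(new TickerMessage("ACME", 3_000L));
        queue.add(new TickerMessage("ACME", 2_000L));
        System.out.println(queue.poll().timestampMillis); // prints 3000
    }
}
```

The same comparator could be handed to PriorityChannel's comparator-accepting constructor, but as noted above that keeps the queue in-memory.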
Just to clarify one point... AMQP does have a concept of priority, but it's not currently supported by RabbitMQ (it IS planned however). The minimum requirement according to the spec is to support just 2 levels, but a broker implementation MAY support up to 10 levels. Here is a useful resource for comparing RabbitMQ features to the spec: http://www.rabbitmq.com/specification.html (search for "priority" and you will see what I'm referring to above).
@Dale - sounds like you might need a PriorityQueue that's backed by a message store; it would still keep messages in memory (or at least as much metadata as the comparator needs), and manage the message store for the sole purpose of restoring the in-memory state during startup.
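A minimal sketch of that idea, assuming a hypothetical `MessageStore` abstraction (the type names and store semantics here are illustrative, not an existing Spring Integration API):

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: ordering lives in memory, messages live in a durable store, and the
// in-memory queue is rebuilt from the store on startup so a restart loses nothing.
public class StoreBackedPriorityQueue<M> {

    // Hypothetical stand-in for a real persistent message store.
    interface MessageStore<M> {
        void save(M message);
        void delete(M message);
        List<M> loadAll();   // replayed on startup to restore ordering
    }

    private final PriorityQueue<M> inMemory;
    private final MessageStore<M> store;

    StoreBackedPriorityQueue(Comparator<M> comparator, MessageStore<M> store) {
        this.inMemory = new PriorityQueue<>(comparator);
        this.store = store;
        this.inMemory.addAll(store.loadAll()); // restore after a restart
    }

    synchronized void send(M message) {
        store.save(message);       // persist first so nothing is lost
        inMemory.add(message);
    }

    synchronized M receive() {
        M message = inMemory.poll();
        if (message != null) {
            store.delete(message); // removed only once handed to the consumer
        }
        return message;
    }
}
```

Note that deleting on receive still risks loss if the consumer crashes mid-processing; deferring the delete until processing completes is the behavior mentioned below for the 2.2 release.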
We don't currently have that; if you decide to implement something, you may want to consider submitting it to the extension repository.
With the upcoming 2.2 release, the poller can be configured to remove the message from the store only after the downstream flow completes.
That's not a bad idea, except that this channel is also used to distribute work to a cluster full of endpoints in other containers (presumably on other boxes), and purely in-memory solutions also have "purely in one container" drawbacks.
After talking through my understanding of the desired behavior with a colleague, I'm uncertain that this design is internally consistent, probably because the terms "time sensitive" and "non-time-sensitive" are not accurate. Both are time sensitive, but certain content could be dropped on the floor if a more timely version is available (think stock-ticker data, or sporting-event scoreboard updates), so I was trying to figure out a way to avoid expending computation on "stale" data by using priority queues within a channel. I am starting to think I was barking up the wrong tree: no comparator we could describe would correctly order the time-sensitive messages without potentially starving the other class of messages. I need to go back to the source of this use case and learn precisely what the time-sensitivity requirements are.
We're gaining plenty of benefit from using this open source resource (thank you!), so should we develop a generic solution to this (or another) sub-problem, I will definitely lobby to have us contribute it back to the cause.