great work - thank you!!
Setting send-timeout="0" on the <filter> makes it work without an exception. The one thing I'm still a bit worried about is that there are still two task-scheduler threads transferring the same file:
Only one of these two file transfers will result in a message, which is OK. But transferring the same file twice? Hmm...
07:24:30.942 [task-scheduler-6] DEBUG d.v.i.ftp.SingleFtpFileFilter - Accepted file indices_20111128.csv
07:24:30.942 [task-scheduler-2] DEBUG d.v.i.ftp.SingleFtpFileFilter - Accepted file indices_20111128.csv
07:24:31.271 [task-scheduler-2] INFO o.s.i.ftp.session.FtpSession - File have been successfully transfered to: xxxx/yyyy/indices_20111128.csv
07:24:31.271 [task-scheduler-6] INFO o.s.i.ftp.session.FtpSession - File have been successfully transfered to: xxxx/yyyy/indices_20111128.csv
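In case it clarifies what I'd expect from the filter, here is a rough plain-Java sketch of an "accept once" check that stays correct under concurrent pollers. This is only an illustration (it does not implement Spring Integration's actual FileListFilter contract, and the class name is made up), assuming filenames uniquely identify files:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical accept-once filter: each filename passes exactly once,
// even if two task-scheduler threads poll at the same moment.
public class AcceptOnceByNameFilter {

    // Thread-safe set of names already handed out to some poller thread.
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    // Set.add() on this set is atomic: of two concurrent callers with the
    // same name, exactly one gets 'true' and may transfer the file.
    public boolean accept(String fileName) {
        return seen.add(fileName);
    }
}
```

With something like this, the second task-scheduler thread in the log above would have been rejected before the transfer started.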
Concerning your second part - using the new FTP gateway instead: well, thank you, it is always good to question a certain approach! My use case is somewhere in the middle of the two scenarios:
- I'm interested in specific files only.
- Download is on demand (orchestrated by an overall workflow).
- But: I don't know whether my piece of information has already been published on the remote system when I start my process. So I want polling as well: I want to start polling, and if after, let's say, one hour my piece of interest hasn't been published on the remote system, I give up and continue with different actions in my overall workflow.
- I do not want to poll the remote system all day long.
- So this is what I want: polling, started on demand and only for a well-defined period of time.
So I thought that starting / stopping an inbound adapter might be a good idea. Or can I manage that with a gateway-adapter too?
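To make my idea concrete, this is roughly the configuration I have in mind - only a sketch, with made-up ids and channels, assuming the usual int/int-ftp namespaces: declare the adapter with auto-startup="false" and drive it from a control bus.

```xml
<!-- Sketch only: ids, channels and directories are placeholders. -->
<int-ftp:inbound-channel-adapter id="ftpInbound"
        session-factory="ftpSessionFactory"
        channel="ftpChannel"
        filter="singleFtpFileFilter"
        local-directory="file:local-dir"
        auto-startup="false">
    <int:poller fixed-rate="5000"/>
</int-ftp:inbound-channel-adapter>

<!-- Control bus: accepts SpEL command payloads on controlChannel -->
<int:control-bus input-channel="controlChannel"/>
```

The overall workflow would then send "@ftpInbound.start()" to controlChannel when it needs the file, and "@ftpInbound.stop()" once it arrives or the one-hour deadline passes. But maybe the gateway makes this unnecessary?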