May 15th, 2009, 10:46 AM
Job with an ItemWriter does not update the output file when restarted
A restartable job reads a comma-delimited (CSV) file with 200 rows; the ItemReader processes them with a commit interval of 10 and an ItemWriter writes them to a new pipe-delimited file.
<job id="CSVFileLoadJob" restartable="true">
<chunk reader="itemReaderForTest" processor="itemProcessor" writer="itemWriter" commit-interval="10"/>
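For reference, a complete restartable job definition in the Spring Batch 2.0 namespace might look like the following sketch (the step id and the step/tasklet wrappers are assumptions, not from the original config):

```xml
<job id="CSVFileLoadJob" restartable="true">
    <step id="loadCSVStep">
        <tasklet>
            <chunk reader="itemReaderForTest" processor="itemProcessor"
                   writer="itemWriter" commit-interval="10"/>
        </tasklet>
    </step>
</job>
```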
The CSV file has an error on line 195, so the job fails and the pipe-delimited output file contains 190 rows (exactly the committed chunks, as expected).
After manually fixing line 195 of the input file and restarting the job, the ItemReader works perfectly and picks up rows 191-200, but the output file still contains only 190 rows, even though the job completes successfully.
<beans:bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
    <beans:property name="saveState" value="true"/>
    <beans:property name="shouldDeleteIfExists" value="true"/>
    <beans:property name="resource" ref="outputResource"/>
    <beans:property name="lineAggregator">
        <beans:bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
            <beans:property name="delimiter" value="||"/>
            <beans:property name="fieldExtractor">
                <beans:bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                    <beans:property name="names" value="accountId,dataSourceTypeId,originalAccountNumber,accountNumber,primarySSN,updatedBy,updatedDate"/>
                </beans:bean>
            </beans:property>
        </beans:bean>
    </beans:property>
</beans:bean>
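For context on what saveState=true is supposed to do: FlatFileItemWriter records the position of the last committed write in the ExecutionContext, and on restart it truncates the file back to that position before appending. A minimal plain-Java sketch of that truncate-and-append mechanism (an illustration only, not the actual Spring Batch implementation):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class RestartableWriterDemo {
    /**
     * Appends lines to the file starting at the given committed offset,
     * first truncating anything written after the last commit. This is
     * roughly what FlatFileItemWriter does on restart when saveState=true.
     */
    static long writeFrom(Path file, long committedOffset, List<String> lines) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.setLength(committedOffset);      // discard any uncommitted tail
            raf.seek(committedOffset);
            for (String line : lines) {
                raf.write((line + "\n").getBytes("UTF-8"));
            }
            return raf.length();                 // new committed offset
        }
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("demo", ".txt");
        // First run: rows up to 190 committed, then the job fails.
        long offset = writeFrom(out, 0, List.of("rows 1-190"));
        // Restart: truncate to the saved offset and append rows 191-200.
        writeFrom(out, offset, List.of("rows 191-200"));
        System.out.println(Files.readAllLines(out));
    }
}
```

If the writer opens at the wrong offset on restart (or never reopens the file for appending), the reprocessed rows are silently dropped, which matches the symptom described above.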
Is there a special setting needed to allow the file to be updated on restart, or is something wrong in the configuration that prevents the file from being updated?
Thanks in advance.
May 15th, 2009, 11:14 AM
I think that what you are describing is related to http://jira.springframework.org/browse/BATCH-1225. It will be fixed for the 2.0.1 release.
May 15th, 2009, 11:27 AM
I'm not sure that's it. In my scenario I don't see any overwrites; the new records written during the re-run are simply never appended to the file. I changed the test, and here is what I found:
With the commit interval changed to 300, when the job failed at record 195 there were no entries in the output file (as expected, since no chunk had been committed). After fixing the input file and restarting, the job wrote all 200 rows.
So it looks like the issue is that rows from the uncommitted chunk are not written to the output file when the job is restarted.
May 15th, 2009, 12:42 PM
Since you have 200 records, try it with a commit interval of 10 and make record #105 fail. When you fix the record and restart, you will probably see that about 10 rows are missing from the middle of the file. (This is the bug I mentioned.)
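The chunk semantics behind that suggestion can be sketched in plain Java (a simulation of commit-interval behavior, not Spring Batch code): records are buffered and flushed only at each commit boundary, so a mid-chunk failure rolls back the whole chunk, and a correct restart resumes right after the last committed record:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkCommitDemo {
    /**
     * Processes records [start..total] in chunks of the given size,
     * flushing each chunk to 'written' only at the commit boundary.
     * If a record equals failAt, the current chunk is rolled back
     * and the index of the last committed record is returned.
     */
    static int run(int start, int total, int chunkSize, int failAt, List<Integer> written) {
        List<Integer> buffer = new ArrayList<>();
        int lastCommitted = start - 1;
        for (int record = start; record <= total; record++) {
            if (record == failAt) {
                return lastCommitted;           // failure: buffered chunk is discarded
            }
            buffer.add(record);
            if (buffer.size() == chunkSize) {
                written.addAll(buffer);         // commit boundary: flush the chunk
                buffer.clear();
                lastCommitted = record;
            }
        }
        written.addAll(buffer);                 // final partial chunk
        return total;
    }

    public static void main(String[] args) {
        List<Integer> written = new ArrayList<>();
        // First run: commit interval 10, record 105 fails -> records 1-100 committed.
        int lastCommitted = run(1, 200, 10, 105, written);
        // Restart after fixing the record: resume right after the last commit.
        run(lastCommitted + 1, 200, 10, -1, written);
        System.out.println(written.size()); // a correct restart yields all 200 rows
    }
}
```

The bug under discussion means the restart does not in fact resume writing at the last committed position, so the rows from the reprocessed chunks never reach the file.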
May 15th, 2009, 01:30 PM
Thanks for the detailed explanation, got it: the chunk of records in the failed transaction is lost when the job is restarted.
May 15th, 2009, 01:58 PM