Thanks for creating a JIRA entry for this, Harald.
You mentioned a couple of posts back that we might even get lucky and have this feature available in version 2.1.1. What's needed for this feature to get a higher priority and be released as part of 2.1.1? I'm asking because my "put-the-data-that-the-partitioner-code-generates-in-the-job-execution-context-on-a-separate-step" workaround only sometimes works: the data sometimes gets too big to fit in the DB column.
I'd be able to avoid this issue if the data were split per Step Execution Context, i.e. if the actual partitioning code were executed in the Partitioner class. Here's the exception I get when the data I've gathered becomes very large:
SQL [UPDATE BATCH_JOB_EXECUTION_CONTEXT SET SHORT_CONTEXT = ?, SERIALIZED_CONTEXT = ? WHERE JOB_EXECUTION_ID = ?];
Data truncation: Data too long for column 'SERIALIZED_CONTEXT' at row 1;
nested exception is com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'SERIALIZED_CONTEXT' at row 1
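To illustrate what I mean by splitting per Step Execution Context: below is a minimal, self-contained sketch of the idea. It does not use the real Spring Batch API — plain `Map`s stand in for Spring Batch's `ExecutionContext`, and the class and method names (`PartitionSketch`, `partition`) are hypothetical. The point is that each partition carries only its own slice of the data, so no single serialized context row has to hold everything.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionSketch {

    // Stand-in for a Partitioner.partition(gridSize)-style method:
    // each partition gets its own small context map (the per-step
    // execution context), instead of one huge job-level context.
    static Map<String, Map<String, Object>> partition(List<String> ids, int gridSize) {
        Map<String, Map<String, Object>> contexts = new HashMap<>();
        // Ceiling division so every id lands in some partition.
        int chunk = (ids.size() + gridSize - 1) / gridSize;
        for (int i = 0; i < gridSize; i++) {
            int from = Math.min(i * chunk, ids.size());
            int to = Math.min((i + 1) * chunk, ids.size());
            Map<String, Object> stepContext = new HashMap<>();
            // Only this partition's slice goes into its context,
            // keeping each serialized context small.
            stepContext.put("ids", new ArrayList<>(ids.subList(from, to)));
            contexts.put("partition" + i, stepContext);
        }
        return contexts;
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            ids.add("id" + i);
        }
        Map<String, Map<String, Object>> parts = partition(ids, 3);
        System.out.println(parts.size()); // 3 partitions, each with its own slice
    }
}
```

With this shape, the large dataset never has to be serialized into a single `BATCH_JOB_EXECUTION_CONTEXT` row, which is exactly what overflows the `SERIALIZED_CONTEXT` column in my workaround.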