As discussed in another thread I created a little sample app that uses JiBX and Axiom, to see how far I could make Spring-WS go. I profiled it using YourKit, and made some code changes to make it perform even faster.
Normally, Axiom caches all XML it reads in an internal buffer. However, you can also ask for a non-caching XMLStreamReader, which reads directly from the underlying transport. The drawback is, of course, that you can only read this data once. If you're using plain Axiom (without SWS), choosing between caching and non-caching requires a code change (see the JavaDocs here and here).
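Axiom's non-caching reader is built on the forward-only StAX streaming model. Plain StAX (bundled with the JDK from Java 6, a separate jar on Java 5) shows the "read once" behavior in isolation. This is an illustrative sketch, not Axiom's own API:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class StreamingReadDemo {

    // Pull every text event from a streaming (StAX) reader. The cursor
    // moves forward only: after this loop the document is consumed and
    // cannot be read again without re-parsing from the transport.
    static String collectText(String xml) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringBuilder text = new StringBuilder();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.CHARACTERS) {
                if (text.length() > 0) {
                    text.append(' ');
                }
                text.append(reader.getText());
            }
        }
        reader.close();
        return text.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(collectText(
                "<order><item>book</item><item>pen</item></order>"));
        // prints: book pen
    }
}
```

A caching reader (Axiom's default) buffers these events as it goes, which is what makes a second read possible, at the cost of keeping the data in memory.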
I didn't like that much, so instead, I added a new property to the AxiomSoapMessageContextFactory: payloadCaching. To quote from the Javadocs:
By default, payloadCaching is true. Pretty self-explanatory, I hope. Since the bulk of the message is bound to be in the message payload, I only made caching configurable for the payload.
Setting this property to false will read the contents of the body directly from the TransportRequest. However, [b]with payload caching disabled, the payload can only be read once[/b]. This means that endpoint mappings or interceptors which are based on the message payload (such as the PayloadRootQNameEndpointMapping, the PayloadValidatingInterceptor, or the PayloadLoggingInterceptor) cannot be used. Instead, use an endpoint mapping that does not consume the payload (e.g. the SoapActionEndpointMapping).
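As a sketch of what that looks like (the exact package name may differ between Spring-WS versions, and the SOAP action and endpoint bean name below are made up), a SoapActionEndpointMapping could be configured along these lines:

```xml
<bean id="endpointMapping"
      class="org.springframework.ws.soap.server.endpoint.mapping.SoapActionEndpointMapping">
    <!-- Routes on the SOAPAction transport header, so the payload stays unread -->
    <property name="mappings">
        <props>
            <prop key="http://example.com/GetOrderRequest">getOrderEndpoint</prop>
        </props>
    </property>
</bean>
```

Because this mapping only inspects the SOAPAction header, it never touches the body, which is what makes it safe to combine with payload caching turned off.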
So in your application context, instead of defining:

<bean id="messageContextFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageContextFactory"/>

you can now define:

<bean id="messageContextFactory" class="org.springframework.ws.soap.axiom.AxiomSoapMessageContextFactory">
    <property name="payloadCaching" value="false"/>
</bean>

to get that extra bit of performance.
Now, I also did some benchmarks to test the performance of this improvement. I deployed the SWS app with JiBX marshalling on a dual 3 GHz Xeon, running Java 5 and the Resin app server. I then used the Apache Bench (ab) tool to send a 68 KB SOAP request 2000 times, using 200 concurrent threads. Obviously, performance tricks like these only make sense with larger messages. Note that, because we use a marshaller, we read the entire payload. Things are very different when you only read a part of it.
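For reference, an ab invocation along these lines reproduces that load pattern (the request file, host, and service path are placeholders, not the ones I used):

```
ab -n 2000 -c 200 -p request.xml -T "text/xml; charset=UTF-8" http://localhost:8080/myapp/services
```

Here -n is the total request count, -c the concurrency level, -p the file POSTed as the request body, and -T its content type.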
I ran three tests, one for each context factory setting. Here are the highlights:
SaajSoapMessageContextFactory with SAAJ 1.3
Requests per second: 34.35 [#/sec] (mean)
Time per request: 5821.728 [ms] (mean)
Time per request: 29.109 [ms] (mean, across all concurrent requests)
Transfer rate: 3.52 [Kbytes/sec] received
AxiomSoapMessageContextFactory with payload caching
Requests per second: 31.02 [#/sec] (mean)
Time per request: 6448.146 [ms] (mean)
Time per request: 32.241 [ms] (mean, across all concurrent requests)
Transfer rate: 3.18 [Kbytes/sec] received
AxiomSoapMessageContextFactory without payload caching
Requests per second: 182.22 [#/sec] (mean)
Time per request: 1097.548 [ms] (mean)
Time per request: 5.488 [ms] (mean, across all concurrent requests)
Transfer rate: 18.68 [Kbytes/sec] received
Quite a drastic increase :-)
In conclusion, here are some best practices regarding message contexts:
- If you require attachment (SwA) support on the server side, use the SaajSoapMessageContextFactory. Axiom does not support adding SwA attachments.
- If you need to read the payload multiple times from the message, use the SaajSoapMessageContextFactory. Axiom is actually slower here, though not by much.
- If you only read a part of a big payload using SAX or StAX (multiple times or not), use the AxiomSoapMessageContextFactory, with payload caching. Obviously Axiom does not cache what it does not read.
- If you only want to read the payload once (completely or not), use the AxiomSoapMessageContextFactory, without payload caching.
Since this is only a configuration option, you can play around with it during development (to have more logging, for instance), and change it when deploying. After all, it does not require any code change.