Periodic connection closed exception in server.log


This is not a new issue, but the details of the trace have changed across releases. I suspect this is just a client disconnecting while using a longer-running endpoint (such as video playback or file downloading), and I’m wondering how to suppress these particular exceptions, or whether it’s a bug in Catalina or Grizzly. I have seen code in GlassFish that suppresses ClientAbortException, and maybe that code just isn’t up to date with the newer types of listeners and connectors. We see this 1–3 times a day on average. OS is Windows, JDK is Oracle 21, Payara version is 6.2023.12.

[2024-05-30T14:57:54.542-0400] [Payara 6.2023.12] [WARNING] [] [jakarta.enterprise.web] [tid: _ThreadID=231 _ThreadName=http-thread-pool::http-listener-2(46)] [timeMillis: 1717095474542] [levelValue: 900] [[
  StandardWrapperValve[default]: Servlet.service() for servlet default threw exception Connection closed
	at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(
	at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(
	at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(
	at org.glassfish.grizzly.nio.NIOConnection.doClose(
	at org.glassfish.grizzly.nio.NIOConnection$
	at org.glassfish.grizzly.nio.DefaultSelectorHandler.execute(
	at org.glassfish.grizzly.nio.NIOConnection.terminate0(
	at org.glassfish.grizzly.nio.transport.TCPNIOConnection.terminate0(
	at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(
	at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(
	at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.processAsync(
	at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(
	at org.glassfish.grizzly.ProcessorExecutor.execute(
	at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(
	at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(
	at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(
	at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(
	at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(
	at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(
	at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(
	at org.glassfish.grizzly.nio.SelectorRunner.doSelect(
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(
	at org.glassfish.grizzly.threadpool.AbstractThreadPool$
	at java.base/
Caused by: An established connection was aborted by the software in your host machine
	at java.base/ Method)
	at java.base/
	at java.base/
	at java.base/
	at java.base/
	at java.base/
	at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(
	at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeSimpleBuffer(
	at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(
	... 16 more




Do you use HTTPS?


Hi @StevenHachel - yes, we do use HTTPS.

Then it’s simple. Unfortunately, the Payara developers don’t really take care of this problem, which is really annoying.

Payara Administration Console:
You have to go to Configurations → server-config → Network Config → http-listener-2 (the configured https listener) → http → deactivate HTTP/2.
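For anyone scripting this instead of clicking through the console, the same change can likely be made with `asadmin set` on the listener's `http` element. This is a sketch, not verified on every Payara version — the dotted path assumes the default `server-config` and that your HTTPS listener is really named `http-listener-2`; adjust both to match your setup, and confirm the attribute name against your version's docs:

```shell
# Disable HTTP/2 on the HTTPS listener (assumed name: http-listener-2),
# then verify the new value.
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.http2-enabled=false
asadmin get configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.http2-enabled
```

A restart of the domain (or at least the listener) may be needed for the change to take effect.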

Then everything will work again and you won’t have constant failures. With a very demanding JSF application this can be disastrous: containers not reloading, constant JavaScript errors, incorrect page loading, etc.


Hi @StevenHachel - we actually disabled HTTP/2 years ago, for reasons like those you describe.

We aren’t actually having any application issues of any kind. These traces just show up periodically. My assumption is that these are just client disconnects from long file downloads or video playback, not actual errors. I was just hoping there was a clear way to suppress this particular stack trace as it makes it hard to find real errors in the server.log file.
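For what it’s worth, one way to suppress just this trace without lowering the whole `jakarta.enterprise.web` logger level is to attach a `java.util.logging.Filter` to that logger and drop WARNING records whose message or attached throwable mentions "Connection closed". This is only a sketch under those assumptions — the logger name is taken from the trace above, the matching is a plain substring check, and you would need to install it yourself (e.g. from a `ServletContextListener` at startup); it is not a built-in Payara feature:

```java
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

/**
 * Drops the noisy "Connection closed" WARNING records that appear when a
 * client aborts a long download, while letting everything else through.
 */
public class ConnectionClosedFilter implements Filter {

    @Override
    public boolean isLoggable(LogRecord record) {
        // Never swallow anything more severe than WARNING.
        if (record.getLevel().intValue() > Level.WARNING.intValue()) {
            return true;
        }
        String msg = record.getMessage();
        Throwable thrown = record.getThrown();
        boolean connectionClosed =
                (msg != null && msg.contains("Connection closed"))
                || (thrown != null
                    && String.valueOf(thrown.getMessage()).contains("Connection closed"));
        return !connectionClosed;
    }

    /** Attach the filter to the logger that produced the trace in server.log. */
    public static void install() {
        Logger.getLogger("jakarta.enterprise.web")
              .setFilter(new ConnectionClosedFilter());
    }
}
```

One caveat: a `Filter` set this way applies only to that exact logger instance, and Payara’s own log manager configuration could replace it, so it is worth verifying after deployment that real warnings still reach server.log.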

Ah, okay. Too bad I couldn’t help.