
JSch SFTP connections getting stuck

We have a Java 7 codebase where we use Apache Commons VFS2 v2.2, which uses JSch 0.1.54 as the SFTP provider.

The use case is to transfer files to a remote host via SFTP. But every now and then, the file upload process gets stuck. After taking a thread dump of the application, we found that both threads (t1, which sends data to the remote SFTP server, and t2, which receives data from it) are in a wait state forever. Below is the thread dump snapshot.

JSch session thread:

"Connect thread remote.ftp.com session" daemon prio=10 tid=0x00007f99cc243000 nid=0x144 in Object.wait() [0x00007f9985606000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
    at java.io.PipedInputStream.receive(PipedInputStream.java:231)
    - locked <0x000000043eda02d8> (a com.jcraft.jsch.Channel$MyPipedInputStream)
    at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
    at com.jcraft.jsch.IO.put(IO.java:64)
    at com.jcraft.jsch.Channel.write(Channel.java:438)
    at com.jcraft.jsch.Session.run(Session.java:1459)
    at java.lang.Thread.run(Thread.java:748)

Application thread that uploads the file data:

"akka.actor.default-dispatcher-19" prio=10 tid=0x00007f99d4012000 nid=0xea in Object.wait() [0x00007f9988785000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.jcraft.jsch.Session.write(Session.java:1269)
    - locked <0x0000000440a61b48> (a com.jcraft.jsch.ChannelSftp)
    at com.jcraft.jsch.ChannelSftp.sendWRITE(ChannelSftp.java:2646)
    at com.jcraft.jsch.ChannelSftp.access$100(ChannelSftp.java:36)
    at com.jcraft.jsch.ChannelSftp$1.write(ChannelSftp.java:806)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    - locked <0x0000000440aab240> (a org.apache.commons.vfs2.provider.sftp.SftpFileObject$SftpOutputStream)
    at org.apache.commons.vfs2.util.MonitorOutputStream.write(MonitorOutputStream.java:104)
    - locked <0x0000000440aab240> (a org.apache.commons.vfs2.provider.sftp.SftpFileObject$SftpOutputStream)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    - locked <0x0000000440aac278> (a org.apache.commons.vfs2.provider.DefaultFileContent$FileContentOutputStream)
    at org.apache.commons.vfs2.util.MonitorOutputStream.write(MonitorOutputStream.java:104)
    - locked <0x0000000440aac278> (a org.apache.commons.vfs2.provider.DefaultFileContent$FileContentOutputStream)
    at org.apache.commons.vfs2.provider.DefaultFileContent.write(DefaultFileContent.java:741)
    at org.apache.commons.vfs2.provider.DefaultFileContent.write(DefaultFileContent.java:720)
    at org.apache.commons.vfs2.provider.DefaultFileContent.write(DefaultFileContent.java:691)
    at org.apache.commons.vfs2.provider.DefaultFileContent.write(DefaultFileContent.java:707)
    at org.apache.commons.vfs2.FileUtil.copyContent(FileUtil.java:78)
    at org.apache.commons.vfs2.provider.AbstractFileObject.copyFrom(AbstractFileObject.java:289)

After looking at the JSch library's codebase, here is what I believe is happening.

  1. The application thread uploads the file data in chunks of 4KB.
  2. After every chunk write, the application reads the input socket for any acknowledgements until the input socket buffer is empty.
  3. While writing a chunk, it checks the SSH window size. If the window is smaller than the payload size, it waits until the remote server resizes it. (This is where my application thread is waiting forever.) The resize message is received by the SSH session thread, which updates the channel object, after which the application thread proceeds with writing.
  4. In a separate thread, the session listens for incoming data from the remote server. Depending on the message it receives, it takes the relevant action, such as resizing the channel window or passing an acknowledgement message on to the channel (read: the application) for consumption.
  5. When a message arrives for channel consumption, it is written to a PipedOutputStream which is linked to a PipedInputStream. This input stream is read by the application thread for acknowledgement messages. If the application thread fails to read any messages, the pipe's buffer fills up, and the session thread goes into a wait state until the application reads some data. (This is where the session thread is waiting forever.)

Now both threads are waiting on each other, so this is effectively a deadlock.
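To make the blocking in step 5 concrete, here is a minimal, self-contained sketch (not JSch code; the class name and buffer sizes are mine) showing how a thread writing to a PipedOutputStream parks inside PipedInputStream.awaitSpace() once the connected pipe's buffer fills up and nobody reads from it:

import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeBlockDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        // The default pipe buffer here is only 1KB; JSch uses its own buffer size,
        // but the blocking behaviour is the same once that buffer is full.
        PipedInputStream in = new PipedInputStream(out);

        Thread sessionThread = new Thread(() -> {
            byte[] ack = new byte[28]; // roughly the size of one acknowledgement message
            try {
                while (true) {
                    out.write(ack); // blocks in PipedInputStream.awaitSpace() once the pipe is full
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "session-thread");

        sessionThread.start();
        // Nobody ever reads from 'in' (the application thread is stuck elsewhere waiting
        // for the window-resize message), so the writer above waits forever, which is the
        // same state seen in the "Connect thread remote.ftp.com session" stack trace.
        sessionThread.join();
    }
}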

Further, I checked the Linux machine on which this app is running: the Recv-Q of the socket keeps building up, which means the socket is still alive and the remote server keeps sending 32KB packets every now and then.

sudo netstat -anpt | grep 19321
tcp6       0      0 10.14.233.97:59594      64.233.167.130:19321    TIME_WAIT   -
tcp6   58256      0 10.14.233.97:58214      64.233.167.130:19321    ESTABLISHED 460144/java
tcp6  499888      0 10.14.233.97:58422      64.233.167.130:19321    ESTABLISHED 460144/java
tcp6       0      0 10.14.233.97:59622      64.233.167.130:19321    ESTABLISHED 460144/java
tcp6       0      0 10.14.233.97:59608      64.233.167.130:19321    TIME_WAIT   -
tcp6   74672      0 10.14.233.97:56656      64.233.167.130:19321    ESTABLISHED 460144/java
tcp6   92688      0 10.14.233.97:56842      64.233.167.130:19321    ESTABLISHED 460144/java

Now, I have 2 questions.

  1. Why is this happening? It happens rarely, but when it does happen, it occurs frequently.
  2. How to resolve this issue?

P.S. I am aware of the multi-threading issues of the Apache Commons VFS library; hence, every SSH session runs in its own thread. So this does not look like an issue with that library.


Answer

Why is this happening? It happens rarely, but when it does happen, it occurs frequently.

I feel there could be 2 reasons.

  1. The network is slow.
  2. The client machine does not have enough resources, so it cannot prioritize the session thread.

For every packet (8KB + header data) the application thread writes to the remote SFTP server, it receives ~28 bytes of acknowledgement data. This data is written by the session thread to a PipedOutputStream which is connected to a PipedInputStream, and it is consumed by the application thread. Further, the buffer size of this stream is 32KB.

As per the logic, the application thread keeps writing packets to the socket until there is a minimum of 1KB of data present in the PipedInputStream for its consumption. This translates to roughly ~37 acknowledgements. But due to either of the above two reasons, these ack packets might not be available for consumption, and hence the application thread keeps writing packets to the output socket until the remote window size, i.e. the rwsize of the remote server, reaches its limit.
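As a quick sanity check on the "~37 acknowledgements" figure, a trivial sketch using the sizes assumed above:

public class AckMath {
    public static void main(String[] args) {
        int ackSize = 28;          // ~28 bytes per acknowledgement
        int drainThreshold = 1024; // the application thread drains the pipe once >= 1KB is buffered
        System.out.println(drainThreshold / ackSize); // prints 36, i.e. roughly 37 acks' worth of data
    }
}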

What is rwsize?

Here, rwsize is the parameter communicated by the remote server when it sends the channel-open confirmation message to the client machine. It is a flow-control parameter: a hard limit imposed on the communication channel by the SSH protocol. Both the client and the server keep count of this parameter. For every byte of data the client transfers, it decreases the value of this parameter until it reaches ~0. Once it reaches ~0, the client waits for a window-resizing message from the remote server, which means that the server has consumed some of the outstanding data and is ready for more.
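To illustrate this flow-control idea, here is a minimal sketch (the class and method names are mine; this is not JSch's actual implementation) of how the sender deducts every byte it writes from the remote window and blocks until the server's SSH_MSG_CHANNEL_WINDOW_ADJUST arrives:

class RemoteWindow {
    private long rwsize; // remote window size, announced in the channel-open confirmation

    RemoteWindow(long initialWindow) {
        this.rwsize = initialWindow;
    }

    // Called by the application thread before each write.
    synchronized void consume(int payloadLength) throws InterruptedException {
        while (rwsize < payloadLength) {
            wait(); // blocked here until the session thread delivers a window adjust
        }
        rwsize -= payloadLength; // every byte sent shrinks the remaining window
    }

    // Called by the session thread when SSH_MSG_CHANNEL_WINDOW_ADJUST arrives.
    synchronized void adjust(long bytes) {
        rwsize += bytes;
        notifyAll();
    }
}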

In my scenario, the value of this parameter was 32MB. Therefore, my application thread was able to write 32MB of data without any issues. Once this limit was hit, it went into a wait state forever, waiting for the remote window-resize message. The session thread was responsible for receiving both the window-resize message and the acknowledgement messages, and both types of messages were received on a first-come, first-served basis.

Therefore, when the application thread went into the wait state, the session thread might have started receiving ack messages first. Since each ack message was ~28 bytes and the buffer is 32KB, it could only ingest ~1170 ack messages without blocking. However, rwsize was 32MB and one packet was ~8KB, so it is quite possible that there were ~4144 ack messages pending to be consumed. Therefore, if the remote server generated the window-resizing message only after it had generated at least ~1170 ack messages, the session thread would block on the PipedOutputStream forever before it could deliver the window-resizing message. Hence the issue.
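Plugging in these numbers (32MB window, ~8KB packets, ~28-byte acks, a 32KB pipe buffer; the exact counts vary slightly with per-packet header overhead) shows why the pipe can fill up long before the window-adjust message gets through. A small sketch of the arithmetic:

public class DeadlockMath {
    public static void main(String[] args) {
        long rwsize = 32L * 1024 * 1024; // 32MB remote window
        int packetSize = 8 * 1024;       // ~8KB per WRITE packet
        int ackSize = 28;                // ~28 bytes per acknowledgement
        int pipeBuffer = 32 * 1024;      // 32KB PipedInputStream buffer

        System.out.println("writes until the window is exhausted: " + rwsize / packetSize); // ~4096 with these round numbers
        System.out.println("acks that fit in the pipe buffer:     " + pipeBuffer / ackSize); // ~1170
        // Thousands of acks can be outstanding, but only ~1170 fit in the pipe; if the
        // window-adjust message is queued behind more acks than that, it never arrives.
    }
}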

How to resolve this issue?

This issue was faced by some people earlier as well; here is the link to their bug report and the fix. The fix is to increase the PipedInputStream buffer size. However, I think this fix is brittle unless you increase the buffer size enough to accommodate all possible messages until the window-resizing message arrives.
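For reference, this is the plain-Java mechanism that idea relies on (a hedged sketch; the class name is mine, and the referenced fix applies the equivalent change inside JSch's own piped streams):

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BiggerPipe {
    public static void main(String[] args) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        // The second argument sets the pipe's internal buffer size (1MB here instead of
        // the 1KB default), so far more acknowledgements can queue up without blocking.
        PipedInputStream in = new PipedInputStream(out, 1024 * 1024);
        System.out.println("pipe created with a 1MB buffer: " + in);
    }
}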

In my case, I fixed it by making sure that the application thread consumes all the acknowledgement messages before going into the permanent wait state once the rwsize limit is reached. For the code details, one can refer to this commit. Below is the excerpt of the code added before calling the sendWRITE method.

// If the remaining remote window cannot hold the complete WRITE packet
// (header fields + file handle + data payload), drain the pending
// acknowledgements before blocking on the window.
if (rwsize < 21 + handle.length + _len + 4) {
  flush();
}