Ganymed SSH-2 for Java FAQ

This FAQ includes information regarding topics that were discussed in e-mails between developers and users of the Ganymed SSH-2 for Java library.

Ganymed SSH-2 for Java homepage: http://www.ganymed.ethz.ch/ssh2
Last update of FAQ: oct-27-2006.

Please report bugs, typos and any kind of suggestions to Christian Plattner (plattner at inf.ethz.ch).


When I start program XYZ with putty (or openssh, ..., whatever) then everything works. However, if I use "Session.execCommand", then XYZ behaves differently or does not work at all!

Short answer:

The most frequent source of problems when executing a command with Session.execCommand() is missing or wrongly set environment variables on the remote machine. Make sure that the minimum environment needed by XYZ is the same, independently of how the shell is invoked.

Example quickfix for bash users:

  1. Define all your settings in the file ~/.bashrc
  2. Make sure that the file ~/.bash_profile only contains the line "source ~/.bashrc".
  3. Before executing Session.execCommand(), do NOT acquire any type of pseudo terminal in the session. Be prepared to consume stdout and stderr data.

Note: If you really want to mimic the behavior of putty, then don't use Session.execCommand(); instead, acquire a pty (pseudo terminal) and then start a shell (use Session.requestPTY() and Session.startShell()). You then have to communicate with the shell process at the other end through stdin and stdout. However, you also have to implement terminal logic (e.g., escape sequence handling (unless you use a "dumb" pty), "expect-send" logic (output parsing, shell prompt detection), etc.).
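A minimal sketch of this pty/shell approach might look as follows (the host name and credentials are placeholders, and the trivial read loop only stands in for real "expect-send" logic):

```java
import java.io.InputStream;
import java.io.OutputStream;

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.Session;

public class ShellSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = new Connection("somehost");              // placeholder host
        conn.connect();
        if (!conn.authenticateWithPassword("user", "secret"))      // placeholder credentials
            throw new java.io.IOException("Authentication failed");

        Session sess = conn.openSession();
        sess.requestPTY("dumb");   // a "dumb" pty keeps escape sequences to a minimum
        sess.startShell();

        OutputStream shellIn = sess.getStdin();
        InputStream shellOut = sess.getStdout();

        shellIn.write("echo hello; exit\n".getBytes());
        shellIn.flush();

        /* Naive consumption of the shell's output - real code would have
           to parse the stream and detect the shell prompt. */
        byte[] buf = new byte[8192];
        int len;
        while ((len = shellOut.read(buf)) > 0)
            System.out.write(buf, 0, len);

        sess.close();
        conn.close();
    }
}
```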

Long answer:

If you log in using putty, then putty will normally request an "xterm" pty and your assigned shell (e.g., bash) will be started (a so-called "interactive login shell"). In contrast, if you use Session.execCommand() to start a command, then (unless you ask for it) no pty will be acquired and the command will be given to the shell as an argument (with the shell's "-c" option).

The way a shell is invoked has an effect on the set of initialization files which will be read by the shell.

To demonstrate the difference, try the following (from the command line, e.g., with an OpenSSH client):

  1. Login interactively and print the environment with the "env" command:
     
    [user@host ~] ssh 127.0.0.1
    [user@host ~] env

     
  2. Let the ssh server execute the "env" command (equivalent to using Session.execCommand()):
     
    [user@host ~] ssh 127.0.0.1 "env"

If you compare the two outputs, then you will (unless you have adjusted your shell's settings) observe different environments.

If you are interested in the details, then please read the INVOCATION section in the man page of the bash shell. You may notice that the definitions of "interactive" and "non-interactive" (and combinations with "login") are a little bit tricky.


My program sometimes hangs when I only read output from stdout! Or: can you explain the story about the shared stdout/stderr window in the SSH-2 protocol? Or: what is this "StreamGobbler" thing all about?

In the SSH-2 low level protocol, each channel (e.g., session) has a receive window. When the remote SSH daemon has filled up our receive window, it must wait until we have consumed the input and are ready to accept new data.

Unfortunately, the SSH-2 protocol defines a shared window for stderr and stdout. As a consequence, if, for example, the remote process produces a lot of stderr data and you never consume it, then after some time the local receive window will be full and the sender is blocked. If you then try to read() from stdout, your call will be blocked: there is no stdout data (locally) available and the SSH daemon cannot send you any, since the receive window is full (you would have to read some stderr data first to "free" up space in the receive window).

Fortunately, Ganymed SSH-2 uses a 30KB window - the above described scenario should be very rare.

Many other SSH-2 client implementations just blindly consume any remotely produced data into a buffer which gets automatically extended - however, this can lead to another problem: in the extreme case the remote side can overflow you with data (e.g., leading to out of memory errors).

What can you do about this?

  1. Bad: Do nothing - just work with the stderr and stdout InputStreams and hope that the 30KB window is enough for your application.
  2. Better, recommended for most users: use two worker threads that consume remote stdout and stderr in parallel. Since you probably are not in the mood to program such a thing, you can use the StreamGobbler class supplied with Ganymed SSH-2. The StreamGobbler is a special InputStream that uses an internal worker thread to read and buffer all data produced by another InputStream. It is very simple to use:
    InputStream stdout = new StreamGobbler(mysession.getStdout());
    
    InputStream stderr = new StreamGobbler(mysession.getStderr());
    You can then access stdout and stderr in any order; in the background, the StreamGobblers will automatically consume all data from the remote side and store it in an internal buffer.
  3. Advanced: you are paranoid and don't like programs that automatically extend buffers without asking you. You then have to implement a state machine. The condition wait facility offered by Session.waitForCondition() is exactly what you need: you can use it to wait until either stdout or stderr data has arrived and can be consumed with the two InputStreams. You can either use the return value of Session.waitForCondition() or check with InputStream.available() (for stdout and stderr) which InputStream has data available (i.e., a read() call will not block). Be careful when wrapping the InputStreams, also do not concurrently call read() on the InputStreams while calling Session.waitForCondition() (unless you know what you are doing).
    Please have a look at the SingleThreadStdoutStderr.java example.
  4. The lazy way: you don't mind if stdout and stderr data is being mixed into the same stream. Just allocate a "dumb" pty and the server will hopefully not send you any data on the stderr stream anymore. Note: by allocating a pty, the shell used to execute the command will probably behave differently in terms of initialization (see also this question).
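As an illustration of option 3, a single-threaded read loop based on Session.waitForCondition() might look roughly like this (a sketch only; it assumes sess already runs a command, and the distributed SingleThreadStdoutStderr.java example remains the authoritative reference):

```java
import java.io.InputStream;

import ch.ethz.ssh2.ChannelCondition;
import ch.ethz.ssh2.Session;

public class SingleThreadSketch {
    /* Consume stdout and stderr of an established session without
       extra threads, driven by waitForCondition(). */
    static void consume(Session sess) throws Exception {
        InputStream stdout = sess.getStdout();
        InputStream stderr = sess.getStderr();
        byte[] buf = new byte[8192];

        while (true) {
            /* Sleep until stdout or stderr data arrives, or EOF is reached. */
            int cond = sess.waitForCondition(ChannelCondition.STDOUT_DATA
                    | ChannelCondition.STDERR_DATA
                    | ChannelCondition.EOF, 0);

            while (stdout.available() > 0)                 // read() will not block
                System.out.write(buf, 0, stdout.read(buf));

            while (stderr.available() > 0)                 // read() will not block
                System.err.write(buf, 0, stderr.read(buf));

            /* Stop only when EOF is signalled AND all data has been drained. */
            if ((cond & ChannelCondition.EOF) != 0
                    && stdout.available() == 0 && stderr.available() == 0)
                break;
        }
    }
}
```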


Why are the session's Input- and OutputStreams not buffered?

If you need it, this library offers quite raw access to the SSH-2 protocol stack. Of course, many people don't need that kind of low level access. If you need buffered streams, then you should do the same thing as you would probably do with the streams of a TCP socket: wrap them with instances of BufferedInputStream and BufferedOutputStream. In case you use StreamGobblers for the InputStreams, you don't need any additional wrappers, since the StreamGobblers already implement buffering.

This code snippet will probably work well for most people:

InputStream stdout = new StreamGobbler(mysession.getStdout());
InputStream stderr = new StreamGobbler(mysession.getStderr());
OutputStream stdin = new BufferedOutputStream(mysession.getStdin(), 8192);


Why can't I execute several commands in one single session?

If you use Session.execCommand(), then you can indeed execute only one command per session. This is not a restriction of the library, but rather an enforcement of the underlying SSH-2 protocol (a Session object models the underlying SSH-2 session).

There are several solutions:
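The usual workaround is to open a fresh Session per command over the same authenticated Connection - this is cheap, since no new TCP connection or authentication round is needed. A sketch (conn is assumed to be connected and authenticated):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.Session;
import ch.ethz.ssh2.StreamGobbler;

public class MultiCommandSketch {
    /* One Session per command; a Session cannot be reused after execCommand(). */
    static void runCommands(Connection conn, String[] commands) throws Exception {
        for (String cmd : commands) {
            Session sess = conn.openSession();   // new SSH-2 session, same connection
            sess.execCommand(cmd);
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(new StreamGobbler(sess.getStdout())));
            String line;
            while ((line = br.readLine()) != null)
                System.out.println(line);
            sess.close();
        }
    }
}
```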


I cannot open more than 10 concurrent sessions (or SCP clients).

You are probably using OpenSSH. By looking at their source code you will find out that there is a hard-coded constant called MAX_SESSIONS in the session.c file which is set to "10" by default. This is a per connection limit. Unfortunately, it is not a run-time tunable parameter. However, this limit has no effect on the number of concurrent port forwardings. Please note: this information is based on the OpenSSH 4.3 release.

Possible solutions:

Just for completeness: starting from release 210, the thrown exception may look as follows:

java.io.IOException: Could not open channel (The server refused to open the channel (SSH_OPEN_ADMINISTRATIVELY_PROHIBITED, 'open failed'))
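Since the limit is enforced per connection, one conceivable workaround is to distribute sessions over several Connection objects. The following is only a sketch (host and credentials are placeholders, and the fallback connection is not tracked for later cleanup):

```java
import java.io.IOException;

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.Session;

public class FallbackSketch {
    /* Try the existing connection first; on refusal, open a second one. */
    static Session openSessionWithFallback(Connection primary, String host,
            String user, String password) throws IOException {
        try {
            return primary.openSession();
        } catch (IOException e) {
            /* Possibly SSH_OPEN_ADMINISTRATIVELY_PROHIBITED - retry on a
               brand new connection (which has its own session limit). */
            Connection secondary = new Connection(host);
            secondary.connect();
            if (!secondary.authenticateWithPassword(user, password))
                throw new IOException("Authentication on fallback connection failed");
            return secondary.openSession();
        }
    }
}
```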


Password authentication fails, I get "Authentication method password not supported by the server at this stage".

Many default SSH server installations are configured to refuse the authentication type "password". Often, they only accept "publickey" and "keyboard-interactive". You have different options:

In general it is a good idea to call either Connection.getRemainingAuthMethods() or Connection.isAuthMethodAvailable() before using a certain authentication method.

Please note that most servers let you in after one successful authentication step. However, in rare cases you may encounter servers that need several steps. I.e., if one of the Connection.authenticateWithXXX() methods returns false and Connection.isAuthenticationPartialSuccess() returns true, then further authentication is needed. For each step, to find out which authentication methods may proceed, you can use either the Connection.getRemainingAuthMethods() or the Connection.isAuthMethodAvailable() method. Again, please have a look into the SwingShell.java example.
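Such a negotiation might be sketched as follows (host, user name, and secrets are placeholders; a real client would prompt the user instead of hard-coding replies):

```java
import java.io.IOException;

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.InteractiveCallback;

public class AuthSketch {
    public static void main(String[] args) throws IOException {
        Connection conn = new Connection("somehost");   // placeholder host
        conn.connect();
        String user = "user";

        boolean ok = false;
        while (!ok) {
            if (conn.isAuthMethodAvailable(user, "password")) {
                ok = conn.authenticateWithPassword(user, "secret");
            } else if (conn.isAuthMethodAvailable(user, "keyboard-interactive")) {
                /* Many servers tunnel a plain password prompt through
                   keyboard-interactive. */
                ok = conn.authenticateWithKeyboardInteractive(user,
                        new InteractiveCallback() {
                            public String[] replyToChallenge(String name,
                                    String instruction, int numPrompts,
                                    String[] prompt, boolean[] echo) {
                                String[] replies = new String[numPrompts];
                                for (int i = 0; i < numPrompts; i++)
                                    replies[i] = "secret";   // placeholder reply
                                return replies;
                            }
                        });
            } else {
                throw new IOException("No supported authentication method left");
            }

            /* false + partial success: this step was accepted, but the
               server wants at least one more step - loop again. */
            if (!ok && !conn.isAuthenticationPartialSuccess())
                throw new IOException("Authentication failed");
        }
        conn.close();
    }
}
```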


Why does public key authentication fail with my putty key?

When using putty private keys (e.g., .ppk files) with public key authentication, you get a "Publickey authentication failed" exception. The reason is that the library currently is not able to directly handle private keys in the proprietary format used by putty. However, you can use the "puttygen" tool (from the putty website) to convert your key to the desired format: load your key, then go to the conversions menu and select "Save OpenSSH key" (which saves the key in openssl PEM format, e.g., call it "private.pem").
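Once the key is converted, using it might look like this (host, user, path, and passphrase are placeholders):

```java
import java.io.File;
import java.io.IOException;

import ch.ethz.ssh2.Connection;

public class PubkeySketch {
    public static void main(String[] args) throws IOException {
        Connection conn = new Connection("somehost");        // placeholder host
        conn.connect();

        boolean ok = conn.authenticateWithPublicKey("user",
                new File("/home/user/private.pem"),          // the converted key
                "keyPassphrase");       // use null if the key is not encrypted

        if (!ok)
            throw new IOException("Publickey authentication failed");
        conn.close();
    }
}
```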


I am sending data to a remote file using the "cat" method, but not all data is being written.

Please carefully read the answer to the following question.


I want to pump data into a remote file, but the amount of data to be sent is not known at the time the transfer starts.

The SCP protocol communicates the amount of data to be sent at the start of the transfer, so SCP is not an option here. Possible other solutions:

Be careful if you use the "cat" approach, as it may happen that not all your data will be written. If you close the stdin stream and immediately close the session (or the whole connection), then some SSH servers do not send the pending data to the process being executed ("cat" in this case). You have to wait until "cat" has received the EOF and terminates before closing the session. However, waiting for the termination may not always work, since SSH servers sometimes "forget" to send the exit code of the remote process. The following code MAY work:

Session sess = conn.openSession();
sess.execCommand("cat > test.txt");
OutputStream stdin = sess.getStdin();

... stdin.write(...) ... stdin.write(...) ...

/* The following flush() is only needed if you wrap the  */
/* stdin stream (e.g., with a BufferedOutputStream).     */
stdin.flush();

/* Now let's send EOF */
stdin.close();

/* Let's wait until cat has finished                     */
sess.waitForCondition(ChannelCondition.EXIT_STATUS, 2000);
/* Better: put the above statement into a while loop!    */
/* In ANY CASE: read the Javadocs for waitForCondition() */

/* Show exit status, if available (otherwise "null")     */
System.out.println("ExitCode: " + sess.getExitStatus());
/* Now it's hopefully safe to close the session          */
sess.close();

(Just a thought for another solution: execute cat > test.txt && echo "FINISHED" and wait until you get "FINISHED" on stdout... - try it at your own risk =)
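That marker idea could be sketched as follows (untested, as the remark above already warns; sess is assumed to be a freshly opened session and the marker string is arbitrary):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;

import ch.ethz.ssh2.Session;
import ch.ethz.ssh2.StreamGobbler;

public class MarkerSketch {
    /* Wait for a marker on stdout instead of relying on the exit status. */
    static void pump(Session sess, byte[] data) throws Exception {
        sess.execCommand("cat > test.txt && echo \"FINISHED\"");
        OutputStream stdin = sess.getStdin();
        BufferedReader stdout = new BufferedReader(
                new InputStreamReader(new StreamGobbler(sess.getStdout())));

        stdin.write(data);
        stdin.flush();
        stdin.close();                       // EOF for the remote "cat"

        String line;
        while ((line = stdout.readLine()) != null)
            if (line.equals("FINISHED"))     // "cat" has written everything
                break;

        sess.close();
    }
}
```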


Do you have an example for the usage of feature XYZ?

Please have a look at the examples section in the distribution, especially at the SwingShell.java example.

!!! This document is stored in the ETH Web archive and is no longer maintained !!!