OpenSSH/Print version


The current, editable version of this book is available on Wikibooks, the open-content textbooks collection.

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

The OpenSSH suite provides secure remote access and file transfer.[1] Since its initial release, it has grown to become the most widely used implementation of the SSH protocol. Within its first ten years, it largely replaced the older, unencrypted tools and protocols it corresponds to. The OpenSSH client is included by default in most operating system distributions, including macOS, AIX, Linux, BSD, and Solaris. Any day you use the Internet, you are using and relying on hundreds if not thousands of machines operated and maintained using OpenSSH. A survey in 2008 showed that of the SSH servers found running, just over 80% were OpenSSH.[2] Even with the advent of the Internet of Things and the increased use of IPv6, a cursory search of Shodan[3] for SSH-2.0 services on port 22 in November 2022 showed 87% of responding addresses running OpenSSH.[4]

OpenSSH was first released towards the end of 1999. It is the latest step in a very long and useful history of networked computing, remote access, and telecommuting.

This book is for fellow users of OpenSSH. It aims to save you time and effort by showing where OpenSSH, and especially SFTP, makes sense to use.

  1. "OpenSSH".
  2. "Statistics from the current scan results". 2008.
  3. "SSH-2.0 Search Results". Retrieved 2022-11-07.
  4. At that time, 87% was 17,093,847 out of 19,715,171 systems. Dropbear made up about 6% at 1,092,738 systems, and a long tail filled in the rest. A Shodan search including non-standard ports, not just port 22, showed noticeably more SSH services (25,520,277), of which 83% (21,215,882) were OpenSSH.

Table of Contents

Overview


History of OpenSSH

The first release of OpenSSH was in December 1999, as part of OpenBSD 2.6. The source code was derived from a re-write of ssh 1.2.12, the last version of SSH available under an open license[2]. SSH itself went on to become Tectia SSH.

Ongoing development of OpenSSH is done by the OpenBSD group. Core development occurs first on OpenBSD; portability teams then bring the changes to other platforms. OpenSSH is an integral part of nearly all server systems today, as well as a good many network appliances such as routers, switches, and networked storage. The first steps were in many ways the biggest.

The Early Days of Remote Access

Some of the tools that inspired the need for SSH have been around since the beginning of the Internet, or very near it. Remote access has been a fundamental part of the concept from the idea stage, and the nature and capabilities of this access have evolved as the network has grown in scale, scope, and usage. See the web version of the Lévénez Unix Timeline[3] by Éric Lévénez for an overview of systems development, and the web version of Hobbes' Internet Timeline[4] by Robert H Zakon for an overview of the development of the Internet.


  • Telnet was one of the original ARPAnet application protocols, named in RFC 15 from September 1969. It let a user at one site access a host at a remote site as if logged in locally. Telnet was described starting two years later in RFC 137, RFC 139, RFC 318, and others, including RFC 97, which is as good a turning point as any to delineate Telnet.


  • Thompson Shell, by Ken Thompson, was an improvement on the old text-based user interface, the shell. This new shell allowed redirection, but it was only a user interface and not suited for scripting.
  • In the same year, FTP, the File Transfer Protocol, was described in RFC 114. A key goal was to promote use of computers over the net by allowing users at any host on the network to use the file system of any cooperating host.


  • Bill Joy created BSD's C shell, named for the C-like syntax it uses. It allowed job control, history substitution, and aliases, features we still find in today's interfaces.
  • In the same year, the Bourne Shell was created by Steve Bourne at Bell Labs[5]. It is the progenitor of the default shells used in most distributions today, ksh and bash.


  • The remote file copy utility, rcp, appeared in 4.2 BSD. rcp copied files across the net to other hosts using rsh, which also appeared starting with 4.2 BSD, to perform its operations. Like telnet and ftp, both transmitted all passwords, user names, and data unencrypted, in clear text. Both rsh and rcp were part of the rlogin suite.


  • PGP, written by Philip Zimmermann[6], charted new waters for encrypted electronic communications, with the goals of preserving civil liberties online, ensuring individual privacy, keeping encryption legal in the USA, and protecting business communications. Like SSH, it uses asymmetric encryption with public/private key pairs.


  • Kerberos V (RFC 1510), the authentication service from MIT's Project Athena[7], provides a means for authentication over an open, insecure network. Kerberos got its original start in 1988.

SSH - open then closed


  • Tatu Ylönen, then at the Helsinki University of Technology, developed the first SSH protocol and programs, releasing them under an open license[8], as was the norm in computer science, software engineering, and advanced development.[9]


  • Björn Grönvall dug out the most recent open version of ssh, version 1.2.12[10][11]. He and Holger Trapp did the initial work to free the distribution, resulting in OSSH.


  • The SSH2 protocol is defined.

OpenSSH


  • OpenSSH began, based on OSSH. Niels Provos, Theo de Raadt, and Markus Friedl developed the cryptographic components during the port to OpenBSD, which became the OpenSSH we know today. Dug Song, Aaron Campbell, and many others provided various non-crypto contributions. OpenSSL library issues were sorted out by Bob Beck. Damien Miller, Philip Hands, and others started porting OpenSSH to Linux. Finally, OpenSSH 1.2.2 was released with OpenBSD 2.6 on December 1, 1999.[12]


  • Markus Friedl added SSH 2 protocol support to OpenSSH version 2.0, which was released in June.[13] OpenSSH 2.0 shipped with OpenBSD 2.7. Niels Provos and Theo de Raadt did most of the checking. Bob Beck updated OpenSSL. Markus also added support for the SFTP protocol later that same year.
  • In September 2000, the long wait in the USA for the patents on the RSA algorithm to expire was over. In the European Union, the European Patent Convention of 1973 keeps software, algorithms, business methods, and literature free of patents, unlike the unfortunate, anti-business situation in the USA. This freedom in Europe hangs by a thread at the moment.
  • SSH Tectia changes licenses again.


  • Damien Miller completed the SFTP client, which was released in February.
  • SSH2 became the default protocol.


  • Built-in chroot support for sshd.


  • As of OpenSSH 5.4, the legacy protocol SSH1 is finally disabled by default.


  • As of OpenSSH 6.7, both the base and the portable versions of OpenSSH can build against LibreSSL instead of OpenSSL for certain cryptographic functions.


  • OpenSSH 7.4 removes server support for the SSH1 legacy protocol.


  • As of OpenSSH 9.5, ssh-keygen(1) generates Ed25519 keys by default instead of RSA keys.

Note: OpenSSH can be used anywhere in the world because it uses only algorithms unencumbered by software patents, business method patents, algorithm patents, and so on. These types of patents do not apply in Europe, where only physical inventions can be patented, but there are regions of the world where these problems do occur. Small and medium businesses in Europe have been politically active to keep that advantage.

Why Use OpenSSH?

A lot has changed since the commercialization of the Internet began in 1996. It was once a university and government research network, and if you were on the net back then, odds were you were supposed to be there. Though it was far from a utopia, any misbehavior could usually be quickly narrowed down to the individuals involved and dealt with easily, usually with no more than a phone call or a few e-mails. Few, if any, sessions back then were encrypted, and both passwords and user names were passed in clear text.

By then, the WWW was more than a few years under way and undergoing explosive growth. The estimated number of web servers online in 1996 grew from 100,000 at the beginning of the year to close to 650,000 by the end of the same year[14]. When other types of servers are included in those figures, the estimated year-end number was over 16,000,000 hosts, representing approximately 828,000 domains.[14]

Nowadays, hosts are subject to hostile scans from the moment they are connected to the network. Any and all unencrypted traffic is scanned and parsed for user names, passwords, and other sensitive information. Currently, the biggest espionage threats come from private companies, but governments, individuals, and organized crime are not without a presence.

Each connection from one host to another goes through many networks and each packet may take the same or a different route there and back again. This example shows thirteen hops among three organizations from a student computer to a search engine:

% /usr/sbin/traceroute -n
traceroute: Warning: has multiple addresses; using
traceroute to (, 30 hops max, 40 byte packets
 1 xx.xx.xx.xx           0.419 ms	0.220 ms	0.213 ms	University of Michigan
 2 xx.xx.xx.xx           0.446 ms	0.349 ms	0.315 ms	Merit Network, Inc.
 3 xx.xx.xx.xx           0.572 ms	0.513 ms	0.525 ms	University of Michigan
 4 xx.xx.xx.xx           0.472 ms	0.425 ms	0.402 ms	University of Michigan
 5 xx.xx.xx.xx           0.647 ms	0.551 ms	0.561 ms	University of Michigan
 6 xx.xx.xx.xx           0.945 ms	0.912 ms	0.865 ms	University of Michigan
 7 xx.xx.xx.xx           6.478 ms	6.503 ms	6.489 ms	Merit Network, Inc.
 8 xx.xx.xx.xx	         6.597 ms	6.590 ms	6.604 ms	Merit Network, Inc.
 9       64.935 ms	6.848 ms	6.793 ms	Google, Inc.
10        17.606 ms	17.581 ms	17.680 ms	Google, Inc.
11        17.736 ms	17.592 ms	17.519 ms	Google, Inc.
12        17.767 ms	17.778 ms	17.930 ms	Google, Inc.
13        17.903 ms	17.835 ms	17.867 ms	Google, Inc.

The net is big. It is not uncommon to find a trail of 15 to 20 hops between client and server nowadays. Any machine on any of the subnets the packets travel over can eavesdrop with little difficulty if the packets are not well encrypted.

What OpenSSH Does

The OpenSSH suite provides the following:

  • Encrypted remote access, including tunneling of insecure protocols.
  • Encrypted file transfer.
  • Remote execution of commands, programs, or scripts.
  • Replacement for rsh, rlogin, telnet, and ftp.
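Each of those capabilities maps onto a short command line. A sketch of typical invocations; the host name server.example.org and the account user are placeholders for illustration, not taken from this book:

```shell
# Hypothetical host and account, for illustration only.
ssh user@server.example.org                          # encrypted interactive login
ssh user@server.example.org uptime                   # run a single remote command
sftp user@server.example.org                         # interactive encrypted file transfer
scp report.pdf user@server.example.org:docs/         # copy one file to the server
ssh -L 8143:localhost:143 user@server.example.org    # tunnel an insecure protocol (IMAP) through SSH
```

The last line illustrates tunneling: local port 8143 is forwarded through the encrypted connection to port 143 on the server, so a legacy client can speak plain IMAP without it ever crossing the network in the clear.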

More concretely, that means that the following undesirable activities are prevented:

  • Eavesdropping on data transmitted over the network.
  • Manipulation of data at intermediate elements in the network (e.g. routers).
  • Address spoofing, where an attacking host pretends to be a trusted host by sending packets with the source address of the trusted host.
  • IP source routing attacks.

As a free software project, OpenSSH provides:

  • Open Standards
  • Flexible License - freedom emphasized for developers
  • Strong Cryptography using these algorithms:
    • AES
    • ChaCha20[15]
    • RSA
    • ECDSA
    • Ed25519
  • Strong Authentication, supported methods: gssapi-with-mic, hostbased, keyboard-interactive, none, password, and publickey[16]
    • Public Key: can authenticate using multiple keys since March 2015 (OpenSSH 6.8)[17]
    • Single Use Passwords
    • Kerberos
    • Dongles
  • Built-in SFTP
  • Data Compression
  • Port Forwarding
    • Encrypt legacy protocols
    • Encrypted X11 forwarding for X Window System
  • Key Agents
  • Single Sign-on using
    • Authentication Keys
    • Agent Forwarding
    • Ticket Passing
    • Kerberos
    • AFS
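Public key authentication, listed above, starts with generating a key pair on the client. A minimal sketch using ssh-keygen(1); the directory, file name, and comment here are arbitrary choices for the demonstration:

```shell
# Create a fresh Ed25519 key pair in a temporary directory.
# -N '' sets an empty passphrase (fine for a demo, not for production keys).
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -C 'demo key' -f "$keydir/id_ed25519"
ls "$keydir"
```

The public half, id_ed25519.pub, is what gets appended to ~/.ssh/authorized_keys on the server; the private half never leaves the client.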

What OpenSSH Doesn't Do

OpenSSH is a very useful tool, but much of its effectiveness depends on correct use. It cannot protect against any of the following situations.

  • Misconfiguration, misuse, or abuse.
  • Compromised systems, particularly where the root account is compromised.
  • Insecure or inappropriate directory settings, particularly home directory settings.

OpenSSH must be properly configured, and on a properly configured system, in order to be of benefit. Arranging both is not difficult, but since each system is unique, there is no one-size-fits-all solution. The right configuration depends on the uses to which the system and OpenSSH are put.

If you log in from a host to a server and an attacker has control of root on either side, he can listen in on your session by reading from the pseudo-terminal device, because even though SSH is encrypted on the network, it must communicate in clear text with the terminal device.

If an attacker can change files in your home directory, for example via a networked file system, he may be able to fool SSH.

Last but not least, if OpenSSH is set to allow everyone in, whether on purpose or by accident, it will.
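Much of the "properly configured" part above lives in sshd_config(5). The following is a minimal sketch of commonly tightened server settings, not a complete policy; the group name sshusers is an arbitrary example, and the right values depend on what the system is actually used for:

```
# /etc/ssh/sshd_config excerpts -- illustrative only
PermitRootLogin no            # keep the root account off the network
PasswordAuthentication no     # require keys instead of guessable passwords
StrictModes yes               # refuse keys in sloppily-permissioned home directories
AllowGroups sshusers          # only deliberately added accounts get in
```

Note how StrictModes addresses the home directory problem described above: sshd will ignore key files whose directories are writable by others.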

Why Use Encryption

Encryption has been a hot topic in computing for a long time. It became a high-priority item in national and international politics in 1991, when Phil Zimmermann first published Pretty Good Privacy (PGP). With the arrival of the first web shops, encryption went from a specialty to a requirement, and increasing volumes of money changed hands online. By 1996, encryption had become essential for e-business. By 2000, it was recognized as a general, essential prerequisite for electronic communication. Currently, in 2010, there is almost no chance of maintaining control over, or the integrity of, any networked machine for more than a few minutes without the help of encryption.

Nowadays, much communication over computer networks is still done without encryption. That would be most communication, if inadequate encryption is also taken into account. This is despite years of warnings, government recommendations, best practice guidelines and incidents. As a result, any machine connected to the network can intercept communication that passes over that network. The eavesdroppers are many and varied. They include administrators, staff, employers, criminals, corporate spies, and even governments. Corporate espionage alone has become an enormous burden and barrier.

Businesses are well aware of dumpster diving and take precautions to shred all paper documents. But what about electronic information? Contracts and negotiations, trade secrets, patent applications, decisions and minutes, customer data and invoicing, personnel data, financial and tax records, calendars and schedules, product designs and production notes, training materials, and even regular correspondence go over the net daily. Archived materials, even if they are not accessed directly, are usually on machines that are available and accessed for other reasons.

Many company managers and executives are still unaware that their communications and documents are so easily intercepted, in spite of apparent and expensive access restrictions. In many cases these can be shown to be ineffectual and at best purely cosmetic. Security Theater is one obstacle and in the field of security it is more common to find snake oil than authentic solutions. Still, there is little public demonstration of awareness of the magnitude of corporate espionage nowadays or the cost of failure. Even failure to act has its costs. Not only is sensitive data available if left unencrypted, but also trends in less sensitive data can be spotted with a large enough sampling. A very large amount of information can be inferred even from lesser communications. Data mining is now a well-known concept as is the so-called wireless wiretap. With the increase in online material and activity, encryption is more relevant than ever even if many years have passed since the issues were first brought into the limelight.

Excerpt of ssh-1.0.0 README from July 12, 1995

Tatu Ylönen, then at the Helsinki University of Technology, wrote the README[18] accompanying the early versions of his Open Source software, SSH. The following is an excerpt about why encryption is important.

ssh-1.0.0 README 1995-07-12



Currently, almost all communications in computer networks are done without encryption. As a consequence, anyone who has access to any machine connected to the network can listen in on any communication. This is being done by hackers, curious administrators, employers, criminals, industrial spies, and governments. Some networks leak off enough electromagnetic radiation that data may be captured even from a distance.

When you log in, your password goes in the network in plain text. Thus, any listener can then use your account to do any evil he likes. Many incidents have been encountered worldwide where crackers have started programs on workstations without the owners knowledge just to listen to the network and collect passwords. Programs for doing this are available on the Internet, or can be built by a competent programmer in a few days.

Any information that you type or is printed on your screen can be monitored, recorded, and analyzed. For example, an intruder who has penetrated a host connected to a major network can start a program that listens to all data flowing in the network, and whenever it encounters a 16-digit string, it checks if it is a valid credit card number (using the check digit), and saves the number plus any surrounding text (to catch expiration date and holder) in a file. When the intruder has collected a few thousand credit card numbers, he makes smallish mail-order purchases from a few thousand stores around the world, and disappears when the goods arrive but before anyone suspects anything.

Businesses have trade secrets, patent applications in preparation, pricing information, subcontractor information, client data, personnel data, financial information, etc. Currently, anyone with access to the network (any machine on the network) can listen to anything that goes in the network, without any regard to normal access restrictions.

Many companies are not aware that information can so easily be recovered from the network. They trust that their data is safe since nobody is supposed to know that there is sensitive information in the network, or because so much other data is transferred in the network. This is not a safe policy.

Individual persons also have confidential information, such as diaries, love letters, health care documents, information about their personal interests and habits, professional data, job applications, tax reports, political documents, unpublished manuscripts, etc.

There is also another frightening aspect about the poor security of communications. Computer storage and analysis capability has increased so much that it is feasible for governments, major companies, and criminal organizations to automatically analyze, identify, classify, and file information about millions of people over the years. Because most of the work can be automated, the cost of collecting this information is getting very low.

Government agencies may be able to monitor major communication systems, telephones, fax, computer networks, etc., and passively collect huge amounts of information about all people with any significant position in the society. Most of this information is not sensitive, and many people would say there is no harm in someone getting that information. However, the information starts to get sensitive when someone has enough of it. You may not mind someone knowing what you bought from the shop one random day, but you might not like someone knowing every small thing you have bought in the last ten years.

If the government some day starts to move into a more totalitarian direction, there is considerable danger of an ultimate totalitarian state. With enough information (the automatically collected records of an individual can be manually analyzed when the person becomes interesting), one can form a very detailed picture of the individual's interests, opinions, beliefs, habits, friends, lovers, weaknesses, etc. This information can be used to 1) locate any persons who might oppose the new system 2) use deception to disturb any organizations which might rise against the government 3) eliminate difficult individuals without anyone understanding what happened. Additionally, if the government can monitor communications too effectively, it becomes too easy to locate and eliminate any persons distributing information contrary to the official truth.

Fighting crime and terrorism are often used as grounds for domestic surveillance and restricting encryption. These are good goals, but there is considerable danger that the surveillance data starts to get used for questionable purposes. I find that it is better to tolerate a small amount of crime in the society than to let the society become fully controlled. I am in favor of a fairly strong state, but the state must never get so strong that people become unable to spread contra-official information and unable to overturn the government if it is bad. The danger is that when you notice that the government is too powerful, it is too late. Also, the real power may not be where the official government is.

For these reasons (privacy, protecting trade secrets, and making it more difficult to create a totalitarian state), I think that strong cryptography should be integrated to the tools we use every day. Using it causes no harm (except for those who wish to monitor everything), but not using it can cause huge problems. If the society changes in undesirable ways, then it will be to late to start encrypting.

Encryption has had a "military" or "classified" flavor to it. There are no longer any grounds for this. The military can and will use its own encryption; that is no excuse to prevent the civilians from protecting their privacy and secrets. Information on strong encryption is available in every major bookstore, scientific library, and patent office around the world, and strong encryption software is available in every country on the Internet.

Some people would like to make it illegal to use encryption, or to force people to use encryption that governments can break. This approach offers no protection if the government turns bad. Also, the "bad guys" will be using true strong encryption anyway. Thus, any "key escrow encryption" or whatever it might be called only serves to help monitor the ordinary people and petty criminals; it does not help against powerful criminals, terrorists, or espionage, because they will know how to use strong encryption anyway.


Thanks also go to Philip Zimmermann, whose PGP software and the associated legal battle provided inspiration, motivation, and many useful techniques, and to Bruce Schneier whose book Applied Cryptography has done a great service in widely distributing knowledge about cryptographic methods.


ssh-1.0.0 README 1995-07-12

Phil Zimmermann on encryption and privacy, from 1991, updated 1999

Phil Zimmermann wrote the encryption tool Pretty Good Privacy (PGP) in 1991 to promote privacy and to help keep encryption, and thus privacy, legal around the world. He faced considerable difficulty in the United States until PGP was published abroad and re-imported in a very visible, public manner.

Why I Wrote PGP
Part of the Original 1991 PGP User's Guide (updated in 1999)

"Whatever you do will be insignificant, but it is very important that you do it."

–Mahatma Gandhi.

It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing your taxes, or having a secret romance. Or you may be communicating with a political dissident in a repressive country. Whatever it is, you don't want your private electronic mail (email) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution.

The right to privacy is spread implicitly throughout the Bill of Rights. But when the United States Constitution was framed, the Founding Fathers saw no need to explicitly spell out the right to a private conversation. That would have been silly. Two hundred years ago, all conversations were private. If someone else was within earshot, you could just go out behind the barn and have your conversation there. No one could listen in without your knowledge. The right to a private conversation was a natural right, not just in a philosophical sense, but in a law-of-physics sense, given the technology of the time.

But with the coming of the information age, starting with the invention of the telephone, all that has changed. Now most of our conversations are conducted electronically. This allows our most intimate conversations to be exposed without our knowledge. Cellular phone calls may be monitored by anyone with a radio. Electronic mail, sent across the Internet, is no more secure than cellular phone calls. Email is rapidly replacing postal mail, becoming the norm for everyone, not the novelty it was in the past.

Until recently, if the government wanted to violate the privacy of ordinary citizens, they had to expend a certain amount of expense and labor to intercept and steam open and read paper mail. Or they had to listen to and possibly transcribe spoken telephone conversation, at least before automatic voice recognition technology became available. This kind of labor-intensive monitoring was not practical on a large scale. It was only done in important cases when it seemed worthwhile. This is like catching one fish at a time, with a hook and line. Today, email can be routinely and automatically scanned for interesting keywords, on a vast scale, without detection. This is like driftnet fishing. And exponential growth in computer power is making the same thing possible with voice traffic.

Perhaps you think your email is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide, then why don't you always send your paper mail on postcards? Why not submit to drug testing on demand? Why require a warrant for police searches of your house? Are you trying to hide something? If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut? Do law-abiding citizens have any need to encrypt their email?

What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their email, innocent or not, so that no one drew suspicion by asserting their email privacy with encryption. Think of it as a form of solidarity.

Senate Bill 266, a 1991 omnibus anticrime bill, had an unsettling measure buried in it. If this non-binding resolution had become real law, it would have forced manufacturers of secure communications equipment to insert special "trap doors" in their products, so that the government could read anyone's encrypted messages. It reads, "It is the sense of Congress that providers of electronic communications services and manufacturers of electronic communications service equipment shall ensure that communications systems permit the government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law." It was this bill that led me to publish PGP electronically for free that year, shortly before the measure was defeated after vigorous protest by civil libertarians and industry groups.

The 1994 Communications Assistance for Law Enforcement Act (CALEA) mandated that phone companies install remote wiretapping ports into their central office digital switches, creating a new technology infrastructure for "point-and-click" wiretapping, so that federal agents no longer have to go out and attach alligator clips to phone lines. Now they will be able to sit in their headquarters in Washington and listen in on your phone calls. Of course, the law still requires a court order for a wiretap. But while technology infrastructures can persist for generations, laws and policies can change overnight. Once a communications infrastructure optimized for surveillance becomes entrenched, a shift in political conditions may lead to abuse of this new-found power. Political conditions may shift with the election of a new government, or perhaps more abruptly from the bombing of a federal building.

A year after the CALEA passed, the FBI disclosed plans to require the phone companies to build into their infrastructure the capacity to simultaneously wiretap 1 percent of all phone calls in all major U.S. cities. This would represent more than a thousandfold increase over previous levels in the number of phones that could be wiretapped. In previous years, there were only about a thousand court-ordered wiretaps in the United States per year, at the federal, state, and local levels combined. It's hard to see how the government could even employ enough judges to sign enough wiretap orders to wiretap 1 percent of all our phone calls, much less hire enough federal agents to sit and listen to all that traffic in real time. The only plausible way of processing that amount of traffic is a massive Orwellian application of automated voice recognition technology to sift through it all, searching for interesting keywords or searching for a particular speaker's voice. If the government doesn't find the target in the first 1 percent sample, the wiretaps can be shifted over to a different 1 percent until the target is found, or until everyone's phone line has been checked for subversive traffic. The FBI said they need this capacity to plan for the future. This plan sparked such outrage that it was defeated in Congress. But the mere fact that the FBI even asked for these broad powers is revealing of their agenda.

Advances in technology will not permit the maintenance of the status quo, as far as privacy is concerned. The status quo is unstable. If we do nothing, new technologies will give the government new automatic surveillance capabilities that Stalin could never have dreamed of. The only way to hold the line on privacy in the information age is strong cryptography.

You don't have to distrust the government to want to use cryptography. Your business can be wiretapped by business rivals, organized crime, or foreign governments. Several foreign governments, for example, admit to using their signals intelligence against companies from other countries to give their own corporations a competitive edge. Ironically, the United States government's restrictions on cryptography in the 1990's have weakened U.S. corporate defenses against foreign intelligence and organized crime.

The government knows what a pivotal role cryptography is destined to play in the power relationship with its people. In April 1993, the Clinton administration unveiled a bold new encryption policy initiative, which had been under development at the National Security Agency (NSA) since the start of the Bush administration. The centerpiece of this initiative was a government-built encryption device, called the Clipper chip, containing a new classified NSA encryption algorithm. The government tried to encourage private industry to design it into all their secure communication products, such as secure phones, secure faxes, and so on. AT&T put Clipper into its secure voice products. The catch: At the time of manufacture, each Clipper chip is loaded with its own unique key, and the government gets to keep a copy, placed in escrow. Not to worry, though–the government promises that they will use these keys to read your traffic only "when duly authorized by law." Of course, to make Clipper completely effective, the next logical step would be to outlaw other forms of cryptography.

The government initially claimed that using Clipper would be voluntary, that no one would be forced to use it instead of other types of cryptography. But the public reaction against the Clipper chip was strong, stronger than the government anticipated. The computer industry monolithically proclaimed its opposition to using Clipper. FBI director Louis Freeh responded to a question in a press conference in 1994 by saying that if Clipper failed to gain public support, and FBI wiretaps were shut out by non-government-controlled cryptography, his office would have no choice but to seek legislative relief. Later, in the aftermath of the Oklahoma City tragedy, Mr. Freeh testified before the Senate Judiciary Committee that public availability of strong cryptography must be curtailed by the government (although no one had suggested that cryptography was used by the bombers).

The government has a track record that does not inspire confidence that they will never abuse our civil liberties. The FBI's COINTELPRO program targeted groups that opposed government policies. They spied on the antiwar movement and the civil rights movement. They wiretapped the phone of Martin Luther King. Nixon had his enemies list. Then there was the Watergate mess. More recently, Congress has either attempted to or succeeded in passing laws curtailing our civil liberties on the Internet. Some elements of the Clinton White House collected confidential FBI files on Republican civil servants, conceivably for political exploitation. And some overzealous prosecutors have shown a willingness to go to the ends of the Earth in pursuit of exposing sexual indiscretions of political enemies. At no time in the past century has public distrust of the government been so broadly distributed across the political spectrum, as it is today.

Throughout the 1990s, I figured that if we want to resist this unsettling trend in the government to outlaw cryptography, one measure we can apply is to use cryptography as much as we can now while it's still legal. When use of strong cryptography becomes popular, it's harder for the government to criminalize it. Therefore, using PGP is good for preserving democracy. If privacy is outlawed, only outlaws will have privacy.

It appears that the deployment of PGP must have worked, along with years of steady public outcry and industry pressure to relax the export controls. In the closing months of 1999, the Clinton administration announced a radical shift in export policy for crypto technology. They essentially threw out the whole export control regime. Now, we are finally able to export strong cryptography, with no upper limits on strength. It has been a long struggle, but we have finally won, at least on the export control front in the US. Now we must continue our efforts to deploy strong crypto, to blunt the effects of increasing surveillance efforts on the Internet by various governments. And we still need to entrench our right to use it domestically over the objections of the FBI.

PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That's why I wrote it.

Philip R. Zimmermann
Boulder, Colorado
June 1991 (updated 1999)[6]

Original Press Release for OpenSSH edit

Below is the original press release for OpenSSH sent back in 1999.[19]

Date: Mon, 25 Oct 1999 00:04:29 -0600 (MDT)
From: Louis Bertrand <louis>
To: Liz Coolbaugh <lwn>
Subject: OpenBSD Press Release: OpenSSH integrated into operating system


OpenSSH: Secure Shell integrated into OpenBSD operating system

Source: OpenBSD

Louis Bertrand, OpenBSD media relations
Bertrand Technical Services
Tel: (905) 623-8925 Fax: (905) 623-3852
Theo de Raadt, OpenBSD lead developer

Project Web site:

OpenSSH: Secure Shell integrated into OpenBSD Secure communications package no longer third-party add-on

[October 25, 1999: Calgary, Canada] -- The OpenBSD developers are pleased to announce the release of OpenSSH, a free implementation of the popular Secure Shell encrypted communications package. OpenSSH, to be released with OpenBSD 2.6, is compatible with both SSH 1.3 and 1.5 protocols and dodges most restrictions on the free distribution of strong cryptography.

OpenSSH is based on a free release of SSH by Tatu Ylonen, with major changes to remove proprietary code and bring it up to current security and functionality standards. Secure Shell operates like the popular TELNET remote terminal package but with an encrypted link between the user and the remote server. SSH also allows "tunnelling" of network services through the scrambled connection for added privacy. OpenSSH has been tested to interoperate with ssh-1.2.27 from SSH Communications, and the TTSSH and SecureCRT Windows clients.

"Network sessions involving strong cryptographic security are a requirement in the modern world," says lead developer Theo de Raadt. "Everyone needs this. People using the telnet or rlogin protocols are not aware of the extreme danger posed by password sniffing and session hijacking."

In previous releases of OpenBSD, users were urged to download SSH as soon as possible after installing the OS. Without SSH, terminal sessions transmitted in clear text allow eavesdroppers on the Internet to capture user names and password combinations and thus bypass the security measures in the operating system.

"I asked everyone `what is the first thing you do after installing OpenBSD?' Everyone gave me the same answer: they installed ssh," says de Raadt. "That's a pain, so we've made it much easier."

All proprietary code in the original distribution was replaced, along with some libraries burdened with the restrictive GNU Public License (GPL). Much of the actual cryptographic code was replaced by calls to the crypto libraries built into OpenBSD. The source code is now completely freely re-useable, and vendors are encouraged to re-use it if they need ssh functionality.

OpenSSH relies on the Secure Sockets Layer library (libssl) which incorporates the RSA public-key cryptography system. RSA is patented in the US and OpenBSD developers must work around the patent restrictions. Users outside the US may download a libssl file based on the patent-free OpenSSL implementation. For US non-commercial users, OpenBSD is preparing a libssl based on the patented RSAREF code. Unfortunately, the US legal framework effectively bans US commercial users from using OpenSSH, and curtails freedom of choice in that market.

OpenSSH was developed and integrated into OpenBSD by Niels Provos, Theo de Raadt, Markus Friedl for cryptographic work; Dug Song, Aaron Campbell, and others for various non-crypto contributions; and Bob Beck for helping with the openssl library issues. The original SSH was written by Tatu Ylonen. Bjoern Groenvall and Holger Trapp did the initial work to free the distribution.

OpenBSD is an Internet-based volunteer effort to produce a secure multi-platform operating system with built-in support for cryptography. It has been described in the press as the world's most secure operating system. For more information about OpenSSH and OpenBSD, see the project Web pages at

Source: OpenBSD

The European Union (EU) on Encryption edit

During 2000, the European Commission investigated the state of international and industrial electronic espionage. Counter-measures and solutions were investigated as well as the risks. The result was a resolution containing a summary of the findings and a series of recommended actions for Member States to carry out and goals to meet. Recommendations to EU Member States from the European Parliament resolution ECHELON, A5-0264/2001 (emphasis added):

"29. Urges the Commission and Member States to devise appropriate measures to promote, develop and manufacture European encryption technology and software and above all to support projects at developing user-friendly open-source encryption software;"
. . .
"33. Calls on the Community institutions and the public administrations of the Member States to provide training for their staff and make their staff familiar with new encryption technologies and techniques by means of the necessary practical training and courses;"[20]

It was found during the investigation that businesses were the most at risk and the most vulnerable and that widespread use of open source encryption technology is to be encouraged. The same can be said even today.

SSH Protocols edit

OpenSSH uses the SSH protocol, which connects over TCP. Normally, one SSH session is made per TCP connection, but multiple sessions can be multiplexed over a single TCP connection if set up in advance. The current set of Secure Shell protocols is SSH2, a rewrite of the old, deprecated SSH1 protocol, with significant improvements in security, performance, and portability. SSH2 is now the default, and SSH1 support has been removed from both the client and server.
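Multiplexing must be arranged on the client side before the first connection is made. A minimal sketch for ~/.ssh/config follows; the host alias and hostname are hypothetical, while the directives themselves are standard ssh_config(5) options:

```
# Reuse one TCP connection for all sessions to this destination.
Host work
        Hostname work.example.org
        ControlMaster auto
        ControlPath ~/.ssh/mux-%r@%h:%p
        ControlPersist 10m
```

The first `ssh work` sets up the master connection; subsequent sessions attach to the socket named by ControlPath rather than opening new TCP connections, until ControlPersist times out.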

Sequence Diagram for SSH Password Authentication

The Secure Shell protocol is an open standard. As such, it is vendor-neutral and maintained by the Internet Engineering Task Force (IETF). The current protocol is described in RFC 4250 through RFC 4256 and standardized by the IETF secsh working group. The overall structure of SSH2 is described in RFC 4251, The Secure Shell (SSH) Protocol Architecture.

The SSH protocol is composed of three layers: the transport layer, the authentication layer, and the connection layer.

SSH-CONNECT – The connection layer runs over the user authentication protocol. It multiplexes many concurrent logical channels over the single encrypted, authenticated connection. It allows tunneling of login sessions and TCP forwarding, and provides a flow control service for these channels. Additionally, various channel-specific options can be negotiated. This layer manages the SSH session, session multiplexing, X11 forwarding, TCP forwarding, the shell, remote program execution, and invocation of the SFTP subsystem.
SSH-USERAUTH – The user authentication layer authenticates the client-side to the server. It uses the established connection and runs on top of the transport layer. It provides several mechanisms for user authentication. These include password authentication, public-key or host-based authentication mechanisms, challenge-response, pluggable authentication modules (PAM), Generic Security Services API (GSSAPI) and even dongles.
SSH-TRANS – The transport layer provides server authentication, confidentiality and data integrity over TCP. It does this through algorithm negotiation and a key exchange. The key exchange includes server authentication and results in a cryptographically secured connection: it provides integrity, confidentiality and optional compression. [21]

Among the differences between the current protocol, SSH2, and the deprecated SSH1 protocol is that SSH2 uses only host keys to authenticate the server, whereas SSH1 used both server and host keys. There is not much to add about the protocols which is not already covered with more detail and authority in RFC 4251 [22].

SSH File Transfer Protocol (SFTP) edit

The SSH File Transfer Protocol (SFTP) is a binary protocol to provide secure file transfer, access and management.

SFTP was added by Markus Friedl on the server side in time for the 2.3.0 release of OpenSSH in November 2000. Damien Miller added support for SFTP to the client side in time for 2.5.0. Since then, many others have contributed to both the client and the server.

SFTP is not FTPS edit

For basic file transfer, nothing more is needed than an account on the machine with the OpenSSH server. SFTP support is built into the OpenSSH server package. The SFTP protocol, in contrast to old FTP, has been designed from the ground up to be as secure as possible for both login and data transfer.

Samba is an interoperability suite providing fast file and print services for any client which can use the SMB/CIFS protocol. It is most often used for file sharing at the level of the local area network.
AFS, or the Andrew File System, is a distributed file system providing file sharing at an institutional level across a geographically diverse institution or set of collaborating institutions. Notably it provides a set of trusted servers and a homogeneous, location-transparent file name space to all the client workstations.

Unless the use-case calls for publicly available, read-only downloads, don't bother trying to fiddle with FTP. It is the FTP protocol itself that is inherently insecure. It is fine for read-only, public data transfer: the programs vsftpd and proftpd, for example, are secure insofar as the server software itself goes, but the protocol they implement remains insecure. In other words, if you need to provide read-only, publicly available downloads, FTP may be the right tool; otherwise forget about it. Nearly always when users ask for "FTP" they do not mean specifically the old file transfer protocol from 1971 as described in RFC 114, but a generic means of file transfer, and there are many ways to solve that problem. This is especially true since the next part of the request is usually how to make it secure. The name "FTP" is frequently misused generically to mean any file transfer utility, much the same way as the term "Coke" is used in parts of the Southern United States to mean any carbonated soft drink, not just Coca-Cola. Consider SFTP or, for larger groups, SSHFS, Samba, or AFS. While old FTP succeeded very well in its main goal, promoting the use of networked computers by allowing users at any host on the network to use the file system of any cooperating host, it cannot be made secure. There is nothing to be done about that, so it is past time to get over it.

Again, it is the protocol itself, FTP, which is the problem.[23] With FTP, the data, passwords and user name are all sent back and forth unencrypted.[24] Anyone on the client's subnet, the server's subnet or any subnet in between can 'sniff' the passwords and data when FTP is used. With extra effort it is possible to wrap FTP inside SSL or TLS, thus creating FTPS. However, tunneling FTP over SSL/TLS is complex to do and far from an optimum solution.

Unfortunately, because of that name confusion, combined with the large number of posts and discussions generated by complex, nit-picky tasks like wrapping FTP in SSL to provide FTPS, the wrong way still turns up commonly in web searches about file transfer. In contrast, easy, relatively painless solutions vanish from view because it is rarely necessary to post about them; an easy solution can be summed up in very few lines, perhaps in a single answer. Thus there is still a lot of talk online about 'securing' FTP and very little mention of using SFTP. It is a vicious cycle that this book hopes to help break: difficult tasks mean lots of discussion and noise, lots of discussion and noise means a strong web presence, and a strong web presence means a high Google ranking.

SFTP tools are very common but might be taken for granted and thus overlooked. They are easy to use and more functional than old FTP clients; in fact, a lot of improvements have been made in usability. There is no shortage of common, GUI-based SFTP clients for transferring files: Filezilla, Konqueror, Dolphin, Nautilus, Cyberduck, Fugu, and Fetch top the list, but there are many more, and most are Free Software. Again, these SFTP clients are very easy to use. For example, in Konqueror, just type in the URL of the SFTP server, such as sftp://xx.yy.zz.aa/, where the server name or address is xx.yy.zz.aa.


If it is desirable to start with a particular directory, then that too can be specified as part of the URL, for example sftp://xx.yy.zz.aa/var/www/.


One special client worth knowing about is sshfs. With sshfs as an SFTP client, the other machine's file system becomes accessible as an open folder on your machine's local file system. In that way, any program you normally use to work with files, such as LibreOffice, Inkscape or GIMP, can access the remote machine via that folder.
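As a sketch, mounting and unmounting with sshfs might look like the following. The host, account, and mount point are hypothetical, and sshfs itself is a separate FUSE-based package, not part of OpenSSH:

```shell
$ mkdir -p ~/remote
$ sshfs ~/remote
$ ls ~/remote                     # now browsable by any local program
$ fusermount -u ~/remote          # unmount when finished (umount on BSD and macOS)
```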

Background of FTP edit

FTP dates from the 1970s. It is a well-proven workhorse, but one from an era when, if you were on the net, you were supposed to be there, and any trouble could usually be cleared up with a short phone call or an e-mail or two. It sends the login name, password and all of the data unencrypted for anyone to intercept. FTP clients can connect to the FTP server in either active or passive mode. Both modes[25] use two ports, one for control and one for data. In active mode, after the client makes a connection to the FTP server, it then accepts an incoming connection initiated from the server for data transfer. In passive mode, after the client makes the control connection, the server responds with information about a second port for data transfer and the client initiates that second connection. FTP is most relevant now as Anonymous FTP, which is still excellent for read-only downloads without login. Other options for offering read-only data include the web (HTTP or HTTPS) and peer-to-peer protocols like BitTorrent, with preference lately given to HTTPS for small files and BitTorrent for large files or large groups of files.

Using tcpdump to show FTP activity edit

An illustration of how the old FTP protocol is insecure can be had from the utility tcpdump, which can show what goes over the network during an Anonymous FTP session, or for that matter any FTP session. See the manual page for tcpdump for an explanation of the individual arguments; the usage example below displays the first ten FTP or FTP-data packets passing between the client and the server.

The output below shows an excerpt from the output of tcpdump which captured packets between an FTP client and the FTP server, one line per packet.

$ sudo tcpdump -q -s 0 -c 10 -A -i eth0 \
"tcp and (port ftp or port ftp-data)"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
:18:36.010820 IP desk.55227 > server.ftp: tcp 16 E..D..@.@.....1.[.X.....G.r. ........l..... ."......USER anonymous 
:18:36.073192 IP server.ftp > desk.55227: tcp 0 E..4jX@.7.3.[.X...1..... ...G.r#........... .....".. 
:18:36.074019 IP server.ftp > desk.55227: tcp 34 E..VjY@.7.3.[.X...1..... ...G.r#....Y...... ....."..331 Please specify the password. 
:18:36.074042 IP desk.55227 > server.ftp: tcp 0 E..4..@.@..+..1.[.X.....G.r# ..)........... ."...... 
:18:42.098941 IP desk.55227 > server.ftp: tcp 23 E..K..@.@.....1.[.X.....G.r# ..)....gv..... .".w....PASS 
:18:42.162692 IP server.ftp > desk.55227: tcp 23 E..KjZ@.7.3.[.X...1..... ..)G.r:........... .....".w230 Login successful.
:18:43.431827 IP server.ftp > desk.55227: tcp 14 E..Bj\@.7.3.[.X...1..... ..SG.rF.....j..... ....."..221 Goodbye.

As can be seen in lines 3 and 7, data such as text from the server is visible. In lines 1 and 5, text entered by the user is visible, in this case including the user name and password used to log in. Fortunately the session is Anonymous FTP, which is read-only and used for downloading; it remains a rather efficient way to publish material for download. For Anonymous FTP, the user name is always "anonymous", the password is by convention the user's e-mail address, and the server's data is always read-only.
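By contrast, the same kind of capture pointed at an SSH session shows no readable credentials or file contents. A quick check, assuming the same interface eth0 as above:

```shell
$ sudo tcpdump -q -s 0 -c 10 -A -i eth0 "tcp port 22"
```

Apart from the initial cleartext protocol version exchange, the captured payload is ciphertext.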

If you have the package for the OpenSSH server already installed, no further configuration of the server is needed to start using SFTP for file transfers. Comparatively speaking, FTPS is significantly more secure than plain FTP, but if you want remote login access, both FTP and FTPS should still be avoided. A very large reason to avoid both is to save work.

On FTPS edit

FTPS is FTP tunneled over SSL or TLS. A goal of FTP was to encourage the use of remote computers and, along with the web, it succeeded at that. A goal of FTPS was to secure logins and transfers, and it was a necessary step in securing file transfers with the legacy protocol. However, since SFTP is so much easier to deploy and most systems now include both graphical and text-based SFTP clients, FTPS can be considered deprecated for most occasions.

Some good background material can be found in the Requests for Comments (RFCs) for FTP and FTPS. For most current uses, though, SFTP and even HTTPS are better matches and largely supersede FTPS. See the section on Client Applications for an idea of the SFTP clients available.

Privilege Separation edit

Sequence Diagram for OpenSSH Privilege Separation

Privilege separation is when a process is divided into sub-processes, each of which has just enough access to just the right services to do its part of the job. An underlying principle is that of least privilege, which is where each process has exactly enough privileges to accomplish a task, neither more nor less. The goal of privilege separation is to compartmentalize any corruption and prevent a corrupt process from accessing other parts of the system. Privilege separation is applied in OpenSSH by using several levels of access, some higher, some lower, to run sshd(8) and its subsystems and components. The SSH server ➊ starts out with a privileged process ➋ which then creates an unprivileged process ➌ to work with the network traffic. Once the user has authenticated, another unprivileged process is created ➍ with the privileges of that authenticated user. See the "Sequence Diagram for OpenSSH Privilege Separation". As seen in the diagram, a total of four processes get run to create an SSH session. One process, the server, remains and listens for new connections and spawns new child processes.

$ ps -ax -o user,pid,ppid,state,start,command | awk '/sshd/ || NR==1' 
root      1473     1 I     05:44:01  sshd: /usr/sbin/sshd [listener] 0 of 10-10

It is this privileged process that listens for the initial connection from clients. Here it is seen waiting and listening on port 22.

$ netstat -ntlp | awk '/sshd/ || NR<=2'
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0*               LISTEN      1473/sshd       
tcp6       0      0 :::22                   :::*                    LISTEN      1473/sshd

After the initial connection while waiting for password authentication from user 'fred', a privileged monitor process supervises an unprivileged process by user 'sshd' which handles the contact with the remote user's client.

$ ps -ax -o user,pid,ppid,state,start,command | awk '/sshd/ || NR==1' 
root      1473     1 S 05:44:12 sshd: /usr/sbin/sshd [listener] 1 of 10-10
root      9481  1473 S 14:40:37 sshd: fred [priv]   
sshd      9482  9481 S 14:40:37 sshd: fred [net]

Then after authentication is completed and a session established for user 'fred', a new privileged monitor process is created to supervise the process running as user 'fred'. At that point the other process running as user 'sshd' has gone away.

$ ps -ax -o user,pid,ppid,state,start,command | awk '/sshd/ || NR==1' 
root      1473     1 S 05:44:12 sshd: /usr/sbin/sshd [listener] 0 of 10-10
root      9481  1473 S 14:40:37 sshd: fred [priv]   
fred      9579  9481 S 14:42:02 sshd: fred@pts/30

Privilege separation has been the default in OpenSSH since version 3.3.[26] Since version 5.9, privilege separation further applies mandatory restrictions on which system calls the privilege-separated child can perform. The intent is to prevent a compromised privilege-separated child from being used to attack other hosts, either by opening sockets and proxying or by probing the local kernel attack surface. [27] Since version 6.1, this sandboxing has been the default.

Other SSH Implementations edit

Dropbear edit

Dropbear is a smaller, modular, open source SSH2 client and server available for all regular POSIX platforms. Dropbear is partially a derivative of OpenSSH and it is often used in embedded systems because very small binaries can be produced. Functions that are not needed can be left out of the binary, leaving a lean executable. Thus a working SSH server can be boiled down to 110KB by trimming away various functions.

Many distributions and products use Dropbear. This includes OpenWRT, gumstix, Tomato Firmware, PSPSSH, DSLinux, Meego, OpenMoko, Ångström (for Zaurus), ttylinux, Sisela, Trinux, SliTaz, Netcomm, US Robotics, some Motorola phones, and many, many more.

Tectia edit

Tectia is from SSH Communications Security Corporation which is based in Finland. It is a closed-source SSH client and server with FIPS support.

Solaris Secure Shell (SunSSH) edit

Sun SSH is a fork of OpenSSH 2.3, with many subsequent changes.

GlobalSCAPE EFT Server edit

EFT Server is a closed binary that can include SSH and SFTP modules as extensions.

Gravitational Teleport edit

Teleport provides an Apache-licensed SSH server and client written in Go. It supports only IPv4 and not IPv6 at this time.

Client Applications edit

On the client side, ssh(1), scp(1), and sftp(1) provide a wide range of capabilities. Interactive logins and file transfers are just the tip of the iceberg.

ssh(1) - The basic login shell-like client program.
sftp(1) - FTP-like program that works using the SSH protocol.
scp(1) - File copy program that acts like rcp(1).
ssh_config(5) - The client configuration file.

The SSH client edit

ssh(1) is a program which provides the client side for secure, encrypted communications between hosts over an insecure network. Its main use is for logging into and running programs on a remote host. It can also be used to secure remote X11 connections and forward arbitrary TCP ports to secure legacy protocols. ssh was made, in part, to replace insecure tools like rsh and telnet. It has largely succeeded at this goal. rsh and telnet are rarely seen anymore for interactive sessions or anywhere else. ssh can authenticate using regular passwords or with the help of a public-private key pair. More options, such as use of Kerberos, smartcards, or one-time passwords can be configured.

Remote login, authenticating via password:

$ ssh

Another way of logging in to the same account:

$ ssh -l fred

Remote programs can be run interactively when the client is run via the shell on the remote host. Or they can be run directly when passed as an argument to the SSH client. They can even be pre-configured in the authentication key or the server configuration.

Run uname(1) on the remote machine:

$ ssh -l fred "uname -a"

See what file systems are mounted and how much space is used there:

$ ssh -l fred "mount; df -h"

It is possible to configure in great detail which programs are allowed by which accounts. There are many combinations of options that give extra capabilities, such as re-using a single connection for multiple sessions or passing through intermediary machines. The level of granularity can be increased even more with the help of sudo(8).

SSH Client Environment Variables -- Server Side edit

Of course the foundation of most SSH activity centers around use of the shell. Upon a successful connection, OpenSSH sets several environment variables.

SSH_CLIENT='192.0.2.11 36673 22'
SSH_CONNECTION='192.0.2.11 36673 192.0.2.1 22'

SSH_CLIENT shows the address of the client system, the outgoing port number on the client system, and the incoming port on the server. SSH_CONNECTION shows the address of the client, the outgoing port on the client, the address of the server and the incoming port on the server. SSH_TTY names the pseudo-terminal device, abbreviated PTY, on the server used by the connection. For more information on pseudo-terminals see ptm(4), tty(1) and tty(4).
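A login script can take SSH_CONNECTION apart into its four fields using plain POSIX shell. The following is a minimal sketch; the value assigned is a stand-in using documentation addresses, since on a real session sshd(8) sets the variable itself:

```shell
#!/bin/sh
# SSH_CONNECTION holds: client-address client-port server-address server-port
SSH_CONNECTION='192.0.2.11 36673 192.0.2.1 22'   # stand-in value

set -- $SSH_CONNECTION     # split on whitespace into $1 through $4
client_addr=$1; client_port=$2; server_addr=$3; server_port=$4

echo "client $client_addr:$client_port -> server $server_addr:$server_port"
```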

The login session can be constrained to a single program with a predetermined set of parameters using ForceCommand in the server configuration or Command="..." in the authorized keys file. When that happens an additional environment variable SSH_ORIGINAL_COMMAND gets set.

SSH_ORIGINAL_COMMAND=echo "hello, world"
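A ForceCommand target is often a small wrapper script that inspects SSH_ORIGINAL_COMMAND and runs only commands from an approved list. The sketch below is hypothetical; the allowed commands and the fallback default exist only so the example stands alone:

```shell
#!/bin/sh
# sshd(8) puts the command line requested by the client into
# SSH_ORIGINAL_COMMAND when ForceCommand or command="..." is in effect.
# The default below is only so this sketch runs outside a real session.
SSH_ORIGINAL_COMMAND=${SSH_ORIGINAL_COMMAND:-'echo hello, world'}

case $SSH_ORIGINAL_COMMAND in
    'echo hello, world'|'uptime'|'df -h')
        $SSH_ORIGINAL_COMMAND          # on the approved list: run it
        ;;
    *)
        echo "rejected: $SSH_ORIGINAL_COMMAND" >&2
        exit 127
        ;;
esac
```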

If the server has ExposeAuthInfo set, then the SSH_USER_AUTH environment variable points to a temporary file listing details about the authentication methods used to start the current session.


The file is removed when the session ends.

Other variables are set depending on the user's shell settings and the system's own settings.

SSH Client Configuration Options edit

GSSAPI, or Generic Security Service Application Program Interface, is a standard interface defined by RFC 2743 which provides independent modules with a means for generic authentication and secure messaging. Kerberos V is one of the more common examples of a service using it.

Configuration options can be passed to ssh(1) as arguments, see the manual pages for ssh(1) and ssh_config(5) for the full list.

Connect with very verbose output and GSSAPI authentication:

$ ssh -vv -K -l account

A subset of options can be defined on the server host in the user's own authorized keys file, in conjunction with specific keys. See sshd(8) for which subset exactly.

command="/usr/local/sbin/",no-pty ssh-rsa AAAAB3NzaC1yc2EAAAQEAsY6u71N...
command="/usr/games/wump",no-port-forwarding,no-pty ssh-ed25519 AAAAC3NzaC1lZDI1...
environment="gtm_dist=/usr/local/gtm/utf8",environment="gtm_principal_editing=NOINSERT:EDITING" ssh-rsa AAAA8a2s809poloh05yhh...

Note that some directives, like setting environment variables, are disabled by default and must be named in the server configuration before they are available to the client. More configuration directives can be set by the user in ~/.ssh/config or by the system administrator in /etc/ssh/ssh_config. These same configuration directives can be passed as arguments using -o. See ssh_config(5) for the full list with descriptions.

$ ssh -o "ServerAliveInterval=60" -o "Compression=yes" -l fred

The system administrators of the client host can set some global defaults in /etc/ssh/ssh_config. Some of these global settings can be targeted to a specific group or user by using a Match directive.
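For instance, here is a sketch of a client-wide configuration enabling compression for a single local account; the user name is hypothetical. Note that ssh_config(5) uses the first obtained value for each parameter, so the more specific Match block must come before the general default:

```
Match localuser fred
        Compression yes

Host *
        Compression no
```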

For example, if a particular SSH server is available via port 2022, it may be convenient to have the client try that port automatically. Some of OpenBSD's anonymous CVS servers accept SSH connections on this port. However, compression should not be used in this case because CVS already compresses its data, so it should be turned off. One could specify something like the following in the $HOME/.ssh/config configuration file so that the default port is 2022 and the connection is made without compression:

Host anoncvs
        Compression no
        Port 2022

See ssh_config(5) for the client side and sshd_config(5) for the server side for the full lists with descriptions.

The SFTP client edit

sftp(1) is an interactive file transfer program which performs all its operations over an encrypted SSH transport channel. It may also use many features of ssh(1), such as public key authentication and compression. It is also the name of the protocol used.

The SFTP protocol is similar in some ways to the now venerable File Transfer Protocol (FTP), except that the entire session, including the login, is encrypted. However, SFTP is not FTPS. The latter is old-fashioned FTP tunneled over SSL/TLS. In contrast, SFTP is a whole new protocol. sftp(1) can also be made to start in a specific directory on the remote host.

$ sftp fred@server.example.org:/var/www/

Frequently, SFTP is used to connect and log into a specified host and enter an interactive command mode. See the manual page for sftp(1) for the available interactive commands such as get, put, rename, and so on. The same configuration options that work for ssh(1) also apply to sftp(1): it accepts all ssh_config(5) options, and these can be passed along as arguments at run time. Some have explicit shortcuts.

$ sftp -i ~/.ssh/some.key.ed25519 fred@server.example.org

Others can be specified by naming them in full using the -o option.

$ sftp -o "ServerAliveInterval=60" -o "Compression=yes" fred@server.example.org

Another way to transfer files is to send or receive them automatically. If a non-interactive authentication method is used, the whole process can be automated using batch mode.

$ sftp -b session.batch -i ~/.ssh/some_key_rsa fred@server.example.org

Batch processing only works with non-interactive authentication.
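For example, a batch file is simply a list of interactive sftp(1) commands, one per line; the file name, directory, and file names below are hypothetical:

```shell
# Create a hypothetical batch file for use with sftp -b.
# The directory and file names are examples only.
cat > session.batch <<'EOF'
cd /var/reports
get report-latest.txt
put summary.txt
bye
EOF
```

It could then be run non-interactively with the -b option as shown above.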

The SCP client edit

scp(1) is used for encrypted transfers of files between hosts and is used much like regular cp(1). It is intended as a replacement for rcp from the original Berkeley Software Distribution (BSD). Since OpenSSH 9.0[28] it has been based on the SFTP protocol under the hood, whereas the old version was based on the RCP protocol. Both the old and the new versions use SSH to encrypt the connection.

The scp(1) client, unlike the SFTP client, is not based on any formal standard. It has aimed at doing more or less what old rcp did and responding in the same way, since the same program must be used at both ends of the connection and interoperability is required with other implementations of SSH. Changes in functionality would probably break that interoperability, so new features are more likely to be added to sftp(1), if at all. Thus, it is best to lean towards using sftp(1) instead when possible. However, recent versions are actually a front end for the SFTP protocol instead.

Copy from remote to local:

$ scp 'server.example.org:*.txt' .

Copy from local to remote, recursively:

$ scp -r /etc/ server.example.org:

Being a front end for the SFTP protocol now, the new scp(1) client can cover most but not all of the behavior of the old client and is not quite bug-for-bug compatible. One noticeable difference, fixed on the server side in OpenSSH 8.7 and later, was the absence of tilde (~) expansion, so a protocol extension has been added to support it. Another area where there can potentially be trouble is when there are shell metacharacters, such as * or ?, in the file names.

See also the section for the SFTP client above.

GUI Clients edit

There are a great many graphical utilities that support SFTP and SSH. Many started out as transfer utilities for the legacy FTP protocol and grew with the times to include SSH and SFTP support. Sadly, many retain the epithet "FTP program" despite modernization. Others are more general file managers that include SFTP support as one means of network transparency. Most if not all provide full SFTP support, including Kerberos authentication.

Below is a partial list to give an idea of the range of options available.

Bluefish is a website management tool and web page editor with built in support for SFTP. Closed source competitors XMetaL and Dreamweaver are said to have at least partial support for SFTP. No support for SFTP is available for Quanta+ or Kompozer as of this writing.

Cyberduck is a remote file browser for the Macintosh. It supports an impressive range of protocols in addition to SFTP.

Dolphin is a highly functional file manager for the KDE desktop, but it can also be run in other desktop environments. It includes SFTP support.

Fetch, by Fetch Softworks, is a reliable and well-known SFTP client for the Macintosh. It has been around since 1989 and started life as just an FTP client. It has many useful features combined with ease of use. It is closed source, but academic institutions are eligible for a free of charge site license.

Filezilla is presented as an FTP utility, but it has built-in support for SFTP. It is available for multiple platforms under a Free Software license, the GPL.

FireFTP is an SFTP plugin for Mozilla Firefox. Though it is presented as an FTP add-on, it supports SFTP. It is available under both the MIT license and the GPL.

Fugu, developed by the University of Michigan research systems unix group, is a graphical front-end for SFTP on the Macintosh.

gFTP is a multi-threaded file transfer client.

JuiceSSH is an SSH Client for Android/Linux. It uses the jsch Java implementation of SSH2.

Konqueror is a file manager and universal document viewer for the KDE desktop, but can also be run in other environments. It includes SFTP support.

lftp is a file transfer program that supports multiple protocols.

Midnight Commander is a visual file manager based on a text interface and thus usable over a terminal or console. It includes SFTP support.

Nautilus is the default file manager for the GNOME desktop, but it can also be run in other environments. It includes SFTP support.

PCManFM is an extremely fast, lightweight, yet feature-rich file manager with tabbed browsing which is the default for LXDE. It includes SFTP support.

PuTTY is another FOSS implementation of Telnet and SSH for both Windows and Unix platforms. It is released under the MIT license and includes an SFTP client, PSFTP, in addition to an xterm terminal emulator and other tools like a key agent, Pageant. It is written and maintained primarily by Simon Tatham.

Remmina is a remote desktop client written in GTK+ which supports multiple network protocols, including SSH.

RemoteShell is the default SSH client for MorphOS, written in C using the GUI library Magic User Interface (MUI). The operating system also contains the command-line tools ssh(1), scp(1) and sftp(1).

SecPanel is a GUI for managing and running SSH and scp connections. It is not a new implementation of the protocol or software suite, but sits on top of either of the SSH software suites.

Thunar is the default file manager for the XFCE desktop. It includes SFTP support.

Transfer is the default SFTP client for MorphOS, written in C using the GUI library Magic User Interface (MUI).

Yafc is Yet Another FTP Client and despite the name supports SFTP.

Client Configuration Files edit

Client configuration files can be per user or system wide, with the former taking precedence over the latter and run-time arguments in the shell overriding both. In these configuration files, one parameter per line is allowed. The syntax is the parameter name followed by its value or values. Empty lines and lines starting with the hash (#) are ignored. An equal sign (=) can be used instead of whitespace between the parameter name and the values. Values are case-sensitive, but parameter names are not. The first value assigned is used. For key files, the format is different.

With either type of file, there is no substitute for reading the relevant manual pages on the actual systems involved, especially because they match the specific versions which are in use.

System-wide Client Configuration Files edit

System-wide client files set the default configuration for all users of OpenSSH clients on that system. These defaults can be overridden in most cases by the user's own default settings in a local configuration file. Both can be overridden, in many cases, by specifying various options or parameters at run time. The prioritization is as follows:

  1. run time arguments via the shell
  2. user's own configuration
  3. system-wide configuration

The first value obtained is used. The user's own configuration file and the system-wide configuration file can also point to additional configuration files to be included using the Include directive starting with OpenSSH 7.3. The Include directive can be specified anywhere in the configuration file even inside a Match or Host block. Care should be used when nesting configurations.
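As a sketch, an Include line near the top of a user configuration might look like the following; the included file name is purely an example:

# ~/.ssh/config (excerpt); the included file name is an example only
Include ~/.ssh/config.d/work-hosts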

/etc/ssh/ssh_config edit

This file defines all the default settings for the client utilities for all users on that system. It must be readable by all users. The configuration options are described in detail in ssh_config(5).

Below, a shortcut named arc is made for connecting to a particular server:

Host arc
        Port 2022
        User fred
        IdentityFile ~/.ssh/id_rsa_arc

So with that configuration, it is enough to enter ssh arc and the rest of the information gets filled in automatically.

/etc/ssh/ssh_known_hosts edit

This contains the system-wide list of known host keys used to verify the identity of the remote host and thus hinder impersonation or eavesdropping. This file should be prepared by the system administrator to contain the public host keys of all necessary hosts. It should be world-readable.

See ~/.ssh/known_hosts below for more explanation or see sshd(8) for further details of the format of this file.

/etc/ssh/sshrc edit

This file resides on the server and programs in this file are executed there by ssh(1) when the user logs in, just before the user's shell or designated program is started. It is not run as root, but instead as the user who is logging in. See the sshd(8) manual page in the section "SSHRC" for more information. If it sends anything to stdout, that will interfere with SFTP sessions, among others. So if any output is produced at all, it should be sent to stderr or else a log file.

User-specific Client Configuration Files edit

Users can override the default system-wide client settings and choose their own defaults. For situations where the same change is made repeatedly it is recommended to add it to the user's local configuration.

Client-Side Files edit

These files reside on the client machine.

~/.ssh/config edit

The user's own configuration file which, where applicable, overrides the settings in the global client configuration file, /etc/ssh/ssh_config. The configuration options are described in detail in ssh_config(5).

This file must not be accessible to other users in any way. Set strict permissions: read/write for the user, and not accessible by others. It may be group-writable if and only if that user is the only member of the group in question.

Local Override of Client Defaults edit

The file is usually named ~/.ssh/config. However, a different configuration file can be specified at runtime using the -F option. General options intended to apply to all hosts can be set by matching all hosts and should be done at the end of the configuration file. The first match takes precedence, therefore more specific definitions must come first and more general overrides at the end of the file.

Host server1
        ServerAliveInterval	200

Host *
        ExitOnForwardFailure	yes
        Protocol	2
        ServerAliveInterval	400

Options given as runtime arguments will override even those in the configuration file. However, not all options can be set or overridden by the user. Those options which may not be set or overridden will be ignored.

~/.ssh/known_hosts edit

This file is local to the user account and contains the known keys for remote hosts. Often these are collected from the hosts when connecting for the first time, but they can be added manually. As with those keys stored in the global file, /etc/ssh/ssh_known_hosts, these keys are used to verify the identity of the remote host, thus protecting against impersonation or man-in-the-middle attacks. With each subsequent connection the key will be compared to the key provided by the remote server. If there is a match, the connection will proceed. If the match fails, ssh(1) will fail with an error message. If there is no key at all listed for that remote host, then the key's fingerprint will be displayed and there will be the option to automatically add the key to the file. This file can be created and edited manually, but if it does not exist it will be created automatically by ssh(1) when it first connects to a remote host.

The ~/.ssh/known_hosts file can use either hashed or clear text host names. Even with hashed names, it can still be searched using ssh-keygen(1) using the -F option.

$ ssh-keygen -F server.example.org

The default file to be searched will be ~/.ssh/known_hosts and the key is printed if found. A different file can be searched using the -f option. If a key must be removed from the file, the -R option works similarly to search by host and then remove it if found even if the host name is hashed.

$ ssh-keygen -R server.example.org -f ~/.ssh/known_hosts

When a key is removed, it will then be appended to the file ~/.ssh/known_hosts.old in case it is needed later. Again, see the manual page for sshd(8) for the format of these known_host files.

If a non-default file is used with either -F or -R then the name including the path must be specified using -f. But -f is optional if the default file is intended.
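The search-and-remove cycle can be sketched with a throwaway file; the host name here is a hypothetical example:

```shell
# Build a throwaway known_hosts file with one entry.
# The host name is an example only.
ssh-keygen -q -t ed25519 -N '' -f ./demo_hostkey
printf 'demo.example.org %s\n' "$(cut -d' ' -f1-2 ./demo_hostkey.pub)" \
    > ./demo_known_hosts

# -F finds the entry; -R removes it and saves a backup
# in demo_known_hosts.old.
ssh-keygen -F demo.example.org -f ./demo_known_hosts
ssh-keygen -R demo.example.org -f ./demo_known_hosts
```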

If the global file /etc/ssh/ssh_known_hosts is used then it should be prepared by the system administrator to contain the public host keys of all necessary hosts and it should be world-readable.

Manually Adding Public Keys to ~/.ssh/known_hosts edit

Manually adding public host keys to known_hosts is a matter of adding one unbroken line per key. How the key is obtained is not important, as long as it is complete, valid, and guaranteed to be the real key and not a fake. The utility ssh-keyscan(1) can fetch a key and ssh-keygen(1) can be used to show the fingerprint for verification. See examples in the cookbook chapter on Public Key Authentication for methods of verification. Again, the corresponding system-wide file is /etc/ssh/ssh_known_hosts.

About the Contents of the known_hosts Files edit

The known_hosts file is for verifying the identity of other systems. ssh(1) can automatically add keys to the user's file, but they can be added manually as well. The file contains a list of public keys for all the hosts which the user has connected to. It can also include public keys for hosts that the user plans to log into but are not already in the system-wide list of known host keys. Usually when connecting to a host for the first time, ssh(1) adds the remote host's public key to the user's known_hosts file, but this behavior can be tuned.

The format is one public key or certificate per unbroken line. At the beginning of the line is either the host name or a hash representing the host name, followed by the key type, the base64-encoded key itself, and an optional comment at the end of the line. These can be preceded by an optional marker to indicate a certificate authority, if an SSH certificate is used instead of an SSH key. The fields are separated by spaces. It is possible to use a comma-separated list of hosts in the host name field if a host has multiple names or if the same key is used on multiple machines in a server pool. Here are two examples of entries for hosts with basic host names:

host1.example.org ssh-rsa AAAA...njvPw==
host2.example.org ssh-rsa AAAAB3Nz...cTqGvaDhgtAhw==

Non-standard ports can be indicated by enclosing the host name with square brackets and following with a colon and the port number. Here are three examples referring to hosts listening for SSH on non-standard ports:

[host1.example.org]:2222 ssh-rsa AAAAB3Nz...AKy2R2OE=
[host2.example.org]:4922 ssh-rsa AAAAB4mV...1d6j=
[host3.example.org]:2022,[]:2022 ssh-rsa AAAAB...fgTHaojQ==

Host name patterns can be created using "*" and "?" as wildcards and "!" to indicate negation.

Up to one optional marker per line is allowed. If present it must be either @cert-authority or @revoked. The former shows that the key is a certificate authority key, the latter flags the key as revoked and not acceptable for use.

See sshd(8) for further details on the format of this file and ssh-keygen(1) for managing the keys.

Server-Side Client Files edit

These client files reside on the server. By default they are kept in the user's directory. However, the server can be configured to look for them in other locations if needed.

~/.ssh/authorized_keys edit

authorized_keys is a one-key-per-line register of public ECDSA, RSA, and Ed25519 keys that can be used to log into this account. The file's contents are not highly sensitive, but the recommended permissions are read/write for the user and not accessible by others. As always, the whole key including options and comments must be on a single, unbroken line.

ssh-rsa AAAAB3NzaC1yc2EAAA...41Ev521Ei2hvz7S2QNr1zAiVaOFy5Lwc8Lo+Jk=

Lines starting with a hash (#) are ignored and can be used as comments. Whitespace separates the key's fields, which are in sequence: an optional list of login options, the key type (such as ssh-ed25519 or ecdsa-sha2-nistp256), the key itself encoded as base64, and an optional comment.

If a key is followed by an annotation, the comment does not need to be wrapped in quotes. It has no effect on what the key does or how it works. Here is an annotated key, the comment having been generated with the -C option of ssh-keygen(1):

ssh-rsa AAAAB3NzaC1yc2EAAA...zAiVaOFy5Lwc8Lo+Jk=  Fred @ Project FOOBAR

Keys can be preceded by a comma-separated list of options to affect what happens upon successful login. The first key below forces the session to launch tinyfugue automatically, the second forcibly sets the PATH environment variable:

command="/usr/bin/tinyfugue" ssh-rsa AAAAB3NzaC1yc2EAAA...OFy5Lwc8Lo+Jk=
environment="PATH=/bin:/usr/bin/:/opt/gtm/bin" ssh-rsa AAAAB3N...4Y2t1j=

The format of authorized_keys is described in the sshd(8) manual page. Old keys should be deleted from the file when no longer needed. The server can specify multiple locations for authorized_keys. See the next section, Server-Side Client Key Login Options, for details.

~/.ssh/authorized_principals edit

By default this file does not exist. If it is specified in sshd_config(5), it contains a list of names which can be used in place of the username when authorizing a certificate. This option is useful for role accounts, disjoint account namespaces and "user@realm"-style naming policies in certificates. Principals can also be specified in authorized_keys.

~/.ssh/environment edit

If the server is configured to accept user-supplied, automatic changes to environment variables as part of the login process, then these changes can be set in this file.

If the server, the environment file, and an authorized key all try to set the same variable, the environment file takes precedence over what a key might contain. Either one will override any environment variables that might have been passed by ssh(1) using SendEnv.

Authentication keys stored in authorized_keys can also be used to set variables. See also the AcceptEnv and PermitUserEnvironment directives in the manual page for sshd_config(5).

~/.ssh/rc edit

This is a script which is executed by sh(1) just before the user's shell or command is started. It is not run if ForceCommand is used. The script is run after reading the environment variables. The corresponding global file, /etc/ssh/sshrc, is not run if the user's rc script exists.

Local Account Public / Private Key Pairs edit

People might have a variety of ECDSA, Ed25519, and RSA keys stored in the file system. Since version 8.2, two new key types, ECDSA-SK and Ed25519-SK, along with corresponding certificate types, are available for keys tied to FIDO/U2F tokens. Though individual accounts can maintain their own list of keys or certificates for authentication or to verify the identity of remote hosts in any directory, the most common location is the ~/.ssh/ directory. The naming convention for keys is only a convention but recommended to follow anyway. Public keys usually have the same name as the private key, but with .pub appended to the name. Trouble can arise if the names of the public and private keys do not match. If there is more than one key pair, the -f option for ssh-keygen(1) can be used when generating keys to produce a useful name, along with the -C option, which embeds a relevant comment inside the key pair.

People, programs, and scripts can authenticate using a private key stored on the system, or even a private key fetched from a smartcard, if the corresponding public key is stored in authorized_keys on the remote system. The authorized_keys file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. The private keys, however, are very sensitive and should not be readable or even visible to other accounts. They should never leave the client and should certainly never be put on the server. See the chapter on Public Key Authentication for more discussion and examples.

The keys can be preceded by a comma-separated list of options. The whole key must be on a single, unbroken line. No spaces are permitted, except within double quotes. Any text after the key itself is considered a comment. The authorized_keys file is a one-key-per line register of public RSA, Ed25519, ECDSA, Ed25519-SK, and ECDSA-SK keys that can be used to log into a particular account. See the section above on the authorized_keys file for more discussion.

DSA keys are deprecated: they are no longer considered safe and should be replaced with better keys.

Per-account Host-based Authentication Configuration is also possible using the ~/.shosts, ~/.rhosts, ~/.ssh/environment, and ~/.ssh/rc files.

Public Keys: ~/.ssh/id_ecdsa.pub ~/.ssh/id_ed25519.pub ~/.ssh/id_rsa.pub ~/.ssh/id_ecdsa-sk.pub ~/.ssh/id_ed25519-sk.pub ~/.ssh/id_ecdsa-sk_rk.pub ~/.ssh/id_ed25519-sk_rk.pub edit

These are only the default names for the public keys. Again, it can be a good idea to give more relevant names to keys. The *-sk keys are those bound to a hardware security token and the *-sk_rk keys are those generated from resident keys stored within a hardware security token.

Public keys are mainly used on the remote server for key-based authentication. Public keys are not sensitive and, unlike the private keys, are allowed to be readable by anyone, though they don't need to be. A public key, minus comments and restriction options, can be regenerated from a private key if lost. So while it can be useful to keep backups of the public key, it is not essential, unlike for private keys.

Private Keys: ~/.ssh/id_ecdsa ~/.ssh/id_ed25519 ~/.ssh/id_rsa ~/.ssh/id_ecdsa-sk ~/.ssh/id_ed25519-sk ~/.ssh/id_ecdsa-sk_rk ~/.ssh/id_ed25519-sk_rk edit

These are only the default names for private keys. Private keys are always considered sensitive data and should be readable only by the user and not accessible by others. In other words, they use mode 0600. The directory they are in should also have mode 0700 or 0500. If a private key file is accessible by others, ssh(1) will ignore it.

It is possible to specify a passphrase when generating the key; it will be used to encrypt the sensitive part of this file using AES128. Until version 5.3, the cipher 3DES was used to encrypt the private key. Old keys encrypted with 3DES that are given new passphrases will be re-encrypted with AES128 when they are modified.

Private keys stored in hardware tokens as resident keys can be extracted and automatically used to generate their corresponding public key using ssh-keygen(1) with the -K option. Such keys will default to being named id_ecdsa-sk_rk or id_ed25519-sk_rk, depending on the key type, though the file names can be changed after extraction. A passphrase can be assigned to the private key upon extraction from the token to a file.

While public keys can be generated from private keys, new private keys cannot be regenerated from public keys if the private keys are lost. Nor can a new passphrase be set if the current one is forgotten. Gone is gone: if a private key is lost, or its passphrase forgotten, then a whole new key pair must be generated and deployed.

Legacy Files edit

These files might be encountered on very old or out of date systems but not on up-to-date ones.

~/.shosts edit

~/.rhosts edit

.rhosts is a legacy from rsh containing a local list of trusted host-user pairs that are allowed to log in. Login requests matching an entry were granted access.

See also the global list of trusted host-user pairs, /etc/hosts.equiv

An .rhosts file can be used as part of host-based authentication. Otherwise it is recommended not to use .rhosts for authentication, as there are many ways to misconfigure it.

Legacy DSA Keys ~/.ssh/id_dsa ~/.ssh/id_dsa.pub edit

Deprecated DSA keys might be found named id_dsa and id_dsa.pub, but regardless of the name any usage should be tracked down. Support for DSA on both the server and the client was disabled by default in OpenSSH 7.0. If DSA keys are found, the pair should be removed and replaced with a better type of key.

Legacy SSH1 Protocol Keys ~/.ssh/identity ~/.ssh/identity.pub edit

The files identity and identity.pub were for SSH protocol version 1 and are thus deprecated. If found, they should be investigated as to what, if anything, uses them and why. Once any remaining usage is resolved, they should be removed and replaced with newer key types.

Mapping Client Options And Configuration Directives edit

Many run-time options for the SSH client have corresponding configuration directives and vice versa. The following is a quick overview. It is not a substitute for getting familiar with the relevant manual pages, ssh(1) and ssh_config(5), which are the authoritative, up-to-date resources on the matter.

Lookup Table of OpenSSH Client Options Versus Configuration Directives
Directive Option Description
AddressFamily -4 / -6 Limit connections to IPv4 or IPv6 only.
ForwardAgent -A / -a Forward or block forwarding from the authentication agent.
BindInterface -B Bind the outgoing connection to this network interface.
BindAddress -b Bind the outgoing connection to this network address.
Compression -C Specify whether to compress the data using gzip(1).
Ciphers -c Specify which cipher to use.
DynamicForward -D Designate a local port to be forwarded, say for SOCKS5.
EscapeChar -e Specify an escape character for PTY sessions.
ForkAfterAuthentication -f Drop client to background right before command execution.
GatewayPorts -g Whether other hosts are allowed to connect to local forwarded ports.
PKCS11Provider -I Specify the path to the shared PKCS#11 library.
IdentityFile -i Specify a particular certificate or private key to use for authentication.
ProxyJump -J Connect to the destination via this host or hosts first.
GSSAPIAuthentication -K / -k Enable or disable Generic Security Services Application Program Interface (GSSAPI) authentication.
LocalForward -L Specify which local port or socket to forward to the specified remote system.
User -l Designate which account on the remote system to try.
ControlMaster -M Allow multiplexing of SSH sessions over a single TCP connection.
MACs -m Designate which message authentication code (MAC) algorithms to try.
SessionType -N / -s Invoke a designated subsystem or even prevent any command execution at all.
StdinNull -n Prevent reading from stdin.
Tag -P Tag for use within Match.
Port -p Connect to this port on the remote system.
LogLevel -q / -v Adjust the verbosity of logging messages from the client.
RemoteForward -R Specify which remote port or socket to forward to the specified local system.
ControlPath -S Designate the control socket for multiplexing over this connection.
RequestTTY -T / -t Prohibit or request a pseudo-TTY for the session.
ForwardX11 -X / -x Enable or prohibit X11 forwarding.

As of version 8.7 the -f, -N, and -n options also have corresponding client configuration directives in ssh_config(5).

Server-Side Client Key Login Options edit

The login options available for use in the local user authorized keys file might be overridden or blocked by the server's own settings. However, within that constraint, the following options can be used.

cert-authority
Specifies that the listed key is a certification authority (CA) trusted to validate signed certificates for user authentication. Certificates may encode access restrictions similar to key options. If both certificate restrictions and key restrictions are present, then the most restrictive union of the two is applied.

command="command"
Specifies a program and its options to be executed when the key is used for authentication. This is a good way of forcing a program to restrict a key to a single, specific operation such as a remote backup. However, TCP and X11 forwarding are still allowed unless explicitly disabled elsewhere.

The program is run on a PTY if the client requests it, otherwise the default is to run without a TTY. The default, running without a TTY, provides an 8-bit clean channel. If the default has been changed, specify no-pty to get an 8-bit clean channel. If no programs are allowed, then use an empty string "" to prevent anything from running.

no-pty,command="" ssh-rsa AAAAB3NzaC1yc2EAAA...OFy5Lwc8Lo+Jk=

If only one program is allowed, with specific options, then it can be spelled out explicitly.

restrict,command="/usr/bin/svnserve -t --tunnel-user=fred" ssh-ed25519 AAAAC3NzaC1lZDI1NT...skSUlrRPoLyUq

Quotes provided in the program's options must be escaped using a backslash (\).

command="sh -c \"mysqldump db1 -u fred1 -p\"" ssh-rsa AAAAB3NzaC1yc...Lwc8OFy5Lo+kU=

This option applies to execution of the shell, another program, or a subsystem. Thus any other programs specified by the user are ignored when command is present. However, the program originally specified by the client remains available as the environment variable SSH_ORIGINAL_COMMAND. That can be used by a script in a multiple-choice case statement, for example, to allow the account to select from a limited range of actions.
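A minimal sketch of such a case statement; the action names and messages are hypothetical, and in real use the function body would exec the actual programs from the script named in the key's command option:

```shell
# Sketch: choose an action based on SSH_ORIGINAL_COMMAND.
# The action names and responses are examples only; a real wrapper
# would exec the corresponding programs instead of echoing.
dispatch() {
    case "$1" in
        backup) echo "would run the backup" ;;
        status) echo "would report status" ;;
        *)      echo "denied" ;;
    esac
}

# When invoked over SSH, SSH_ORIGINAL_COMMAND holds what the client asked for.
dispatch "${SSH_ORIGINAL_COMMAND:-status}"
```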

environment="NAME=value"
Sets the value of an environment variable when this key is used to log in. It overrides default values of the variable, if they exist. This option can be repeated to set multiple variables up to 1024 discrete names. First match wins in the case of repetition. This option is only allowed if the PermitUserEnvironment option is set in the SSH server's configuration. The default is that it is disabled. This option used to be disabled automatically when UseLogin was enabled, but UseLogin has been deprecated.

expiry-time="timespec"
Sets a date or date-time, in either YYYYMMDD or YYYYMMDDHHMM[SS] format, after which the key will no longer be allowed to authenticate. Otherwise the key is considered valid indefinitely. The system time zone is used.
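For example, with the key material abbreviated, the following hypothetical entry stops authenticating after the last day of 2025:

expiry-time="20251231" ssh-ed25519 AAAAC3NzaC1lZDI1...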

from="pattern-list"
Either the canonical name of the remote host or its IP address is required in addition to the key. Addresses and host names can be listed using a comma-separated list of patterns (see PATTERNS in ssh_config(5) for more information) or using CIDR address/masklen notation.

no-agent-forwarding / agent-forwarding

This option forbids the authentication agent from forwarding the key when it is used for authentication. Alternately, it allows agent forwarding even if it was otherwise previously disabled by the restrict option.

no-port-forwarding / port-forwarding

Forbids TCP forwarding and any port forward requests by the client will return an error when this key is used for authentication. Alternately, override the restrict option and allow port forwarding. See also permitopen.

no-pty / pty

TTY allocation is prohibited and any request to allocate a PTY will fail. Alternately, TTY allocation is permitted, even if previously disabled by the restrict option.

no-touch-required
FIDO keys which have been created with the -O no-touch-required option can use this option, which skips the check for user presence.

no-user-rc / user-rc

Use the no-user-rc option in authorized_keys to disable execution of ~/.ssh/rc . Alternately, use user-rc to override the restrict option.

no-X11-forwarding / x11-forwarding

Prevent X11 forwarding when this key is used for authentication and requests to forward X11 will return an error. Alternately, override the restrict option and allow X11 forwarding.


permitlisten="[host:]port" / permitopen="host:port"
The permitlisten setting limits remote port forwarding (ssh -R) to only the specified port and, optionally, host. In contrast, permitopen limits local port forwarding (ssh -L) to only the specified host and port. IPv6 addresses can be specified with an alternative syntax: host/port. Multiple permitopen or permitlisten options may be used and must be separated by commas. No pattern matching is performed on the specified host names, they must be literal host names or IP addresses. Can be used in conjunction with agent-forwarding.
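For example, with the key abbreviated and the hosts and ports purely hypothetical, the following entry permits local forwarding only to two specific services:

permitopen="",permitopen="" ssh-rsa AAAAB3NzaC1yc2EAAA...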


principals

Specify a list of names that may be used in place of the username when authorizing a certificate trusted via the TrustedUserCAKeys option described in sshd_config(5).


restrict

Disable all options, such as TTY allocation, port forwarding, agent forwarding, user-rc, and X11 forwarding, all at once. Specific options can then be explicitly allowed on an individual basis.


tunnel

Select a specific tun(4) device on the server. Otherwise, when a tunnel device is requested without this option, the next available device will be used.


verify-required

Require user verification, such as with a PIN, for FIDO keys.

Managing Keys

When working with keys there are some basic, hopefully common sense, actions that should take place to prevent problems. The two most beneficial approaches are to use sensible names for the key files and to embed comments. The -f option for ssh-keygen(1) allows a custom file name to be set. The -C option allows a comment to be embedded in both the public and private keys. Since the comment is stored inside the private key, it is carried over automatically if a replacement public key is ever regenerated using the -y option.
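For example, a key pair with a descriptive file name and an embedded comment might be created like this. The file name and comment are hypothetical; -N '' sets an empty passphrase only so the example runs non-interactively, and a strong passphrase should normally be used instead:

```shell
# Generate an Ed25519 key pair with a descriptive name and an embedded comment.
# -N '' gives an empty passphrase for this non-interactive demo only.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/db01_backup_ed25519 -C "backup job, db01"

# If the public key file is ever lost, regenerate it from the private key;
# on recent OpenSSH releases the embedded comment is carried over as well.
ssh-keygen -y -f ~/.ssh/db01_backup_ed25519 > ~/.ssh/db01_backup_ed25519.pub
```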

Other than that, there are three main rules of thumb for managing keys:

  • Keys should use strong passphrases. If autonomous logins are required, then the keys should first be loaded into an agent and used from there. See ssh-add(1) to get started; it uses ssh-agent(1), which many systems have installed and some have running by default.
  • Keys should always be stored in protected locations, even on the client side. This is especially important for private keys. The private keys should not have read permissions for any user or group other than their owner. They should also be kept in a directory that is not accessible by anyone other than the owner in order to limit exposure.
  • Old and unused keys should be removed from the server. In particular, keys without a known, valid purpose should be removed and not allowed to accumulate. Using the comment field in the public key for annotation can help eliminate some of the confusion as to the purpose and owner once some time has passed. Along those lines, keys should be rotated at intervals. Rotation means generating new key pairs and removing the old ones. This gives a chance to remove old and unused keys. It is also an opportunity to review access needs, whether access is required and if so at what level.

Following the principle of least privilege can limit the chance for accidents or abuse. If a key is only needed to run a specific application or script, then its login options should be limited to just what is needed. See sshd(8) for the "AUTHORIZED_KEYS FILE FORMAT" section on key login options. For root level access, it is important to remember to configure /etc/sudoers or /etc/doas.conf appropriately. Access there can be granted to a specific application, and even limited to specific options of that application.
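As a sketch of that principle, an authorized_keys entry can tie a key to a single script. The path, key material, and comment below are hypothetical placeholders:

```
restrict,command="/usr/local/bin/nightly-backup.sh" ssh-ed25519 AAAA...key-material... backup@db01
```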

One major drawback to keys is that they never expire; in principle, they remain valid indefinitely. In contrast, certificates can be assigned a validity interval with an end date, after which they can no longer be used.
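For comparison, a user certificate with a one-year validity interval might be issued like this. The file names and the identity are hypothetical:

```shell
# Sign fred's public key with a CA key; the certificate is valid for 52 weeks.
ssh-keygen -s ca_key -I "fred@example.com" -n fred -V +52w user_key.pub

# Inspect the resulting certificate, including its validity interval.
ssh-keygen -L -f user_key-cert.pub
```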

The Server

The OpenSSH Server, sshd(8), listens for connections from clients and starts a new process or two for each new incoming connection to handle key exchange, encryption, authentication, program execution, and data exchange. In the case of multiplexing, some processes are reused. It can run standalone and wait in the background, be run in the foreground, or it can be loaded on demand by any Internet services daemon.

Since version 8.2, the listening process title shown in ps(1) also shows the number of connections pending authentication.

$ ps -p $(pgrep -u root sshd) -o pid,user,args 
44476 root     sshd: /usr/sbin/sshd [listener] 0 of 10-100 startups (sshd)

Note that this is the number pending authentication, not the number of those which have already been authenticated. Each authenticated connection has its own separate handler process owned by the account which has authenticated.

sshd

sshd(8) is the secure shell daemon and it listens for incoming connections. The standard port for ssh(1) as specified by IANA is 22 [29]. If sshd(8) does not listen on a privileged port, it does not have to be launched by root. However, there are few, if any, occasions where a non-standard port should be considered. sshd(8) can be bound to multiple addresses or just certain ones. Multiple instances of sshd(8), each with a different configuration, can be run on the same machine, something which may be useful on multi-homed machines. An absolute path must be given to launch sshd(8), e.g. /usr/sbin/sshd

Configuration data is parsed first from the arguments and options passed on the command line, and then from the system-wide configuration file.

sshd(8) - The SSH daemon that permits you to log in.
sftp-server(8) - SFTP server subsystem, started automatically by sshd(8) when needed.
ssh-keysign(8) - Helper program for hostbased authentication.
sshd_config(5) - The server configuration file.

The sshd(8) daemon can be made to parse the configuration file, test it for validity, and then report the effective configuration settings. This is done by running the extended test mode (-T). The extended test will print out the actual server settings. It can also report modifications to the settings through use of the Match directive when combined with the connection specification (-C) parameter. The options for -C are user, host, and addr. This is where host and addr refer to the host running sshd(8) and the address from which the connection is being made, respectively.

The following will print out the configurations that will be applied if the user 'fred' tries to log in to the host from the address

$ /usr/sbin/sshd -TC user=fred,,addr=

The output is long, so it might be sensible to pipe it through sort(1) and a pager like less(1). See the section on Debugging a Server Configuration for more options.

By default, login is allowed for all groups. However, if either AllowGroups or AllowUsers is specified, then all users or groups not listed are prohibited from logging in. The allow/deny directives are processed in the following order:

  1. DenyUsers,
  2. AllowUsers,
  3. DenyGroups, and finally,
  4. AllowGroups.

The first pattern matched takes effect, so if AllowUsers exists it will completely override AllowGroups regardless of the order in which they appear in the configuration file. So for the most flexibility, it is recommended to use AllowGroups. In contrast, DenyUsers and DenyGroups do not interfere with each other and may be used together. List group names or patterns of group names, separated by spaces. If specified, login is allowed or denied only for users who are members of a group that matches a group or pattern on the list. Only group or user names are valid; numerical group or user IDs are not recognized.
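A hypothetical sshd_config fragment using these directives, with placeholder group and account names:

```
# Only members of these groups may log in; everyone else is denied.
AllowGroups sshusers admins

# DenyUsers may be combined with AllowGroups to shut out one account.
DenyUsers fred
```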

sshd under inetd / xinetd

An Internet services daemon is a server to launch other servers on demand. xinetd(8) and inetd(8) are two variants, either of which can be used to specify additional parameters and constraints, including running the launched service as a particular user and group. By having a single daemon active, which invokes others as needed, demands on the system can be reduced. Launching sshd(8) this way means inetd(8) waits for an incoming request, launches sshd(8) and then when the SSH session is over, closes sshd(8).

 Internet --> Filter --> tcpwrappers --> (x)inetd --> sshd
             (firewall)  (aka tcpd)

Either can be used for additional logging, such as recording successful or unsuccessful logins, and for access restrictions, including time of day, CPU priority, and number of connections. There are many more possibilities. See the manual pages for xinetd.conf(5) or inetd.conf(5) for a full overview of configuration options.

inetd(8) was tcpd-aware and could make use of tcpd's tcpwrappers to further control access or logging. So was sshd(8) by itself, up through 6.6. See the manual page for hosts_access(5) for information about how to use the configuration files hosts.allow and hosts.deny. Since 6.7, OpenSSH itself no longer supports tcpwrappers because current packet filters made it mostly redundant.

The two main disadvantages of using inetd(8) or xinetd(8) are that there can be a slight increase in the delay during the start of the connection and that sshd(8) must be configured to allow launching from the services daemon. The delay only affects the initial connection and thus does not get in the way of actual operation. An Internet services daemon should not be used for stateless services like HTTP and HTTPS, where every action is essentially a new connection. Again, see the manual page for xinetd.conf(5) or inetd.conf(5) for more details.

Example from xinetd.conf(5)

service ssh
{
	socket_type     = stream
	protocol        = tcp
	wait            = no
	user            = root
	server          = /usr/sbin/sshd
	server_args     = -i
	per_source      = UNLIMITED
	log_on_failure  = USERID HOST
	# instances       = 10
	# nice            = 10
	# bind            =
	# only_from       =
	# access_times    = 08:00-15:25
	# no_access       =
	# no_access       +=
	# banner          = /etc/banner.inetd.connection.txt
	# banner_success  = /etc/banner.inetd.welcome.txt
	# banner_fail     = /etc/banner.inetd.takeahike.txt
}

Example from inetd.conf(5)

ssh    stream  tcp     nowait  root /usr/sbin/sshd -i
ssh    stream  tcp6    nowait  root /usr/sbin/sshd -i

xinetd(8) has several advantages over inetd(8) in capabilities, but use cases where either would be useful are rare.

The SFTP Server Subsystem

The SFTP subsystem first appeared in OpenBSD 2.8 / OpenSSH 2.3[30]. It is called by sshd(8) as needed using the Subsystem configuration directive and is not intended to operate standalone. There are two forms of the subsystem. One is the regular sftp-server(8). The other is an in-process SFTP server, which requires no support files when used with the ChrootDirectory directive. The Subsystem configuration directive can be used to pass options:

-d specifies an alternate starting directory for users; the default is the user's home directory. (First in 6.2)

Subsystem sftp internal-sftp -d /var/www

-e causes logging information to be sent to stderr instead of syslog(3).

Subsystem sftp internal-sftp -e

-f specifies the syslog(3) facility code that is used when logging messages from sftp-server(8). The possible values are: DAEMON, USER, AUTH, LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7. The default is AUTH.

Subsystem    sftp    /usr/libexec/sftp-server -f LOCAL0

-l Specifies which messages will be logged by sftp-server(8). The possible values are: QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG, DEBUG1, DEBUG2, and DEBUG3. INFO and VERBOSE log transactions that sftp-server performs on behalf of the client. DEBUG and DEBUG1 are equivalent while DEBUG2 and DEBUG3 each specify higher levels of debugging output. Log levels DEBUG through DEBUG3 violate user privacy and should not be used for regular operation. The default log level is ERROR. The actual path will vary depending on distro or operating system.

Subsystem    sftp    /usr/libexec/sftp-server -l VERBOSE

-p and -P specify whitelisted and blacklisted protocol requests, respectively. The comma-separated lists are permitted or prohibited accordingly; the blacklist is applied first if both are used. -Q provides a list of protocol features supported by the server. All three are available as of version 6.5. The actual path will vary depending on distro or operating system.

In version 6.5, requests are the only protocol features which can be queried.

$ /usr/libexec/sftp-server -Q requests

-R places the SFTP subsystem in read-only mode. Attempts to change the filesystem, including opening files for writing, will fail.

-u overrides the user's default umask and explicitly sets the umask(2) to be used for creating files and directories.

Subsystem sftp internal-sftp -u 0002

That sets the umask for the SFTP subsystem in OpenSSH 5.4 and later.

See the manual page for syslog.conf(5) for more information about log levels and log facilities. sshd(8) must be able to access /dev/log for logging to work. Using the sftp-server(8) subsystem in conjunction with the main SSH server's ChrootDirectory option therefore requires that syslogd(8) establish a logging node inside the chrooted directory.

Environment Variables

ssh(1) and sshd(8) set some environment variables automatically when logging in. Other variables can be explicitly defined by users in the ~/.ssh/environment file if the file exists and if the user is allowed to change the environment. Variables can also be set on a key by key basis in the authorized_keys file, again only if the user is allowed to change the environment.

In ~/.ssh/environment, the format NAME=value is used to set the variable. In ~/.ssh/authorized_keys and /etc/ssh/authorized_keys the format is environment="NAME=value". For more information, see the PermitUserEnvironment and AcceptEnv configuration directives in sshd_config(5) and the SendEnv directive in ssh_config(5).
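For instance, a hypothetical authorized_keys entry setting a variable for one key might look like the following; the key material and comment are placeholders, and the entry is only honored if PermitUserEnvironment allows it:

```
environment="LANG=en_US.UTF-8" ssh-ed25519 AAAA...key-material... fred@example.com
```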

The following variables can be set by ssh(1), depending on the situation.

DISPLAY If X11 is tunneled, this is set so that the DISPLAY variable indicates the location of the X11 server. When it is automatically set by ssh(1) it points to a value in the form hostname:n, where hostname indicates the host where the shell runs, and n is an integer greater than or equal to one. ssh(1) uses this special value to forward X11 connections over the secure channel. The user should normally not set DISPLAY explicitly, as that will render the X11 connection insecure and will require the user to manually copy any required authorization cookies.

HOME The path of the user's home directory.

LOGNAME Synonym for USER. This is set for compatibility with systems that use this variable.

MAIL The path of the user's mailbox.

PATH The default PATH, as specified when compiling ssh(1).

SSH_ASKPASS If DISPLAY and SSH_ASKPASS are both set, and the SSH session does not have an associated terminal or pseudo-terminal, the program specified by SSH_ASKPASS will execute and open an X11 window to read the passphrase when one is needed. This is particularly useful when calling ssh(1) from an xsession or related script. On some machines it may be necessary to redirect the input from /dev/null to make this work.

SSH_AUTH_SOCK The path of the UNIX-domain socket on the client machine which ssh(1) uses to communicate with an SSH key agent.

SSH_CLIENT Identifies the client end of the connection. It contains three space-separated values: the client IP address, client port number and the server port number.

SSH_CONNECTION Identifies the client and server ends of the connection. The variable contains four space-separated values: client IP address, client port number, server IP address, and server port number.

SSH_ORIGINAL_COMMAND If the ForceCommand directive was used, or command="..." in a key, then this variable contains the original command including its original options. It can be used to extract the original arguments.

SSH_TTY This is set to the name of the TTY (path to the device) associated with the current shell or command. If the current session has no TTY, this variable is not set.

SSH_USER_AUTH This will contain the name of a temporary file containing the authentication methods used for this particular session if ExposeAuthInfo is set in sshd_config(5).

TZ This variable is set to indicate the present time zone if it was set when the daemon was started. The SSH daemon passes this value on to new connections.

USER Set to the name of the user logging in.

Pattern Matching in OpenSSH Configuration

A pattern consists of zero or more non-whitespace characters. An asterisk (*) matches zero or more characters in a row, and a question mark (?) matches exactly one character. For example, to specify a set of declarations that apply to any host in the "" set of domains in ssh_config(5), the following pattern could be used:

Host *

The following pattern would match any host in the 192.168.0.0 to 192.168.0.9 range:

Host 192.168.0.?

A pattern-list is a list of patterns separated by whitespace. The following list of patterns matches hosts in both the "" and "" domains.

Host * *

Individual patterns by themselves or as part of a pattern-lists may be negated by preceding them with an exclamation mark (!). The following will match any host from except for gamma.

Host * !

Pattern lists in ssh_config(5) do not use commas. Pattern lists in keys need commas.

For example, to allow a key to be used from anywhere within an organisation except from the dialup pool, the following entry in authorized_keys could be used:
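A hypothetical entry of that kind, using the placeholder domain example.com and truncated key material, might look like:

```
from="!*.dialup.example.com,*.example.com" ssh-ed25519 AAAA...key-material... fred@example.com
```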


See also glob(7)

Utilities

ssh-agent(1) - An authentication agent that can store private keys.
ssh-add(1) - A tool which adds or removes keys to or from the above agent.
ssh-keygen(1) - A key generation tool.
ssh-keyscan(1) - A utility for gathering public host keys from a number of hosts.
ssh-copy-id(1) - Install a public key in a remote machine's authorized_keys register.
ssh-vulnkey(1) - Check a key against a blacklist of compromised keys.

ssh-agent

ssh-agent(1) is a tool to hold private keys in memory for re-use during a session. Usually it is started at the beginning of a session and subsequent windows or programs run as clients to the agent. The environment variable SSH_AUTH_SOCK points applications to the socket used to communicate with the agent.

ssh-add

ssh-add(1) is a tool to load key identities into an agent for re-use. It can also be used to remove identities from the agent. The agent holds the private keys used for authentication.
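A typical session using the agent and ssh-add(1) together; the key path is hypothetical:

```shell
# Start an agent for this session; eval sets SSH_AUTH_SOCK and SSH_AGENT_PID.
eval "$(ssh-agent -s)"

# Load a private key into the agent (prompts for its passphrase once),
# then list the identities the agent now holds.
ssh-add ~/.ssh/db01_backup_ed25519
ssh-add -l

# Shut the agent down when the session ends.
ssh-agent -k
```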

ssh-keyscan

ssh-keyscan(1) has been part of the OpenSSH suite since OpenSSH version 2.5.1 and is used to retrieve public keys. Keys retrieved using ssh-keyscan(1), or any other method, must be verified by checking the key fingerprint to ensure the authenticity of the key and reduce the possibility of a man-in-the-middle attack. The default is to request an ECDSA key using SSH protocol 2. David Mazieres wrote the initial version of ssh-keyscan(1) and Wayne Davison added support for SSH protocol version 2.
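For example, the following fetches a host's Ed25519 key and prints its fingerprint for out-of-band comparison before the key is trusted. The host name is a placeholder:

```shell
# Fetch the server's Ed25519 host key.
ssh-keyscan -t ed25519 server.example.com > server_key.pub

# Show the key's fingerprint so it can be compared out of band
# before the key is appended to ~/.ssh/known_hosts.
ssh-keygen -l -f server_key.pub
```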

ssh-keygen

ssh-keygen(1) is for generating key pairs or certificates for use in authentication, updating and managing keys, and verifying key fingerprints. It works with SSH keys and can do the following activities:

  • generate new key pairs, either ECDSA, Ed25519, RSA, ECDSA-SK or Ed25519-SK.
  • remove keys from known hosts
  • regenerate a public key from a private key
  • change the passphrase of a private key
  • change the comment text of a private key
  • show the fingerprint of a specific public key
  • show ASCII art fingerprint of a specific public key
  • load or read a key to or from a smartcard, if the reader is available

If the legacy protocol, SSH1, is used, then ssh-keygen(1) can only generate RSA keys. However, SSH1 is long since deprecated and the systems should be re-tooled if found in use. Also, although DSA keys can be generated, they are deprecated and should be replaced if found.

One important use for key fingerprints is when connecting to a machine for the first time. A fingerprint is a hash or digest of the public key. Fingerprints can be transferred out of band and loaded into either the ~/.ssh/known_hosts or /etc/ssh/ssh_known_hosts files in advance of the first connection. The verification data for the key should be sent out of band. It can be sent ahead of time by post, fax, SMS, or a phone call, or otherwise communicated in some way such that you can be sure it is authentic and unchanged.

$ ssh -l fred
The authenticity of host ' (' can't be established.
RSA key fingerprint is SHA256:DnCHntWa4jeadiUWLUPGg9FDTAopFPR0c5TgjU/iXfw.
Are you sure you want to continue connecting (yes/no)?

If you see that message and the key's fingerprint matches the one you were given in advance, then the connection is probably good. If you see that and the key's fingerprint is different than what you were given in advance, then stop and disconnect and get on the phone or VoIP to work out the mistake. Once the SSH client has accepted the key from the server, it is saved in known_hosts.

$ ssh -l fred
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /home/fred/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/fred/.ssh/known_hosts:1
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

If you start to connect to a known host and you get an error like the one above, then either the first connection was to an impostor or the current connection is to an impostor, or something very foolish was done to the machine. Regardless, disconnect and don't try to log in. Contact the system administrator out of band to find out what is going on.[31] It is possible that the server was reinstalled, either the whole operating system or just the OpenSSH server, without saving the old keys. That would result in new keys being generated and explain their presence. Either way, check with the system administrator before connecting to be sure.

Hashed host names and addresses can be looked up in known_hosts using -F, or deleted from it using -R.

$ ssh-keygen -F -f ~/.ssh/known_hosts
# Host found: line 7 type RSA
|1|slYCk3msDPyGQ8l0lq82IbUTzBU=|KN7HPqVnJHOFX5LFmTXS6skjK4o= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA3cqqA6fZtgexZ7+4wxoLN1+YDvPfBtt4/m+N/RI8o95CXqvqZMIQjuVarVKjwRwt9pTJIVzf6bwjcNkrUx9dQqZNpNBkcvBRdmd775opWCAfkHEueKxkNx3Kb1yitz0dUaFkRwfTsXAjh+NleBq2ofAfjowu/zzCnnbAKy2R2OE=

ssh-copy-id

ssh-copy-id is included in some distros to install a public key into a remote machine's authorized_keys file. It is a simple shell script, and the authorized_keys file should still be checked manually after first login to verify that everything went OK and that the key was copied as it should be.

ssh-vulnkey

ssh-vulnkey was included in some versions of some GNU/Linux distros to check a key against a blacklist of compromised keys. The blacklist was made necessary when a broken version of OpenSSL was distributed by some distros[32], resulting in bad keys that were easily predicted and compromised. Keys made while that broken version was in use that are found to have been compromised cannot be repaired and must be replaced. The problem has since been fixed and new keys should be all right.

Third-party Utilities

autossh - Automatically restart SSH sessions and tunnels
scanssh - a scanner for SSH hosts and some kinds of proxies
sshfs - a user-space file system client based on SFTP
sshfp - generates SSHFP DNS records from known_hosts files or ssh-keyscan(1)
keychain - re-use ssh-agent and/or gpg-agent between logins
rsync - synchronizes files and directories using delta encoding
gstm - a graphical front-end for managing SSH-tunneled port redirects
sslh - a protocol demultiplexer
sshguard - an intrusion detection system with packet filtering
ssh-audit - identifies the server's banner, key exchange, encryption, MAC, compression, compatibility, and other information.
webcat - can use websockets[33] for tunneling; it is otherwise very similar to netcat and curl.

scanssh

scanssh scans hosts and networks for running services[34]. It checks the version number of the server and displays the results in a list. It detects ssh, sftp and several kinds of SOCKS, HTTP, and telnet proxies.

Scan a small subnet for ssh servers:

$ sudo scanssh -n 22 -s ssh

Scan the same small network for SOCKS proxies:

$ sudo scanssh -s socks5,socks4

Variable scanning speeds can be set, as well as random sampling. Open proxy detection scans for open proxies on common ports.

Scan 1000 hosts randomly selected from through, at a rate of 200 per second:

$ sudo scanssh -r 200 -p random(1000)/

The hosts and networks to be scanned can be specified either as an IPv4 address or as a CIDR-like IP prefix with an IP address and network mask. Ports can be appended by adding a colon at the end of the address specification. The sequence of hosts scanned is random, but that can be modified by the following two parameters, random and split:

random(n[,seed])/ selects a sample of n random addresses from the range specified as targets for scanning. n is the number of addresses to randomly create in the given network and seed is an optional seed for the pseudo-random number generator. For example, it is possible to sample 10000 random IPv4 hosts from the Internet by specifying 'random(10000)/' as the address.

split(s,e)/ selects a specific segment of the address range for use. e specifies the number of segments in parallel and s is the segment number used by this particular scan. This can be used to scan from several hosts in parallel by scanning a different segment from each host.

-n Specifies the port numbers to scan. Ports are separated by commas. Each specified scanner is run for each port in this list. The default port is 22.

Scan for SSH servers on both port 22 and 2022:

$ sudo scanssh -s ssh -n 22,2022

sshfs

sshfs builds on the Filesystem in Userspace (FUSE) interface to allow non-privileged users to create a secure, reliable file system framework. It uses the SFTP subsystem to mount a directory from a remote server as a local directory, so that all user applications can interact with that directory and its contents as if they were local. As the name implies, this is done in user space and not in the kernel, as is usually required for file systems. FUSE has a stable API library and bindings to C, C++, and Java. In this case, it is specifically the SFTP client that is run over ssh(1) and then mounted as a file system.
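A typical mount and unmount, with hypothetical account, host, and mount point names:

```
$ mkdir -p ~/mnt
$ sshfs fred@server.example.com: ~/mnt
 ... files under ~/mnt are now the remote home directory ...
$ fusermount -u ~/mnt        # on Linux; use umount(8) on BSD and macOS
```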

See the Cookbook section on SFTP for more regarding sshfs.

sshfp

sshfp generates SSHFP DNS records using the public keys stored in a known_hosts file or provided by ssh-keyscan(1) as a means to use DNS to publish SSH key fingerprints. That in turn allows DNSSEC lookups to verify SSH keys before use. SSHFP resource records in DNS store fingerprints of SSH public host keys associated with host names. A record consists of an algorithm number, a fingerprint type, and the fingerprint of the public host key.

See RFC 4255 for details on SSHFP.

keychain

keychain is another manager for ssh-agent(1) to allow multiple shells and processes, including cron(8) jobs, to use the keys held by the agent. It is often integrated into desktop-specific tools like Apple Keychain on OS X or kdewallet for KDE.

rsync

rsync is a file transfer utility to transfer files between computers very efficiently. It can run on top of SSH or use its own protocol. SSH is the default.

See the Cookbook section on Automated Backup for examples on using rsync live or in scripts.

gstm (Gnome SSH Tunnel Manager)

gstm is a graphical front-end for managing SSH connections and especially port forwarding.

sslh

sslh is a protocol demultiplexer. It accepts connections on specified ports and forwards them based on the first packet sent by the client. It can be used to share a single port between SSH, SSL, HTTP, OpenVPN, tinc, and XMPP.

See also the section on Multiplexing for a discussion with examples.

sshguard

sshguard is an intrusion prevention system. It monitors logs to detect undesirable patterns of activities and triggers corresponding packet filter rules for increasing periods of time. It can be used with a few other services besides SSH.

ssh-audit

ssh-audit is a python script to gather information about SSH servers. It can identify banners used, key exchange, encryption, Message Authentication Code (MAC) algorithms, compression, compatibility settings, and several other security-related aspects.

Additional Third Party Utilities

The following are useful in working with OpenSSH but are outside the scope of this book. They are nevertheless worth mentioning:

netstat – Show network connections, routing tables, interface statistics, masquerade connections, and multicast memberships
nc or netcat – Netcat, the TCP/IP swiss army knife.
socat – SOcket CAT, a multipurpose relay similar to netcat.
nmap – Network exploration tool and security scanner.
tcpdump – Display network traffic real time.
telnet – Unencrypted interaction with another host.
pagsh – Creates a new credential cache sandbox and process authentication group (PAG).
nohup – Invoke a process that ignores HANGUP signals
sudo – Execute programs as another user
lftp – A handy interactive multi-protocol file transfer text-based client supporting SFTP.
curl – A multi-protocol file transfer text-based client supporting SCP and SFTP.
tmux – A terminal multiplexer.

Logging and Troubleshooting

Both the OpenSSH client and server offer a lot of choice as to where the logs are written and how much information is collected.

A prerequisite for logging is having an accurate system clock, kept using the Network Time Protocol (NTP) or an equivalent service which provides ongoing time synchronization with the rest of the world. The more accurate the time stamps in the logs are, the faster it is to coordinate forensics between machines, sites, or service providers. If you have to contact outside parties like a service provider, progress can usually only be made with very exact times.

Server Logs

By default sshd(8) sends logging information to the system logs using the log level INFO and the system log facility AUTH. So the place to look for log data from sshd(8) is in /var/log/auth.log. These defaults can be overridden using the SyslogFacility and LogLevel directives. Below is a typical server startup entry in the authorization log.

Mar 19 14:45:40 eee sshd[21157]: Server listening on port 22.
Mar 19 14:45:40 eee sshd[21157]: Server listening on :: port 22.

In most cases the default level of logging is sufficient, but during initial testing of new services or activities it is sometimes necessary to have more information. Debugging info usually goes to stderr. Starting with OpenSSH 7.6, Match blocks can set alternate log levels for specific conditions.
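For example, a hypothetical sshd_config override raising verbosity only for connections from one test network (192.0.2.0/24 is a documentation address range used here as a placeholder):

```
# The global LogLevel stays at its default; connections from this
# network alone are logged at DEBUG1.
Match Address 192.0.2.0/24
    LogLevel DEBUG1
```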

The log excerpt below shows the same basic server start up with increased detail. Contrast the log level DEBUG1 below with the default above:

debug1: sshd version OpenSSH_6.8, LibreSSL 2.1
debug1: private host key #0: ssh-rsa SHA256:X9e6YzNXMmr1O09LVoQLlCau2ej6TBUxi+Y590KVsds
debug1: private host key #1: ssh-dss SHA256:XcPAY4soIxU2IMtYmnErrVOjKEEvCc3l5hOctkbqeJ0
debug1: private host key #2: ecdsa-sha2-nistp256 SHA256:QIWi4La8svQSf5ZYow8wBHN4tF0jtRlkIaLCUQRlxRI
debug1: private host key #3: ssh-ed25519 SHA256:fRWrx5HwM7E5MRcMFTdH95KwaExLzAZqWlwULyIqkVM
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-d'
debug1: Bind to port 22 on
Server listening on port 22.
debug1: Bind to port 22 on ::.
Server listening on :: port 22.

And here is the same startup using the most verbose level, DEBUG3:

debug2: load_server_config: filename /etc/ssh/sshd_config
debug2: load_server_config: done config len = 217
debug2: parse_server_config: config /etc/ssh/sshd_config len 217
debug3: /etc/ssh/sshd_config:52 setting AuthorizedKeysFile .ssh/authorized_keys
debug3: /etc/ssh/sshd_config:86 setting UsePrivilegeSeparation sandbox          
debug3: /etc/ssh/sshd_config:104 setting Subsystem sftp internal-sftp 
debug1: sshd version OpenSSH_6.8, LibreSSL 2.1
debug1: private host key #0: ssh-rsa SHA256:X9e6YzNXMmr1O09LVoQLlCau2ej6TBUxi+Y590KVsds
debug1: private host key #1: ssh-dss SHA256:XcPAY4soIxU2IMtYmnErrVOjKEEvCc3l5hOctkbqeJ0
debug1: private host key #2: ecdsa-sha2-nistp256 SHA256:QIWi4La8svQSf5ZYow8wBHN4tF0jtRlkIaLCUQRlxRI
debug1: private host key #3: ssh-ed25519 SHA256:fRWrx5HwM7E5MRcMFTdH95KwaExLzAZqWlwULyIqkVM
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-ddd'
debug2: fd 3 setting O_NONBLOCK
debug1: Bind to port 22 on
Server listening on port 22.
debug2: fd 4 setting O_NONBLOCK
debug1: Bind to port 22 on ::.

Every failed login attempt is recorded, and once the value of the MaxAuthTries directive is exceeded the connection is broken. Below is a log excerpt showing how the default log looks after some failed attempts:

Mar 19 11:11:06 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:06 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:07 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:08 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:09 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:10 server sshd[54798]: Failed password for root from port 59928 ssh2
Mar 19 11:11:10 server sshd[54798]: error: maximum authentication attempts exceeded for root from port 59928 ssh2 [preauth]
Mar 19 11:11:10 server sshd[54798]: Disconnecting authenticating user root port 59928: Too many authentication failures [preauth]

It is not usually a good idea to allow root login, at least not with password authentication. Blocking password authentication for root simplifies log analysis greatly, and in particular it eliminates the time-consuming question of who is trying to get in and why. People who need full root-level access can gain it through su(1) for general activities, and specific tasks which need root-level access can be granted those privileges through custom-made entries for sudo(8) or doas(1). Note that in those cases only specific services and programs should be allowed, not the blanket access which is an all too common misconfiguration of sudo(8). Alternatively, a single-purpose key made using forced-commands-only could be used, since some argue that providing extra means of privilege escalation, such as su(1), sudo(8), or doas(1), is more dangerous than carefully providing remote root access through a key or certificate tied to a specific function.
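
A sketch of the corresponding sshd_config(5) settings; prohibit-password permits root only with key or certificate authentication, while forced-commands-only additionally restricts root to the command= entries in its authorized_keys file:

```
# sshd_config: no root passwords, ever
PermitRootLogin prohibit-password

# or, stricter: root may only run the forced commands in authorized_keys
#PermitRootLogin forced-commands-only
```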

Successful logins

By default, the server does not store much information about user transactions. That is a good thing. It is also a good thing to recognize when the system is operating as it should. So here is an example of a successful SSH login:

Mar 14 19:50:59 server sshd[18884]: Accepted password for fred from port 6647 ssh2

And here is an example using a key for authentication. It shows the key fingerprint as a SHA256 hash in base64.

Mar 14 19:52:04 server sshd[5197]: Accepted publickey for fred from port 59915 ssh2: RSA SHA256:5xyQ+PG1Z3CIiShclJ2iNya5TOdKDgE/HrOXr21IdOo
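
That logged value is the unpadded base64 encoding of the SHA-256 digest of the raw key blob, the same fingerprint that ssh-keygen -lf prints. A rough sketch recomputing it from a public key file, assuming openssl(1) is available:

```shell
# Recompute the SHA256 fingerprint sshd logs for a given public key file:
# base64( SHA-256( key blob ) ) with the trailing '=' padding removed.
fingerprint() {
    awk '{print $2}' "$1" \
        | openssl base64 -d -A \
        | openssl dgst -sha256 -binary \
        | openssl base64 -A \
        | tr -d '='
}
```

Running fingerprint over an archive of public keys and comparing the output against the hash in the log identifies which key was used.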

And here is an example of successful authentication with a user certificate. The certificate's identification string is "foobar" and the serial number is "9624". In this example the certificate is using ECDSA and the key itself is using Ed25519. The certificate, being a different key of its own, has a different SHA256 fingerprint from the authentication key itself.

May 15 16:28:17 server sshd[50140]: Accepted publickey for fred from port 44456 ssh2: ECDSA-CERT SHA256:qGl9KiyXrG6mIOo1CT01oHUvod7Ngs5VMHM14DTbxzI ID foobar (serial 9624) CA ED25519 SHA256:fZ6L7TlBLqf1pGWzkcQMQMFZ+aGgrtYgRM90XO0gzZ8

Prior to 6.8, the key's fingerprint was a hexadecimal MD5 hash.

Jan 28 11:51:43 server sshd[5104]: Accepted publickey for fred from port 60594 ssh2: RSA e8:31:68:c7:01:2d:25:20:36:8f:50:5d:f9:ee:70:4c

In older versions of OpenSSH prior to 6.3 the key fingerprint is completely missing from authentication logging.

Jan 28 11:52:05 server sshd[1003]: Accepted publickey for fred from port 20042 ssh2

Here is an example of password authentication for an SFTP session, using the server's internal-sftp subsystem, with the logging for that subsystem set to INFO.

Mar 14 20:14:18 server sshd[19850]: Accepted password for fred from port 59946 ssh2
Mar 14 20:14:18 server internal-sftp[11581]: session opened for local user fred from []

Here is an example of a successful SFTP login using an RSA key for authentication.

Mar 14 20:20:53 server sshd[10091]: Accepted publickey for fred from port 59941 ssh2: RSA SHA256:LI/TSnwoLryuYisAnNEIedVBXwl/XsrXjli9Qw9SmwI
Mar 14 20:20:53 server internal-sftp[31070]: session opened for local user fred from []

Additional data, such as connection duration, can be logged with the help of xinetd.

Logging Problems from SSH Certificate Authentication

Usually, not much information is given about which certificate failed, just why it failed authentication. Finding the account or actual certificate in question can require some sleuthing. Generally no client side information is disclosed and all investigation must occur server side.

If authentication is attempted again by other means, such as a password, then when the connection is closed there will be a log entry noting which account was involved. Because the process ID for the disconnection is the same as for the earlier failure, and the entry includes the account name and the originating address, it gives a bit of a clue in pursuit of a solution.

May  5 16:31:38 server sshd[252]: Connection closed by authenticating user fred port 44470 [preauth]

However, if the connection is allowed to timeout without first making any other authentication attempts by some other means, then there will be nothing to go on except maybe the time of day.

May  5 16:33:00 server sshd[90593]: fatal: Timeout before authentication for port 44718

Below are some common examples of log entries for failed certificate-based log in attempts. There can be more than one problem with a certificate but only one error will get logged at a time.

Expired or Not-yet-valid Certificate

Certificates which have not yet become valid or which have already expired get a log entry stating the reason, but identifying neither the account nor the certificate involved.

May  5 16:35:20 server sshd[252]: error: Certificate invalid: expired

Above is an expired certificate, below is a certificate which has not yet become valid.

May  5 16:58:00 server sshd[90593]: error: Certificate invalid: not yet valid

Neither type of event gives more information.

Valid Certificate but Invalid Principal

As with expired certificates, very little information is given about the actual account or certificate. Here the certificate was tried with the wrong account, one not listed among the certificate's principals.

May  5 17:29:52 server sshd[98884]: error: Certificate invalid: name is not a listed principal
May  5 17:29:56 server sshd[98884]: Connection closed by authenticating user fred port 45114 [preauth]

If the client closes the connection on purpose, there may be some information in the connection closed entry.

Valid Certificate but Invalid Source Address

If the certificate is limited to connecting from specific addresses or host names, the log will complain if the connection comes from a different address or host and identify the incorrect source address.

May  5 17:48:54 server sshd[2420]: cert: Authentication tried for fred with valid certificate but not from a permitted source address (
May  5 17:48:54 server sshd[2420]: error: Refused by certificate options

However, it will not be possible to identify the specific certificate directly.

Logging SFTP File Transfers

SFTP file transfers can be logged using LogLevel INFO or VERBOSE. The log level for the SFTP server can be set in sshd_config(5) separately from the general SSH server settings.

Subsystem internal-sftp -l INFO

By default the SFTP messages will also end up in auth.log but it is possible to filter these messages to their own file by reconfiguring the system logger, usually rsyslogd(8) or syslogd(8). Sometimes this is done by changing the log facility code from the default of AUTH. Available options are LOCAL0 through LOCAL7, plus, less usefully, DAEMON and USER.

Subsystem internal-sftp -l INFO -f LOCAL6
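
With the facility moved to LOCAL6, the system logger can then route those messages to their own file; in syslog.conf(5) syntax that might look like the following, with an illustrative file name:

```
# syslog.conf: keep SFTP transfer logging out of auth.log
local6.*                                        /var/log/sftp.log
```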

If new system log files are assigned, it is important to remember them in log rotation, too. Again, the Match directive can be used to change the log level for certain connections.

The following log excerpts are generated using log level INFO. A session starts with an open and ends with a close. The number in brackets is the process ID of the SFTP session and is the only way to follow a session through the logs.

Oct 22 11:59:45 server internal-sftp[4929]: session opened for local user fred from []
Oct 22 12:09:10 server internal-sftp[4929]: session closed for local user fred from []

Here is an SFTP upload of a small file of 928 bytes named foo to the home directory for user 'fred'.

Oct 22 11:59:50 server internal-sftp[4929]: open "/home/fred/foo" flags WRITE,CREATE,TRUNCATE mode 0664
Oct 22 11:59:50 server internal-sftp[4929]: close "/home/fred/foo" bytes read 0 written 928

And a directory listing in the same session in the directory /var/www.

Oct 22 12:07:59 server internal-sftp[4929]: opendir "/var/www"
Oct 22 12:07:59 server internal-sftp[4929]: closedir "/var/www"

And lastly here is a download of the same small 928-byte file called foo from the home directory for the user 'fred'.

Oct 22 12:08:03 server internal-sftp[4929]: open "/home/fred/foo" flags READ mode 0666
Oct 22 12:08:03 server internal-sftp[4929]: close "/home/fred/foo" bytes read 928 written 0

Successful transfers are noted by a close message. Attempts to download (open) files that do not exist are followed by a "sent status No such file" message on a line of its own instead of a close. Files that exist but that the user is not allowed to read produce a "sent status Permission denied" message.
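
Since the open and close lines share the same process ID, completed uploads can be paired up with a short AWK script; a sketch, assuming the messages have been routed to a dedicated file:

```shell
# Print the path and size of each completed upload found in an
# internal-sftp log: pair a WRITE open with its close via column 5,
# the bracketed process id, then read the byte count from the close line.
list_uploads() {
    awk '$6 == "open"  && $9 ~ /WRITE/ { upload[$5] = $7 }
         $6 == "close" && upload[$5] == $7 {
             print $7, $12 " bytes"
             delete upload[$5]
         }' "$1"
}
```

For the example session above, this prints the path of foo followed by 928 bytes.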

Logging Chrooted SFTP

Logging with the built-in sftp-subsystem inside a chroot jail, defined by ChrootDirectory, needs a ./dev/log node to exist inside the jail. This can be done by having the system logger such as syslogd(8) add additional log sockets inside the chrooted directory when starting up. On some systems that is as simple as adding more flags, like "-u -a /chroot/dev/log", in /etc/rc.conf.local or whatever the equivalent startup script may be.
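
On Linux with rsyslogd(8), the equivalent is an extra imuxsock listener; a sketch, assuming the jail is at /home/fred:

```
# rsyslog.conf: create and listen on a log socket inside the chroot
module(load="imuxsock")
input(type="imuxsock" Socket="/home/fred/dev/log" CreatePath="on")
```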

Here is an example of an SFTP login with password to a chroot jail using log level DEBUG3 for the SFTP-subsystem. The log shows a file upload:

Jan 28 12:42:41 server sshd[26299]: Connection from port 47366
Jan 28 12:42:42 server sshd[26299]: Failed none for fred from port 47366 ssh2
Jan 28 12:42:44 server sshd[26299]: Accepted password for fred from port 47366 ssh2
Jan 28 12:42:44 server sshd[26299]: User child is on pid 21613
Jan 28 12:42:44 server sshd[21613]: Changed root directory to "/home/fred"
Jan 28 12:42:44 server sshd[21613]: subsystem request for sftp
Jan 28 12:42:44 server internal-sftp[2084]: session opened for local user fred from []
Jan 28 12:42:58 server internal-sftp[2084]: open "/docs/somefile.txt" flags WRITE,CREATE,TRUNCATE mode 0644
Jan 28 12:42:58 server internal-sftp[2084]: close "/docs/somefile.txt" bytes read 0 written 400

Remember that SFTP is a separate subsystem and that like the file creation mode, the log level and log facility are set separately from the SSH server in sshd_config(5):

Subsystem internal-sftp -l ERROR

Logging Stability of Client Connectivity

When ClientAliveInterval is set in the server's configuration, the server makes periodic probes of the clients which have established connections. At normal log levels, these are not noted in the log until something goes wrong.

If ClientAliveInterval is exceeded more times in a row than ClientAliveCountMax allows, the client is officially declared disconnected and the connection is dropped. At the default log level of INFO a brief message is logged, identifying the client which has been dropped.

Sep  6 14:42:08 eee sshd[83709]: packet_write_poll: Connection from port 57608: Host is down

At log level DEBUG, the client's responses to the polls will be logged by the server showing that the session is still connected.

Sep  6 14:27:52 eee sshd[9075]: debug1: Got 100/147 for keepalive

Log level DEBUG2 and DEBUG3 will give even more information about the connection. However, even at log level DEBUG3, the specific client being polled will not be identified directly in the log messages and will have to be inferred from the process id of the daemon if such information is needed.

Sep  6 14:30:59 eee sshd[73960]: debug2: channel 0: request confirm 1
Sep  6 14:30:59 eee sshd[73960]: debug3: send packet: type 98
Sep  6 14:30:59 eee sshd[73960]: debug3: receive packet: type 100
Sep  6 14:30:59 eee sshd[73960]: debug1: Got 100/22 for keepalive

Again, when the ClientAliveCountMax is exceeded, the connection is broken after the final failure of the client to respond. Here is how that looks with the log level set to DEBUG2.

Sep  6 14:17:55 eee sshd[15780]: debug2: channel 0: request confirm 1
Sep  6 14:17:55 eee sshd[15780]: debug1: Got 100/22 for keepalive
Sep  6 14:18:37 eee sshd[15780]: debug2: channel 0: request confirm 1
Sep  6 14:18:37 eee sshd[15780]: packet_write_poll: Connection from port 57552: Host is down
Sep  6 14:18:37 eee sshd[15780]: debug1: do_cleanup
Sep  6 14:18:37 eee sshd[48675]: debug1: do_cleanup
Sep  6 14:18:37 eee sshd[48675]: debug1: session_pty_cleanup: session 0 release /dev/ttyp0

The directives ClientAliveInterval and ClientAliveCountMax normally apply to all clients connecting to the server. However, they can be used inside a Match block and thus applied only to specific connections.
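
A sketch: probe every minute and drop a client after three missed probes, while giving one illustrative subnet a longer leash:

```
# sshd_config: disconnect unresponsive clients after about 3 minutes
ClientAliveInterval 60
ClientAliveCountMax 3

Match Address
    ClientAliveInterval 300
```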

Logging Revoked Keys

If the RevokedKeys directive is used to point to a list of public keys that have been revoked, sshd(8) will make a log entry when access is attempted using a revoked key. The entry will be the same whether a plaintext list of public keys is used or if a binary Key Revocation List (KRL) has been generated.
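
A binary KRL is generated with ssh-keygen(1) from the public keys to be revoked; a sketch with illustrative paths:

```shell
# Make a throwaway key, then revoke it by building a binary KRL from
# its public half. RevokedKeys in sshd_config(5) then points at the KRL.
ssh-keygen -q -t ed25519 -N '' -f /tmp/krl_demo_key
ssh-keygen -k -f /tmp/krl_demo.krl /tmp/
```

The -Q option of ssh-keygen(1) can later be used to test whether a given key appears in the KRL.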

If password authentication is allowed, and the user tries it, then after the key authentication fails there will be a record of password authentication.

Mar 14 20:36:40 server sshd[29235]: error: Authentication key RSA SHA256:jXEPmu4thnubqPUDcKDs31MOVLQJH6FfF1XSGT748jQ revoked by file /etc/ssh/ssh_revoked_keys
Mar 14 20:36:45 server sshd[29235]: Accepted password for fred from port 59967 ssh2

If password authentication is not allowed, sshd(8) will close the connection as soon as the key fails.

Mar 14 20:38:27 server sshd[29163]: error: Authentication key RSA SHA256:jXEPmu4thnubqPUDcKDs31MOVLQJH6FfF1XSGT748jQ revoked by file /etc/ssh/ssh_revoked_keys

The account trying the revoked key remains a mystery though, so it will be necessary to look up the key by its fingerprint, running ssh-keygen -lf against your archive of old keys and reading the keys' comments. However, if a valid account cancels the connection without trying a password after the key attempt fails, the usual disconnection message will still be posted to the log.

Mar 14 20:44:04 server sshd[14352]: Connection closed by authenticating user fred port 55051 [preauth]

That may provide some clue and allow filtering with a short AWK script, if the messages are all in the same log file.

$ awk '/revoked by file/ {
        pid[$5]++; key[$5]=$9; hash[$5]=$10; next;
    }
    pid[$5] && /closed by authenticating user/ {
        print key[$5], hash[$5], $10, $11;
        delete key[$5]; delete hash[$5]; delete pid[$5];
    }' /var/log/authlog

Similarly, if the client makes no attempt at logging in and just times out, the message will say just that.

Mar 18 21:40:25 server sshd[9942]: fatal: Timeout before authentication for port 53728

On the client side, no warning or error will be given if a revoked key is tried. It will just fail and the next key or method will be tried.

Brute force and Hail Mary attacks

It’s fairly common to see failed login attempts almost as soon as the server is connected to the net. Brute force attacks, where one machine hammers on a few accounts trying to find a valid password, are becoming rare. In part this is because packet filters, like nftables for Linux and PF for the BSDs, can limit the number and rate of connection attempts from a single host. The server configuration directive MaxStartups can limit the number of simultaneous, unauthenticated connections.
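
MaxStartups takes either a single number or a start:rate:full triplet for random early drop; a sketch:

```
# sshd_config: refuse 30% of new unauthenticated connections once 10
# are pending, and refuse all of them once 60 are pending
MaxStartups 10:30:60
```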

Mar 18 18:54:44 server sshd[54939]: Failed password for root from port 52404 ssh2
Mar 18 18:54:48 server sshd[54939]: Failed password for root from port 52404 ssh2
Mar 18 18:54:49 server sshd[54939]: Failed password for root from port 52404 ssh2
Mar 18 18:54:49 server sshd[54939]: error: maximum authentication attempts exceeded for root from port 52404 ssh2 [preauth]
Mar 18 18:54:49 server sshd[54939]: Disconnecting authenticating user root port 52404: Too many authentication failures [preauth]

Note that "authenticating user" is present in the logs from OpenSSH 7.5 onward when a valid user name is attempted. When an invalid user name is attempted, that is logged as well:

Mar 18 18:55:05 server sshd[38594]: Invalid user ubnt from port 52471
Mar 18 18:55:05 server sshd[38594]: Failed password for invalid user ubnt from port 52471 ssh2
Mar 18 18:55:09 server sshd[38594]: error: maximum authentication attempts exceeded for invalid user ubnt from port 52471 ssh2 [preauth]
Mar 18 18:55:09 server sshd[38594]: Disconnecting invalid user ubnt port 52471: Too many authentication failures [preauth]

The way to deal with brute force attacks coming from a single machine or network is to customize the server host’s packet filter to limit the attacks or even temporarily block machines that overload the maximum number or rate of connections. Optionally, one should also contact the attacker’s net block owner with the IP address and exact date and time of the attacks.
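
With PF, for example, such limits can be expressed per source address, and abusers dumped into a table that is then blocked; the numbers and table name here are only illustrative:

```
# pf.conf: rate-limit inbound SSH and ban sources that exceed the limits
table <bruteforce> persist
block in quick proto tcp from <bruteforce> to any port ssh
pass in on egress proto tcp to any port ssh keep state \
        (max-src-conn 10, max-src-conn-rate 5/60, \
         overload <bruteforce> flush global)
```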

A kind of attack common at the time of this writing is one which is distributed over a large number of compromised machines, each playing only a small role in attacking the server.

To deal with Hail Mary attacks, contact the attacker’s net block owner. A form letter with a cut-and-paste excerpt from the log is enough if it gives the exact times and addresses. Alternately, teams of network or system administrators can work to pool data to identify and blacklist the compromised hosts participating in the attack.

Failed None For Invalid User

The SSH protocol specifies a number of possible authentication methods[35]. The methods password, keyboard-interactive, and publickey are fairly common. A lesser known authentication method is none, which will only succeed if the server requires no further authentication, such as when PermitEmptyPasswords is set and the account does not actually have a password[36]. Some SSH clients, including OpenSSH's, start by asking for none authentication and then use the returned list of remaining authentication methods to decide what to try next if that doesn't work.

Aug 10 19:09:05 server sshd[93126]: Failed none for invalid user admin from port 27586 ssh2

In other words, that is a brute force attack trying the none authentication method. Such an attack will only get into accounts which have been explicitly set with an empty password and which have furthermore been set up specifically to allow access, with both the none authentication method available and the PermitEmptyPasswords configuration directive enabled on the server. Most brute force attacks try only password authentication, and some of those even check for the password method first and give up if it is not available. Other attackers just hammer away pointlessly even when the method is not available.

Connections Seemingly From, ::1, or Other localhost Addresses

When the SSH server is accessed via a reverse tunnel to another machine, the incoming connections will appear to be from the localhost address, which is usually or ::1.

Mar 23 14:16:16 server sshd[9265]: Accepted password for fred from port 40426 ssh2

If the port at the other end of the reverse tunnel is publicly accessible, it will be probed and possibly attacked. Because of the reverse tunnel, the attacks will then also appear to be coming from the server's own loopback address:

Mar 23 14:20:17 server sshd[5613]: Invalid user cloud from ::1 port 57404
Mar 23 14:20:21 server sshd[5613]: Failed password for invalid user cloud from ::1 port 57404 ssh2
Mar 23 14:20:26 server sshd[5613]: Failed password for invalid user cloud from ::1 port 57404 ssh2
Mar 23 14:20:32 server sshd[5613]: Failed password for invalid user cloud from ::1 port 57404 ssh2
Mar 23 14:20:35 server sshd[5613]: Connection closed by invalid user cloud ::1 port 57404 [preauth]

Therefore the usual countermeasures, like SSHGuard, Fail2Ban, or similar intrusion detection systems, cannot be used, because the tunnel presents all login attempts from the localhost address regardless of their real origin.

A partial solution would be to bind the incoming connections to a different IP address. The loopback interface would need an additional permanent address, an alias, for that. That alias could then be assigned when establishing the reverse tunnel:

$ ssh -R 2022:

That designates the source address for all logins coming in over that tunnel, so the alias, rather than the default loopback address, shows up in the logs when the reverse tunnel is used:

Mar 23 18:00:13 server sshd[8525]: Invalid user cloud from port 17271
Mar 23 18:00:15 server sshd[8525]: Failed password for invalid user cloud from port 17271 ssh2
Mar 23 18:00:19 server sshd[8525]: Failed password for invalid user cloud from port 17271 ssh2
Mar 23 18:01:23 server sshd[8525]: Failed password for invalid user cloud from port 17271 ssh2
Mar 23 18:01:26 server sshd[8525]: Connection closed by invalid user cloud port 17271 [preauth]

If the ports do not need to be available to the open Internet, a full solution would be just to ensure that they are not accessible from the outside. This would be done by not using the -g option on the client when making the reverse tunnel or else by setting the GatewayPorts directive in sshd_config(5) back to the default of no, or both. The system's built-in packet filter can also be used. Then, even with the forwarded ports closed off from the outside, the ProxyJump option can still be used to skip through the jump host and use the setup for SSH access. However, since it is sometimes necessary that these ports be accessible to the outside world, this approach is not always an option.

Client Logging

The OpenSSH client normally sends log information to stderr. The -y option can be used to send it to the system logs instead, managed by syslogd(8) or something similar. Client log verbosity can be increased or decreased with the LogLevel directive, and the log facility can be changed with the SyslogFacility directive in ssh_config(5). Both require use of the -y run time option and do nothing without it.

Alternatively, instead of using the -y option, the -E option sends log output to a designated file instead of stderr. Working with the system logs or with separate log files can be useful when running ssh(1) in automated scripts. Below is an example of a connection to an interactive shell with the normal level of client logging:

$ ssh -l fred
's password: 
Last login: Thu Jan 27 13:21:57 2011 from

The same connection at the first level of verbosity gives considerably more debugging information: 42 additional lines.

$ ssh -v -l fred
OpenSSH_6.8, LibreSSL 2.1
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to [] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/fred/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.8
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7
debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr none
debug1: kex: client->server aes128-ctr none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:CEXGTmrVgeY1qEiwFe2Yy3XqrWdjm98jKmX0LK5mlQg
debug1: Host '' is known and matches the ECDSA host key.
debug1: Found key in /home/fred/.ssh/known_hosts:2
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: Next authentication method: publickey
debug1: Trying private key: /home/fred/.ssh/id_rsa
debug1: Trying private key: /home/fred/.ssh/id_dsa
debug1: Trying private key: /home/fred/.ssh/id_ecdsa
debug1: Trying private key: /home/fred/.ssh/id_ed25519
debug1: Next authentication method: keyboard-interactive
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: Next authentication method: password
debug1: Authentication succeeded (password).
Authenticated to ([]:22).
debug1: channel 0: new [client-session]
debug1: Requesting
debug1: Entering interactive session.
debug1: client_input_global_request: rtype want_reply 0
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 2 clearing O_NONBLOCK
Last login: Sat Mar 14 21:31:33 2015 from


The same login with the maximum of verbosity, -vvv, gives around 150 lines of debugging information. Remember that debugging information is sent to stderr rather than stdout. The following captures only the session in a file; the debugging info goes to the screen, not to the output log:

$ ssh -vvv -l fred  | tee ~/ssh-output.log

The tool tee(1) is like a T-pipe and sends output in two directions: one to stdout and one to a file.

The following will capture both the debugging info and the session text:

$ ssh -vvv -l fred  2>&1  | tee ~/ssh-output.log

Capturing Client Debugging Information Separately

Regular pipes and redirects work only on stdout, so if -E is not used to capture the debugging output, stderr must be redirected to stdout in order to capture it at the same time as the actual session. That is done with an extra redirect, 2>&1. Mind the spaces, or lack of them.

Changing Client Debugging Levels At Runtime or On The Fly

At run time, when establishing a new connection, just use the -v option.

$ sftp -v -o "IdentityFile=~/.ssh/weblog.key_rsa"

The debugging verbosity on the client can be increased just like on the server.

$ sftp -vvv -o "IdentityFile=~/.ssh/weblog.key_rsa"

The extra information can be useful to see exactly what is being sent to or requested of the server.

After the fact, once a connection is established, the escape sequences ~v and ~V can be used to increase or decrease the client's verbosity on the fly. When increasing, the client raises its log level through VERBOSE, DEBUG, DEBUG2, and DEBUG3, in that order, starting from the default of INFO. Conversely, when lowering, the client descends through ERROR and FATAL to QUIET.

Debugging and Troubleshooting

The server logs are your best friend when troubleshooting. It may be necessary to turn up the log level there temporarily to get more information. It is then also necessary to turn it back to normal after things are fixed, to avoid privacy problems or excessive use of disk space.

For example, the SFTP-subsystem logging defaults to ERROR, reporting only errors. To track transactions made by the client, change the log level to INFO or VERBOSE:

Subsystem internal-sftp  -l INFO

Caution. Again, operating with elevated logging levels violates the privacy of users, in addition to filling a lot of disk space, and should generally not be done in production once the changes are figured out. While they are being collected, elevated log messages should be sent to a separate log file.

By default, some systems send only the normal messages to the regular system log files and ignore the elevated messages. Some save all the messages by default. If the elevated system log messages are not showing up in any of the system logs, the former may be the reason. Either way, check the system log configuration and make sure that the extra messages are sent only to a separate log file and not mixed in with the regular system logs. Change the configuration if necessary. This keeps the logs tidy as well as protecting privacy. The system log settings are found in the system log daemon's configuration file, the exact name of which will vary depending on what is installed, but common ones are syslog.conf(5) and rsyslog.conf(5). Notice that in the configuration below, the more detailed DEBUG messages for the AUTH facility go to a separate log file from the regular AUTH messages:

$ grep '^auth\.' /etc/syslog.conf
                                              /var/log/authlog
auth.debug                                              /var/log/authdebug

See syslog(3) for the log facilities and log levels. It is best to limit the time the debugging information is collected and to actively watch while it is collected. However, if it is running for any length of time, and especially if it is left unattended even for a short while, be sure to remember to add the special log file to the log rotation schedule so that it cannot fill up the partition.

Match blocks can help further by setting log levels for specific situations, avoiding a situation where everything is logged intensely.

Also, the manual pages for OpenSSH are very well written and many times problems can be solved by finding the right section within the right manual page. At the very minimum, it is important to skim through the four main manual pages for both the programs and their configuration and become familiar with at least the section headings.

Then once the right section is found in the manual page, go over it in detail and become familiar with its contents. The same goes for the other OpenSSH manual pages, depending on the activity. Be sure to use the version of OpenSSH available for your system and the corresponding manual pages, preferably those that are installed on your system to avoid a mismatch. In some cases, the client and the server will be of different versions, so the manual pages for each must be looked up separately. It is also a good idea to review OpenSSH's release notes when a new version is published.

With a few exceptions below, specific examples of troubleshooting are usually given in the cookbook section relevant to a particular activity. So, for example, sorting problems with authentication keys is done in the section on Public Key Authentication itself.

Debugging a script, configuration or key that uses sudo(8)

Usually log levels only need to be changed when writing and testing a script, a new configuration, some new keys, or all three at once. When working with sudo(8), it is especially important to see exactly what the client is sending so as to enter the right pattern into /etc/sudoers for safety. Using the lowest level of verbosity, the exact string being sent by the client to the remote server is shown in the debugging output:

$ rsync -e "ssh -v -i /home/webmaint/.ssh/bkup_key -l webmaint" \
        -a var/backup/www/
debug1: Authentication succeeded (publickey).
Authenticated to ([]:22).
debug1: channel 0: new [client-session]
debug1: Requesting
debug1: Entering interactive session.
debug1: Sending command: rsync --server --sender -vlogDtpre.if . /var/www/
receiving incremental file list

What sudoers then needs is something like the following, assuming account 'webmaint' is in the group 'webmasters':

%webmasters ALL=(ALL) NOPASSWD: /usr/local/bin/rsync --server \
--sender -vlogDtpre.if . /var/www/

The same method can be used to debug new server configurations or key logins. Once things are set to run as needed, the log level settings can be lowered back to INFO for sshd(8) and to ERROR for internal-sftp. Additionally, once the script is left to run in fully automated mode, the client logging information can be sent to the syslog(3) system facility instead of stderr by setting the -y option when it is launched.

Debugging a server configuration edit

Running the server in debug mode provides a lot of information about the connection and a smaller amount about the server configuration. The server's debugging level (-d) can be raised once, twice (-dd) or thrice (-ddd).

$ /usr/sbin/sshd -d

Note that the server in this case does not detach and become a daemon, so it will terminate when the SSH connection terminates. The server must be started again in order to make a subsequent connection from the client. Though in some ways this is a hassle, it does ensure that the captured data are from a single session and not a mix of multiple sessions, and thus possibly different configurations. Alternately, another option (-e) when debugging sends the debugging data to stderr to keep the system logs clean.

In recent versions of OpenSSH, it is also possible to log the debug data from the system logs directly to a separate file and keep noise out of the system logs. Since OpenSSH 6.3, the option -E will append the debug data to a particular log file instead of sending it to the system log. This facilitates debugging live systems without cluttering the system logs.

$ /usr/sbin/sshd -E /home/fred/sshd.debug.log

On older versions of OpenSSH, if you need to save output to a file while still viewing it live on the screen, you can use tee(1).

$ /usr/sbin/sshd -ddd 2>&1 | tee /tmp/foo

That will save output to the file foo by capturing what sshd(8) sent to stderr. This works with older versions of OpenSSH, but the -E option above is preferable.

If the server is remote and it is important to reduce the risk of getting locked out, the experiments on the configuration file can be done with a second instance of sshd(8) using a separate configuration file and listening to a high port until the settings have been tested.

$ /usr/sbin/sshd -dd -p 22222 -f /home/fred/sshd_config.test

It is possible to make an extended test (-T) of the configuration file. If there is a syntax error, it will be reported, but remember that even sound configurations can still lock you out. The extended test mode can be used by itself, but it is also possible to specify particular connection parameters to use with -C. sshd(8) will then process the configuration file in light of the parameters passed to it and output the results. Of particular use, the results of Match directives will be shown. So the -T option can be supplemented with the -C option to show precisely which configuration will be used for various connections.

When passing specific connection parameters to sshd(8) for evaluation, user, host, and addr are the minimum required for extended testing. The following will print out the configurations that will be applied if the user fred tries to log in to the host from the address

$ /usr/sbin/sshd -T -C user=fred,,addr=

Two more parameters, laddr and lport, may also be passed. They refer to the server's own IP address and the port to which the connection was made.

$ /usr/sbin/sshd -T -C user=fred,,addr=,laddr=,lport=2222

Those five variables should be able to describe any possible incoming connection.

Debugging a client configuration edit

Sometimes when debugging a server configuration it is necessary to track the client, too. Since OpenSSH 6.8, the -G option makes ssh(1) print its configuration after evaluating Host and Match blocks and then exit. That allows viewing of the exact configuration options that will actually be used by the client for a particular connection.

$ ssh -G -l fred

Client configuration is determined in three ways: first by run-time options, then by the account's own configuration file, and lastly by the system-wide client configuration file. Whichever value is found first gets used. With sftp(1) the options are also passed on to ssh(1).
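Since -G evaluates the configuration without ever connecting, the effective values can be confirmed locally. This sketch uses a made-up host name and filters for a couple of the resulting options:

```shell
# sketch: show the effective client configuration for a hypothetical
# host, without making any network connection.
ssh -G -l fred server14.example.org | grep -E '^(user|port) '
```

On a machine with default settings this prints the user given with -l and the standard port, confirming which values won the first-match evaluation.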

Invalid or Outdated Ciphers or MACs edit

A proper client will show the details of the failure. For a bad Message Authentication Code (MAC), a proper client might show something like the following when trying to foist a bad MAC like hmac-md5-96 onto the server:

no matching mac found: client hmac-md5-96 server,,,,,,,hmac-sha2-256,hmac-sha2-512,hmac-sha1

And for a bad cipher, a proper client might show something like this when trying to foist an arcfour cipher on the server:

no matching cipher found: client arcfour server,aes128-ctr,aes192-ctr,aes256-ctr,,

Sometimes when troubleshooting a problem with the client it is necessary to turn to the server logs. In OpenSSH 6.7 unsafe MACs were removed and in OpenSSH 7.2 unsafe ciphers were removed, but some third-party clients may still try to use them to establish a connection. In that case, the client might not provide much information beyond a vague message that the server unexpectedly closed the network connection. The server logs will, however, show what happened:

fatal: no matching mac found: client hmac-sha1,hmac-sha1-96,hmac-md5 server,,,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 [preauth]

More recent versions would show a simpler error for a bad MAC.

fatal: Unable to negotiate with port 55044: no matching MAC found. Their offer: hmac-md5-96 [preauth]

A bad cipher would be reported like this:

fatal: Unable to negotiate with port 55046: no matching cipher found. Their offer: arcfour [preauth]

The error message in the server log might not say which MACs or ciphers are actually available. For those, the extended test mode can be used to show the server settings and, in particular, the MACs or ciphers allowed. In its most basic usage the extended test mode would just be -T, as in /usr/sbin/sshd -T | grep -E 'cipher|macs' with no other options. For more details and options, see the previous section on "Debugging a server configuration" above.

One solution is to upgrade the client to one that can handle the current ciphers and MACs. Another option is to switch to a different client entirely, one that can handle the modern ciphers and MACs.

Debugging Key-Based Authentication edit

The most common causes of failure for public key authentication seem to be one of two problems:

  • Mangling the public key on the way to getting it into authorized_keys on the server
  • Incorrect permissions for the files and directories involved, either on the client or the server. These are the directory holding the keys, usually ~/.ssh/, or its parent directories, the authorized_keys file, or the private key itself.

As of the time of this writing, it looks like pretty much every failure of key-based authentication described on mailing lists and forums is solved by addressing either or both of those two situations. So, when encountering the error message "Permission denied (publickey,keyboard-interactive)", or similar, see the section on Public Key Authentication. Then see the manual page for sshd(8) and its section on authorized keys. Usually, though not always, it will be obvious when the private key's permissions are incorrect:

$ ssh -i ~/.ssh/fred-193.ed25519 fred@
Permissions 0664 for '/home/fred/.ssh/fred-193.ed25519' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/fred/.ssh/fred-193.ed25519": bad permissions

In addition to mangled public keys in the authorized keys file and incorrect permissions, a very rare third case is when the public and private key files don't match, that is, when they are not from the same key pair. As mentioned in the section on Public Key Authentication, the public and private keys need to match and be part of the same key pair. That is because even before the SSH client uses the private key cryptographically, it looks at the file name of the proposed private key and then sends the public key matching that same name, if it exists. If the public key on the client does not match the public key on the server in authorized_keys, then the connection will be denied with the error "Permission denied (publickey,keyboard-interactive)" or similar. That alone is a very good reason to give keys unique, descriptive file names. Note that, as mentioned above, there are usually other causes of that same error message besides having mismanaged the public key file on the client machine.

The file names for both parts of each key pair have to be kept organized so that the contents match. The long-term solution is to manage the keys and their file names more carefully. In the short term, it is solved by deleting the offending public key file or by using the private key to regenerate the public key, overwriting the offending one. Again, this is an unusual edge case and not a common cause of that error.
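One way to check whether two files really belong to the same pair is to compare fingerprints with ssh-keygen(1): given a private key it reports the fingerprint of the matching public key, so the two outputs should agree. This sketch generates a throwaway pair just for the demonstration:

```shell
# sketch: verify that a private key and a public key form one pair by
# comparing fingerprints. The key pair is a throwaway made for the demo.
tmp="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/demo_key"

ssh-keygen -lf "$tmp/demo_key"        # fingerprint via the private key
ssh-keygen -lf "$tmp/demo_key.pub"    # fingerprint of the public key
```

If the second field of the two lines differs, the files are not from the same key pair.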

SSH Too Many Authentication Failures edit

When there are multiple keys in the authentication agent, the client will try them against the server in an unpredictable order. If the client happens to cycle through enough of the wrong keys first and hits the server's MaxAuthTries limit before finding the right key, the server will naturally break off the connection with an error message about too many authentication failures:

"Received disconnect from port 22:2: Too many authentication failures 
Authentication failed."

With increased verbosity, the keys tested and rejected will also be shown:

$ ssh -v
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/fred/.ssh/key.06.rsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Offering RSA public key: /home/fred/.ssh/key.02.rsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Offering RSA public key: /home/fred/.ssh/key.03.rsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Offering RSA public key: /home/fred/.ssh/key.04.rsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Offering RSA public key: /home/fred/.ssh/key.01.rsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Offering RSA public key: /home/fred/.ssh/key.05.rsa
Received disconnect from port 22:2: Too many authentication failures
Authentication failed.

Each key in the agent gets an annotation which says whether or not the key file was supplied by the user, either in the configuration file or as a run-time argument. The client prefers keys that were specified in the configuration and are also currently in the agent. Then it will try them in the order in which they were supplied. [37]

There are two solutions if you see the "Too many authentication failures" error:

One way around this error is to remove keys from the agent one at a time using ssh-add(1) with the -d option until only the right key is left. Refer to each key by its file system path, for example: ssh-add -d ~/.ssh/some.key.rsa. Because the private key to be removed is looked up in the agent based on the corresponding public key, both files must exist. Without a matching public key file, the private key cannot be removed individually from the authentication agent; instead, the whole lot may be removed at once using the -D option. However, neither approach is practical when many remote systems are used frequently and the agent needs to be kept well-stocked.

Another way around this error, and probably the most practical method, is to limit the client to trying only a specific key using the IdentitiesOnly configuration directive in conjunction with the IdentityFile configuration directive. The latter points explicitly to the right key. Both can be added either as run-time options or in the client's configuration file. As a run-time option, they can be used like this:

$ ssh -o IdentitiesOnly=yes -i ~/.ssh/ -l fred

Or these two options could be added to the client configuration file in something like the following way instead.

Host server14
        IdentitiesOnly yes
        IdentityFile /home/fred/.ssh/
        User fred

In that way, the server could be reached with either the short name or the fully qualified domain name, whatever names are listed under the Host directive.

$ ssh server14

Remember that options are selected from the client configuration file on a first-match basis. Because the first match wins, specific rules must come before more general rules.

Signing Failed for ... Agent Refused Operation edit

As mentioned, incorrect permissions on client files or directories are a common cause of authentication failure when attempting key-based authentication. Not all the errors are clear about that, however. Here is an example of a misleading error message which is actually caused by incorrect permissions:

$ ssh -i ~/.ssh/key-ed25519
sign_and_send_pubkey: signing failed for ED25519 "/home/fred/.ssh/key-ed25519" from agent: agent refused operation
Permission denied (publickey).

The solution for that is to ensure that no other accounts can read or write to the private key, nor should others be able to write to the .ssh directory or its parent.
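A minimal cleanup might look like the following sketch. It works in a scratch directory with a placeholder key name so it is safe to run as-is; on a real system the same chmod invocations would be aimed at the actual home directory, ~/.ssh, and the key file:

```shell
# sketch: tighten the permissions sshd expects, in a scratch directory.
# HOME_DIR and the key name are stand-ins for the real paths.
HOME_DIR="$(mktemp -d)"
mkdir -p "$HOME_DIR/.ssh"
touch "$HOME_DIR/.ssh/key-ed25519"

chmod go-w "$HOME_DIR"                  # home: not writable by group/others
chmod 700 "$HOME_DIR/.ssh"              # key directory: owner only
chmod 600 "$HOME_DIR/.ssh/key-ed25519"  # private key: owner read/write only
```

After that, both the agent and direct use of the key should work again.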

Debugging Chrooted SFTP Accounts edit

The most common problem seems to be bad directory permissions. The chroot directory, and all directories above it, must be owned by root and not writable by any other user or group. Even though these directories' group memberships do not have to be root, if any of them is not root then it must not be group writable either. Failure to use the correct ownership will result in not being able to log in with the affected accounts. The errors when login is attempted will look like this from the client side:

$ sftp fred@
fred@192.02.206's password: 
packet_write_wait: Connection to Broken pipe
Couldn't read packet: Connection reset by peer

The error message is much clearer on the server side:

Aug  4 23:52:38 server sshd[7075]: fatal: bad ownership or modes for chroot directory component "/home/fred/"

Check the directory permissions for the chroot target and all directories above it. If even one is off, it must be fixed so that it is owned by root and not writable by any others. There are many, many routes to get there. Here are two ways to set up chroot permissions:

  • One quick way to fix the permissions is to change both ownership and group membership of the directory to root. Same for all directories above the chroot target.
$ ls -lhd /home/ /home/fred/
drwxr-xr-x 3 root  root  4.0K Aug  4 20:47 /home/
drwxr-xr-x 8 root  root  4.0K Aug  4 20:47 /home/fred/

That will work with the ChrootDirectory directive set to %h but has some drawbacks that will quickly become obvious when adding files or directories.

  • Another easy way to fix the permissions is to change both the account's home directory and the ChrootDirectory directive. Arrange the account's home directory so that it is under a unique directory owned by root, such as the user name itself:
$ ls -lhd /home/ /home/fred/ /home/fred/fred/
drwxr-xr-x 3 root  root  4.0K Aug  4 20:47 /home/
drwxr-x--- 3 root  fred  4.0K Aug  4 20:47 /home/fred/
drwxr-x--- 8 fred  fred  4.0K Aug  4 20:47 /home/fred/fred/

Then chroot the account to the parent directory and combine that with an alternate starting directory, built from the user name token, by using the -d option of the SFTP server.

ChrootDirectory /home/%u
ForceCommand internal-sftp -d %u

Then when the account connects it will see only its own directory and no other parts of the system.
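The permission rule above can also be checked mechanically. This is only a sketch, and check_chroot_path is a made-up helper: it walks from a directory up to the root and flags any component that is group- or other-writable. The root-ownership requirement from sshd(8) is left out so the example can run under an unprivileged account:

```shell
# sketch: flag path components that would fail sshd's chroot checks.
# sshd additionally requires every component to be owned by root; that
# part is omitted here so the example runs under any account.
check_chroot_path() {
    dir=$1
    while :; do
        case $(ls -ld "$dir") in
            ?????w*|????????w*) echo "bad: $dir" ;;  # group/other writable
        esac
        if [ "$dir" = / ]; then
            break
        fi
        dir=$(dirname "$dir")
    done
}

d="$(mktemp -d)/fred"
mkdir -m 0775 "$d"    # group-writable: sshd would refuse to chroot here
check_chroot_path "$d"
```

Any "bad:" line points at a component that would trigger the "bad ownership or modes" error above.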

Debugging RC Scripts Interfering with SFTP Sessions edit

The SFTP connection will drop if there are any extraneous data in either direction on stdin, whether from the client or the server. A common mistake in that area is when /etc/ssh/sshrc or ~/.ssh/rc sends anything at all to stdout instead of staying quiet. The output, which would be stdout on the server, is received by the client on stdin, but matches no correct protocol message and thus causes the client to disconnect. So, even when using the RC scripts, the response from the server must remain clean or an error will occur:

$ sftp
Received message too long 1400204832

That one message will be the main clue. Increasing the verbosity of the SFTP client with -v won't provide more relevant information.

Also, the standard logs on the server will only show that the client disconnected and not provide any information why. At higher levels of logging, some extraneous reads and corresponding discards might be noticed but that is all. Below is a log sample recorded at the verbosity DEBUG3 showing such an example.

debug2: subsystem request for sftp by user fred
debug1: subsystem: exec() /usr/libexec/sftp-server
Starting session: subsystem 'sftp' for fred from port 37446 id 0
debug2: channel 0: read 13 from efd 12
debug3: channel 0: discard efd
debug2: channel 0: read 12 from efd 12
debug3: channel 0: discard efd
debug2: channel 0: read 15 from efd 12
debug3: channel 0: discard efd
debug2: channel 0: read 18 from efd 12
debug3: channel 0: discard efd

Again, neither RC script is allowed to produce any output on stdout during use of SFTP or it will ruin the connection. If an RC script does produce output, it must be redirected to a system log, to a file, or sent to stderr instead of stdout. Regular interactive SSH connections are not disturbed by use of stdout and the client will just display whatever is sent. See the manual page for sshd(8) in the section "SSHRC" for more.
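As a sketch of a well-behaved RC script, anything worth recording can be appended to a file or written to stderr; the log file here is a temporary stand-in for a real path:

```shell
# sketch of a well-behaved ~/.ssh/rc: output goes into a log file or to
# stderr, never to stdout where it would corrupt an SFTP session.
RC_LOG="$(mktemp)"              # stand-in for a real log file path

{
    date
    echo "session started"
} >> "$RC_LOG" 2>&1

echo "diagnostics may go to stderr" 1>&2
```

Run under sshd, this script produces nothing on stdout, so SFTP sessions are unaffected.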

The same restriction goes for any other part of the SSH service which runs over stdin and stdout, such as ProxyJump or some uses of ProxyCommand. So another example of potential interference would be when using LocalCommand with the client to specify a command to execute on the local machine after successfully connecting to the server. Any output from it also needs to be redirected to stderr. If LocalCommand ends up interfering with ProxyJump then the connection will appear to hang at the stage when stdout gets used.
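A client configuration combining the two safely might look like the following sketch; the host names are made up, and the 1>&2 redirect keeps the LocalCommand output on stderr and away from the proxied stdout stream:

```
# sketch for ssh_config: hypothetical host names
Host server14
        ProxyJump gateway.example.org
        PermitLocalCommand yes
        LocalCommand echo "connected to %h" 1>&2
```

Note that PermitLocalCommand must be enabled for LocalCommand to run at all.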

Debugging When An SSH Agent Has The Correct Private Key But Does Not Use It edit

In older versions of OpenSSH, the public key must also be available on the client at the time the private key is loaded into the agent. If it is not, the agent will not be able to use the private key unless other arrangements are made. A symptom of this is that specifying the key via a run-time argument works, while the same key fails via the agent.

Upgrading to a more recent version of OpenSSH is the better option. Otherwise, a work-around is to specify the private key either as a run-time argument to the client or in the ssh_config(5) file; from there the client will find the correspondingly named public key file. Importantly, the client will still use the key in the agent yet read the designated matching public key file, so the private key file does not have to contain anything at all and could even be empty.

$ ssh -i some_key_ed25519

However, if it is undesirable to have the private key accessible on the file system or if the private key is only in the agent and not itself available via the file system, then the public key can be specified directly instead.

$ ssh -i

Alternatively, the key can be named in ssh_config(5) instead using the IdentityFile configuration directive. If the file with the public key is missing, it can be regenerated from the private key using ssh-keygen(1) with the -y option.
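The regeneration itself is a one-liner with -y, shown here against a throwaway key generated for the example; the file names are placeholders:

```shell
# sketch: recreate a missing public key file from the private key.
# The key pair is a throwaway generated just for this example.
tmp="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/some_key_ed25519"

mv "$tmp/some_key_ed25519.pub" "$tmp/saved.pub"    # simulate losing it
ssh-keygen -y -f "$tmp/some_key_ed25519" > "$tmp/some_key_ed25519.pub"
```

Note that -y prompts for the passphrase if the private key has one; the throwaway key here was created without.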

SSH Client Error Messages And Common Causes edit

As a rehash of the above, below are some client-side error messages along with some of the more common reasons[38] for them. Neither the list of errors nor the reasons given are comprehensive. The suggestions just touch on one or two commonly seen causes. There is no substitute for checking the actual logs, especially on the server. The server log file is usually /var/log/auth.log on most systems, or a variant of that name. Occasionally the information will be in the file /var/log/secure instead.

No address associated with name:

$ ssh 
ssh: Could not resolve hostname no address associated with name

The destination's host name does not exist in DNS. Was it spelled correctly?

Operation timed out:

$ ssh 
ssh: connect to host port 22: Operation timed out

There is no system associated with that IP address, or else a packet filter is causing trouble.

Connection timed out:

$ ssh 
ssh: connect to host port 22: Connection timed out

You can't get there from here. It is probable that the destination machine is disconnected from the network or that the network which it is on is not reachable.

Connection refused:

$ ssh
ssh: connect to host port 22: Connection refused

The destination system exists but there is no SSH service available at the port being tried. Is the destination right? It might be the wrong system, SSH might not be running at all, SSH might be listening on another port, or a packet filter might be blocking connections.

Permission denied:

$ ssh
Permission denied (publickey,keyboard-interactive).

That is usually a sign of a problem caused by the wrong user name, the wrong authentication method, the wrong SSH key or SSH certificate, or wrong file permissions on the authorized keys file on the destination system. If it is a matter of SSH key based authentication, see the chapter on Public Key Authentication for more thorough coverage. If it is a question of SSH certificates, see the corresponding Certificate-based Authentication chapter. It can also be a matter of server-side settings with AllowGroups, DenyGroups, or similar configuration directives at the destination. Any of those possibilities can really only be identified and solved by checking the log output from sshd(8).

No matching host key type found:

$ ssh
Unable to negotiate with port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss

The SSH daemon on that server is really outdated. Contact the system administrator about an upgrade and get them moving. More details can be seen with the -v option on the client.

$ ssh -v
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: diffie-hellman-group-exchange-sha256
debug1: kex: host key algorithm: (no match)
Unable to negotiate with port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss

In the above shell session, it is the server which is badly in need of an update, as shown by the deprecated host key algorithms which it offers. Again, for emphasis, the solution is to update the outdated software. If a specific, old version of a particular operating system absolutely must be used for a while longer, see about getting a back port of the SSH daemon. Many GNU/Linux distros even have specific back port repositories.

If you would like to see which key exchange algorithms the client supports try using the -Q option.

$ ssh -Q kex

Development edit

It is possible to advance OpenSSH through donations of hardware or money. See the OpenSSH project web site at for details.

OpenSSH is a volunteer project with the goal of making quality software. As such, it relies upon hardware and cash donations to keep the project rolling. Funds are needed for daily operation to cover network line subscriptions and electrical costs. If two dollars had been given for every download of the OpenSSH source code from the master site in 2015, ignoring the mirrors, or if a penny had been donated for every instance of PF or OpenSSH installed with a mainstream operating system or phone in 2015[39], then funding goals for the year would have been met. Hardware is needed for development, and porting to new architectures and platforms always requires new hardware.

OpenSSH is currently developed by two teams. The first team works to provide code that is as clean, simple and secure as possible. It is part of the OpenBSD project. The second team works using this core version and ports it to a great many other operating systems. Thus there are two development tracks, the OpenBSD core and the portable version. All the work is done in countries that permit export of cryptography.

Use the Source, Luke edit

The main development branch of OpenSSH is part of the OpenBSD project. So the source code for the "-current" branch of OpenBSD is where to look for latest activity. Nightly, bleeding-edge snapshots of OpenSSH itself are thus publicly available from OpenBSD's CVS tree. Use a mirror when possible.

The source code for the portable releases of OpenSSH is published using anonymous Git, so no password is needed to download the source from the read-only repository. The repository is provided and maintained by Damien Miller.


We ask anyone wishing to report security bugs in OpenSSH to please use the contact address given in the source and to practice responsible disclosure.

libssh edit

libssh is an independent project that provides a multiplatform C library implementing the SSHv2 and SSHv1 protocols for client and server implementations. With libssh, developers can remotely execute programs, transfer files, and use a secure and transparent tunnel for their remote applications.

libssh is available under the LGPL 2.1 license from the project web page


  • Key Exchange Methods:, ecdh-sha2-nistp256, diffie-hellman-group1-sha1, diffie-hellman-group14-sha1
  • Hostkey Types: ecdsa-sha2-nistp256, ssh-dss, ssh-rsa
  • Ciphers: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-cbc, aes128-cbc, 3des-cbc, des-cbc-ssh1, blowfish-cbc
  • Compression Schemes: zlib,, none
  • MAC hashes: hmac-sha1, none
  • Authentication: none, password, public-key, hostbased, keyboard-interactive, gssapi-with-mic
  • Channels: shell, exec (incl. SCP wrapper), direct-tcpip, subsystem,
  • Global Requests: tcpip-forward, forwarded-tcpip
  • Channel Requests: x11, pty, exit-status, signal, exit-signal,,
  • Subsystems: sftp(version 3), publickey(version 2), OpenSSH Extensions
  • SFTP:,
  • Thread-safe: Just don’t share sessions
  • Non-blocking: it can be used both blocking and non-blocking
  • Your sockets: the app hands over the socket, or uses libssh sockets
  • OpenSSL or gcrypt: builds with either

Additional Features:

  • Client and server support
  • SSHv2 and SSHv1 protocol support
  • Supports Linux, UNIX, BSD, Solaris, OS/2 and Windows
  • Full API documentation and a tutorial
  • Automated test cases with nightly tests
  • Event model based on poll(2), or a poll(2)-emulation.

libssh2 edit

libssh2 is another independent project providing a lean C library implementing the SSH2 protocol for embedding specific SSH capabilities into other tools. It has a stable, well-documented API for working on the client side with the different SSH subsystems: Session, Userauth, Channel, SFTP, and Public Key. The API can be set to either blocking or non-blocking. The code uses strict name spaces, is C89-compatible and builds using regular GNU Autotools.

libssh2 is available under a modified BSD license. The functions are each documented in their own manual pages. The project web site contains the documentation, source code and examples:

There is a mailing list for libssh2 in addition to an IRC channel. The project is small, low-key and, as true to the spirit of the Internet, a meritocracy. Hundreds of specific functions allow specific activities and components to be cherry-picked and added to an application:

  • Shell and SFTP sessions
  • Port forwarding
  • Password, public-key, host-based keys, and keyboard-interactive authentication methods.
  • Key Exchange Methods diffie-hellman-group1-sha1, diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha1
  • Host Key Types: ssh-rsa and ssh-dss
  • Ciphers: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc (, aes192-cbc, aes128-cbc, 3des-cbc, blowfish-cbc, cast128-cbc, arcfour, arcfour128, or without a cipher.
  • Compression Scheme zlib or without compression
  • Message Authentication Code (MAC) algorithms for hashes: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160 (, or none at all
  • Channels: Shell, Exec – including the SCP wrapper, direct TCP/IP, subsystem
    • Channel Requests: x11, pty
  • Subsystems: sftp version 3, public-key version 2
  • Thread-safe, blocking or non-blocking API
  • Your sockets: the app hands over the socket, calls select() etc.
  • Builds with either OpenSSL or gcrypt

See also the library libcurl which supports SFTP and SCP URLs.

Thrussh edit

Thrussh is an SSH library written in Rust and available under the Apache License version 2.0. It is a full implementation of the SSH 2 protocol. The only non-Rust part is the crypto back end, which uses ring instead. It is designed to work on any platform and to use asynchronous I/O. The project web site contains the documentation, source code, and examples. The code is accessible using darcs:

darcs get

It is not an implementation of an actual server or client, but instead contains all the elements needed to write custom clients and servers using Rust.

Other language bindings for the SSH protocols edit

What follows is a list of additional independent resources by programming language:

Perl edit

  • Net::SSH2: a wrapper module for libssh2.
  • Net::SSH::Perl: a full SSH/SFTP implementation in pure Perl. Unfortunately this module is not being maintained any more and has several open bugs. Also, installing it can be a daunting task due to some of its dependencies.
  • Net::OpenSSH: a wrapper for OpenSSH binaries and other handy programs (scp, rsync, sshfs). It uses the OpenSSH multiplexing feature in order to reuse connections.
  • Net::OpenSSH::Parallel: a module built on top of Net::OpenSSH for transferring files and running programs on several machines in parallel efficiently.
  • SSH::Batch: another module built on top of Net::OpenSSH for running programs on several hosts in parallel.
  • Net::SSH::Expect: this module uses Expect to drive interactive shell sessions run on top of SSH.
  • Net::SSH: a simple wrapper around any SSH client. It does not support password authentication and is very slow as it establishes a new SSH connection for every remote program invoked.
  • Net::SCP and Net::SCP::Expect: modules wrapping the scp program. Note that Net::SSH2, Net::SSH::Perl and Net::OpenSSH already support file transfers via scp natively.
  • Net::SFTP::Foreign: a full SFTP client written in Perl with lots of bells and whistles. By default it uses ssh to connect to the remote machines, but it can also run on top of Net::SSH2 and Net::OpenSSH.
  • GRID::Machine, IPC::PerlSSH and SSH::RPC: these modules distribute and run Perl code on remote machines through SSH.

Python edit





Ruby edit



Java edit


JSch - a pure Java implementation of SSH2.

Cookbook edit

Remote Processes edit


One of the main functions of OpenSSH is accessing and running programs on other systems. There are several ways to expand upon that, either interactively or as part of unattended scripts. In addition to an interactive login, ssh(1) can be used to simply execute a program or script; logout is automatic when the program or script has run its course. Some combinations are readily obvious, others require more careful planning. Sometimes it is enough of a clue just to know that something can be done; at other times more detail is required. A number of examples of useful combinations of using OpenSSH to run remote tasks follow.

Run a Remote Process edit

An obvious use of ssh(1) is to run a program on the remote system and then exit. Often this is a shell, but it can be any program available to the account. When the remote process completes, ssh(1) terminates and passes on the exit value of the last remote process to complete. In this way it can be used in scripts, and the outcome of the remote processes can be acted on by the client system.

The following will run true(1) on the remote system and return success, exit code 0, to the local system where ssh(1) was run.

$ ssh -l fred /bin/true
$ echo $?

The following will run false(1) on the remote system and return failure, exit code 1, to the local system where ssh(1) was run.

$ ssh -l fred /bin/false
$ echo $?

If any other values, from 0 to 255, were returned, ssh(1) will pass them back to the local host from the remote host.
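Since the exit status propagates, scripts can branch on the result of the remote command. The sketch below shows the pattern; the hostname in the comment is a placeholder, and a local sh -c stands in for the SSH call so the logic can be run and tested anywhere:

```shell
# Branch on the exit status of a "remote" command. In real use,
# run_remote would be: ssh -l fred server.example.org "$1"
# (server.example.org is a placeholder hostname).
run_remote() {
    sh -c "$1"
}

if run_remote 'exit 3'; then
    echo "remote command succeeded"
else
    echo "remote command failed with status $?"
fi
# prints: remote command failed with status 3
```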

Run a Remote Process and Capture Output Locally edit

Output from programs run on the remote machine can be saved locally using a normal redirect. Here we run dmesg(8) on the remote machine:

$ ssh -l fred dmesg > dmesg.from.server.log

Interactive processes will be difficult or impossible to operate in that manner because no output will be seen. For interactive processes requiring any user input, output can be piped through tee(1) instead to send the output both to the file and to stdout. This example runs an anonymous FTP session remotely and logs the output locally.

$ ssh -l fred "ftp -a anotherserver" | tee ftp.log

It may be necessary to force pseudo-TTY allocation to get both input and output to be properly visible.

$ ssh -t -l fred "ftp -a anotherserver" | tee /home/fred/ftp.log

The simplest way to read data on one machine and process it on another is to use pipes.

$ ssh 'cat /etc/ntpd.conf' | diff /etc/ntpd.conf -
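The same pipeline shape can be tried locally. In this illustration sh -c stands in for the ssh(1) call, and the file contents are made up for the example:

```shell
# Compare a "remote" file against a local copy via a pipe; sh -c
# stands in for ssh here, and the contents are illustrative.
tmp=$(mktemp)
printf 'a\nb\n' > "$tmp"
sh -c 'printf "a\nb\n"' | diff "$tmp" - && echo "files match"
rm -f "$tmp"
```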

Run a Local Process and Capture Remote Data edit

Data can be produced on one system and used on another. This is different than tunneling X, where both the program and the data reside on the other machine and only the graphical interface is displayed locally. Again, the simplest way to read data on one machine and use it on another is to use pipes.

$ cat /etc/ntpd.conf | ssh 'diff /etc/ntpd.conf -'

In the case where the local program expects to read a file from the remote machine, a named pipe can be used in conjunction with a redirect to transfer the data. In the following example, a named pipe is created as a transfer point for the data. Then ssh(1) launches a remote process whose output to stdout is captured by a redirection on the local machine and sent into the named pipe, so that a local program can read the data from the named pipe.

In this particular example, it is important to add a filter rule to tcpdump(8) itself to prevent an infinite feedback loop if ssh(1) is connecting over the same interface as the data being collected. This loop is prevented by excluding either the SSH port, the host used by the SSH connection, or the corresponding network interface.

$ mkfifo -m 600 netdata

$ ssh -fq -i /home/fred/.ssh/key_rsa \
        'sudo tcpdump -lqi eth0 -w - "not port 22"' > netdata

$ wireshark -k -i netdata &

Any sudo(8) privileges for tcpdump(8) also need to operate without an interactive password, so great care and precision must be exercised to spell out in /etc/sudoers exactly which program and parameters are to be permitted and nothing more. The authentication for ssh(1) must also occur non-interactively, such as with a key and key agent. Once the configurations are set, ssh(1) is run and sent to the background after connecting. With ssh(1) in the background the local application is launched, in this case wireshark(1), a graphical network analyzer, which is set to read the named pipe as input.
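As a hedged sketch only, such an /etc/sudoers rule might look like the following; the account name, tcpdump path, and argument list are assumptions and must match the actual invocation exactly:

```
# Permit exactly this tcpdump invocation, and nothing else, without
# a password prompt (illustrative; adjust path and arguments).
fred ALL = NOPASSWD: /usr/sbin/tcpdump -lqi eth0 -w - not port 22
```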

On some systems, process substitution can be used to simplify the transfer of data between the two machines. Doing process substitution requires only a single line.

$ wireshark -k -i <( ssh -fq -i /home/fred/.ssh/key_rsa \
        'sudo tcpdump -lqi eth0 -w - "not port 22"' )

However, process substitution is not POSIX compliant and thus not portable across platforms. It is available in some shells, such as bash(1), but not in plain POSIX shells. So, for portability, use a named pipe.

Run a Remote Process While Either Connected or Disconnected edit

There are several different ways to leave a process running on the remote machine. If the intent is to come back to the process and check on it periodically then a terminal multiplexer is probably the best choice. For simpler needs there are other approaches.

Run a Remote Process in the Background While Disconnected edit

Many routine tasks can be set in motion and then left to complete on their own without needing to stay logged in. When running a remote process in the background, it is useful to spawn a shell just for that task.

$ ssh -t -l fred 'sh -c "tar zcf /backup/usr.tgz /usr/" &'

Another way is to use a terminal multiplexer. An advantage with them is being able to reconnect and follow the progress from time to time, or simply to resume work in progress when a connection is interrupted such as when traveling. Here tmux(1) reattaches to an existing session or else, if there is none, then creates a new one.

$ ssh -t -l fred "tmux a -d || tmux"

On older systems, screen(1) is often available. Here it is launched remotely to re-attach to a session if one is already running, or else to create a new one.

$ ssh -t -l fred "screen -d -R"

Once a screen(1) session is running, it is possible to detach it and close the SSH connection without disturbing the background processes it may be running. That can be particularly useful when hosting certain game servers on a remote machine. The terminal session can then be reattached in progress with the same two options.

$ ssh -t -l fred "screen -d -R"

There is more on using the terminal multiplexers tmux(1) and screen(1) below. In some environments it might be necessary to also use pagsh(1), especially with Kerberos; see below. Or nohup(1) might be of use.

Keeping Authentication Tickets for a Remote Process After Disconnecting edit

Authentication credentials are often deleted upon logout and thus any remaining processes no longer have access to whatever the authentication tokens were used for. In such cases, it is necessary to first create a new credential cache sandbox to run an independent process in before disconnecting.

$ pagsh
$ /usr/local/bin/

Kerberos and AFS are two examples of services that require valid, active tickets. Using pagsh(1) is one solution for those environments.

Automatically Reconnect and Restore an SSH Session Using tmux(1) or screen(1) edit

Active, running sessions can be restored after either an intentional or accidental break by using a terminal multiplexer. Here ssh(1) is to assume that the connection is broken after 15 seconds (three tries of five seconds each) of not being able to reach the server and to exit. Then the tmux(1) session is reattached or, if absent, created.

$ while ! ssh -t -o 'ServerAliveInterval 5' \
        'tmux attach -d || tmux new-session'
  do true; done

Then each time ssh(1) exits, the shell tries to connect with it again and when that happens to look for a tmux(1) session to attach to. That way if the TCP or SSH connections are broken, none of the applications or sessions stop running inside the terminal multiplexer. Here is an example for screen(1) on older systems.

$ while ! ssh -t -o 'ServerAliveInterval 5' \
        'screen -d -R'
  do true; done

The above examples are simplistic demonstrations: at their most basic, they resume a shell where it was after the TCP connection was broken. Both tmux(1) and screen(1) are capable of much more and are worth exploring, especially for travelers and telecommuters.

See also the section on "Public Key Authentication" to integrate keys into the process of automatically reconnecting.

Sharing a Remote Shell edit

Teaching, team programming, supervision, and creating documentation are some examples of when it can be useful for two people to share a shell. There are several options for read-only viewing as well as for multiple parties being able to read and write.

Read-only Monitoring or Logging edit

Pipes and redirects are a quick way to save output from an SSH session or to allow additional users to follow along read-only.

One sample use case is when a router needs to be reconfigured and is available via serial console. Say the router is down and a consultant must log in through another user's laptop to access the router's serial console, and it is necessary to supervise what is done or to help at certain stages. This is also very useful in documenting various activities, including configuration and installation.

Read-only Using tee(1) edit

Capture shell activity to a log file, which can optionally be watched in real time with tail(1). The utility tee(1), like a t-joint in plumbing, is used here to send output to two destinations: both stdout and a file.

$ ssh | tee /tmp/session.log

The resulting file can be monitored live in another terminal using tail(1) or after the fact with a pager like less(1).
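The effect of tee(1) can be seen locally: one copy of the input goes to stdout and another into the log file. A small illustration:

```shell
# tee writes its input both to stdout and to the log file.
log=$(mktemp)
echo "session output" | tee "$log"   # appears on stdout
cat "$log"                           # and again from the file
rm -f "$log"
```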

Force Serial Session with Remote Logging Using tee(1) edit

The tee(1) utility can capture output from any program that can write to stdout. It is very useful for walking someone at a remote site through a process, supervising, or building documentation.

This example uses chroot(8) to keep the choice of actions as limited as possible. Actually building the chroot jail is a separate task. Once it is built, the guest user is made a member of the group 'consult'. The serial connection for the test is on device ttyUSB0, a USB-to-serial converter, and cu(1) is used for the connection. tee(1) takes the output from cu(1) and saves a copy to a file for logging while the program is used. The following would go in sshd_config(5):

Match Group consult
        ChrootDirectory /var/chroot-test
        AllowTcpForwarding no
        X11Forwarding no
        ForceCommand cu -s 19200 -l /dev/ttyUSB0 | tee /var/tmp/cu.log

With that one or more people can follow the activity in cu(1) as it happens using tail(1) by pointing it at the log file on the remote server.

$ tail -f /var/tmp/cu.log

Or the log can be edited and used for documentation. It is also possible for advanced users of tmux(1) or screen(1) to allow read-only observers.

Scripting edit

It is possible to automate some of the connection. Make a script, such as /usr/local/bin/screeners, then use that script with the ForceCommand directive. Here is an example of a script that tries to reconnect to an existing session. If no sessions already exist, then a new one is created and automatically establishes a connection to a serial device.


#!/bin/sh
# try attaching to an existing screen session,
# or if none exists, make a new screen session

/usr/bin/screen -d -R || \
        /usr/bin/screen \
            /bin/sh -c "/usr/bin/cu -s 19200 -l /dev/ttyUSB0 | \
            /usr/bin/tee /tmp/consultant.log"

Interactive Sharing Using a Terminal Multiplexer edit

The terminal multiplexers tmux(1) or screen(1) can attach two or more people to the same session.[40] The session can either be read-only for some or read-write for all participants.

tmux(1) edit

If the same account is going to be sharing the session, then it's rather easy. In the first terminal, start tmux(1) where 'sessionname' is the session name:

$ tmux new-session -s sessionname

Then in the second terminal:

$ tmux attach-session -t sessionname

That's all that's needed if the same account is logged in from different locations and will share a session. For different users, you have to set the permissions on the tmux(1) socket so that both users can read and write it. That will first require a group which has both users as members.

Then after both accounts are in the shared group, in the first terminal, the one with the main account, start tmux(1) as before but also assign a name for the session's socket. Here 'sessionname' is the session name and 'sharedsocket' is the name of the socket:

$ tmux -S /tmp/shareddir/sharedsocket new-session -s sessionname

Then change the group of the socket and the socket's directory to a group that both users share in common. Make sure that the socket permissions allow the group to write the socket. In this example the shared group is 'foo' and the socket is /tmp/shareddir/sharedsocket.

$ chgrp foo /tmp/shareddir/
$ chgrp foo /tmp/shareddir/sharedsocket
$ chmod u=rwx,g=rx,o= /tmp/shareddir/
$ chmod u=rw,g=rw,o=  /tmp/shareddir/sharedsocket

Finally, have the second account log in and attach to the designated session using the shared socket.

$ tmux -S /tmp/shareddir/sharedsocket attach-session -t sessionname

At that point, either account will be able to both read and write to the same session.

screen(1) edit

If the same account is going to share a screen(1) session, then it's an easy procedure. In the one terminal, start a new session and assign a name to it. In this example, 'sessionname' is the name of the session:

$ screen -S sessionname

In the other terminal, attach to that session:

$ screen -x sessionname

If two different accounts are going to share the same screen(1) session, then the following extra steps are necessary. The first user does this when initiating the session:

$ screen -S sessionname
^A :multiuser on
^A :acladd user2

Then the second user does this:

$ screen -x user1/sessionname

In screen(1), if more than one user account is used, the aclchg command can remove write access for the other user: ^A :aclchg user -w "#". Note that screen(1) must be setuid for multiuser support. If it is not, you will get an error message reminding you when trying to connect the second user. You might also have to set the permissions for /var/run/screen to 755.

Display Remote Graphical Programs Locally Using X11 Forwarding edit

It is possible to run graphical programs on the remote machine and have them displayed locally by forwarding X11, the current implementation of the X Window system. X11 is used to provide the graphical interface on many systems. See the website for its history and technical details. It is built into most desktop operating systems. It is even distributed as part of Macintosh OS X, though there it is not the default method of graphical display.

X11 forwarding is off by default and must be enabled on both the SSH client and server if it is to be used.

X11 uses a client-server architecture: the X server is the part that does the actual display for the end user, while the various programs act as clients connecting to that server. Thus, by putting the client and server on different machines and forwarding the X11 connections, it is possible to run programs on other computers but have them displayed and available as if they were running on the user's own computer.

A note of caution is warranted. Allowing the remote machine to forward X11 connections gives it and its applications access to many devices and resources on the machine hosting the X server; regardless of the users' intent, whatever is accessible to the user account is exposed. So forwarding should only be done when the other machine, the user account, and its applications are trusted.

On the server side, to enable X11 forwarding by default, put the line below in sshd_config(5), either in the main block or a Match block:

X11Forwarding yes

On the client side, forwarding of X11 is also off by default, but it can be enabled in three different ways: in ssh_config(5), or at run time using either the -X or the -Y option.

$ ssh -l fred -X

X11 over SSH can be slow, however. If responsiveness is a factor, it may be worth considering a SOCKS proxy instead, or some other technology altogether, such as FreeNX.

Using ssh_config(5) to Specify X11 Forwarding edit

X11 forwarding can be enabled in /etc/ssh/ssh_config for all accounts for all outgoing SSH connections, or for just specific hosts, by configuring ssh_config(5). Note that in the client configuration the directive is ForwardX11, not the server-side X11Forwarding:

ForwardX11 yes

It is possible to apply ssh_config(5) settings to just one account in ~/.ssh/config and to limit forwarding by default to an individual host by hostname or IP address. Here it is enabled for a specific machine by hostname (the hostname is a placeholder):

Host server.example.org
        ForwardX11 yes

And here it is enabled for a specific machine by IP address (again a placeholder):

Host 192.168.0.101
        ForwardX11 yes

Likewise, use limited pattern matching to allow forwarding for a whole subdomain or a range of IP addresses. Here it is enabled for any host in a domain and for several ranges of addresses; the patterns shown are placeholders:

Host *.example.org
        ForwardX11 yes

Host 192.168.100.* 192.168.101.*
        ForwardX11 yes

Host 192.168.123.*
        ForwardX11 yes

Again, X11 is built into most desktop systems. There is an optional add-on for OS X, which has its roots in NeXTSTEP. X11 support may be missing from some particularly outdated legacy platforms, but even there it is often possible to retrofit it using the right tools, one example being Xming.

Unprivileged sshd(8) Service edit

It is possible to run sshd(8) on a high port using an unprivileged account. It will not be able to switch to other accounts, so login will only be possible to the same account that is running the unprivileged service.

There are three general steps to running an unprivileged SSH service on a normal, unprivileged account.

1) Prior to launching the SSH daemon under an unprivileged account, some unprivileged host keys must be created. These keys will be used to identify this service to connecting clients, especially on return visits. Their creation only needs to be done once per account and the passphrase must be left empty.

$ ssh-keygen -q -t ed25519 -N '' \
        -f /home/fred/.ssh/ssh_host_key_ed25519 \
        -C 'unprivileged SSH host key for fred'

If needed, do likewise for ECDSA and RSA keys. Be sure that the permissions are correct: the directory the keys are in, and its parents, should not be writable by other accounts, and the private SSH host keys themselves must not be readable by any other account.

2) When the keys are in place, test them by starting the SSH server manually. One way is to use the -h option to point to these alternative host key files. Note that because the account is unprivileged, only ports from 1024 up are available; in this example, port 2222 is set using the -p option.

$ /usr/sbin/sshd -D \
        -h /home/fred/.ssh/ssh_host_key_ed25519 \
        -h /home/fred/.ssh/ssh_host_key_ecdsa -p 2222

Verify that the new service is accessible from another system. In particular, make sure that any packet filters along the way, if there are any, are set to pass the right port; incoming connections will have to take the alternative port into account in normal use. Another approach would be to make the unprivileged SSH service available as an onion service. See the chapter on Proxies and Jump Hosts about that.

Note that the above examples use the default configuration file for the daemon. If a different configuration file is needed for special settings, then add the -f option to the formula.

$ /usr/sbin/sshd -D -f /home/fred/.ssh/sshd_config

Specific configuration directives can be set for the unprivileged account by using an alternative configuration file. Unprivileged host keys can be identified in that file along with a designated alternative listening port and many other customizations. It might also be helpful to log the unprivileged service separately from any other SSH services already on the same system by changing the SyslogFacility directive to something other than AUTH which is the default.
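As an illustrative sketch only, such an alternative configuration file might contain something like the following; the paths, port, and log facility here are assumptions, not requirements:

```
# Sketch of a per-account sshd_config for an unprivileged service.
Port 2222
HostKey /home/fred/.ssh/ssh_host_key_ed25519
HostKey /home/fred/.ssh/ssh_host_key_ecdsa
PidFile /home/fred/.ssh/sshd.pid
SyslogFacility LOCAL5
```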

3) Automate the startup, if that is a goal.

The -D option keeps the process from detaching and becoming a daemon. During a test that might be useful, but in actual use later it might not be; it depends on how the process is launched.

What Unprivileged sshd(8) Processes Look Like edit

Because an unprivileged account is being used, privilege separation will not occur and all the child processes will run in the same account as the original sshd(8) process. Here is how that can look while listening on port 2222 for an incoming connection using an unprivileged account, using the default configuration file with a few overrides:

$ pgrep -d , sshd | COLUMNS=200 xargs ps -w -o user,pid,args -p
fred     1997992 sshd: /usr/sbin/sshd -h /home/fred/.ssh/ssh_host_key_ed25519 -h /home/fred/.ssh/ssh_host_key_ecdsa -p 2222 [listener] 0 of 10-100 startups

Then once someone has connected but not finished logging in yet, it can look like this. A monitor process is forked; even though it is labeled [priv], it is not privileged and is still running in the same unprivileged account as the parent process. It in turn forks a child, here process 1998150, to manage the authentication:

$ pgrep -d , sshd | COLUMNS=200 xargs ps -w -o user,pid,args -p
fred     1997992 sshd: /usr/sbin/sshd -h /home/fred/.ssh/ssh_host_key_ed25519 -h /home/fred/.ssh/ssh_host_key_ecdsa -p 2222 [listener] 1 of 10-100 startups
fred     1998149 sshd: fred [priv]
fred     1998150 sshd: fred [net]

Finally, once login has been successful, that unprivileged process is dropped and a new unprivileged process spun up to manage the interactive session.

$ pgrep -d , sshd | COLUMNS=200 xargs ps -w -o user,pid,args -p
fred     1997992 sshd: /usr/sbin/sshd -h /home/fred/.ssh/ssh_host_key_ed25519 -h /home/fred/.ssh/ssh_host_key_ecdsa -p 2222 [listener] 0 of 10-100 startups
fred     1998149 sshd: fred [priv]
fred     1998410 sshd: fred@pts/20

Above, process 1998410 is managing the interactive session.

Locking Down a Restricted Shell edit

A restricted shell sets up a more controlled environment than what is normally provided by a standard interactive shell. It behaves almost identically to a standard shell, except that only whitelisted capabilities are enabled and the rest are disabled. The restrictions include, but are not limited to, the following:

  • The SHELL, ENV, and PATH variables cannot be changed.
  • Programs can't be run with absolute or relative paths.
  • Redirections that create files can't be used (specifically >, >|, >>, <>).

Common high-end shells like bash(1), ksh(1), and zsh(1) all can be launched in restricted mode. See the manual pages for the individual shells for the specifics of how to invoke restrictions.
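The first restriction can be seen directly, assuming bash(1) is installed: in restricted mode the PATH variable is read-only, so an attempt to change it produces an error instead of taking effect.

```shell
# In restricted mode, bash marks PATH read-only; the assignment below
# fails with an error message rather than changing the variable.
msg=$(echo 'PATH=/tmp' | bash -r 2>&1 || true)
echo "$msg"
```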

Even with said restrictions, there are several ways by which it is trivial to escape from a restricted shell. If normal shells are available anywhere in the path, they can be launched instead. If regular programs in the available path provide shell escapes, those can be used to reach a full shell. Finally, if sshd(8) is configured to allow arbitrary programs to be run independently of the shell, a full shell can be launched that way. So there is more to safely using restricted shells than just setting the account's shell to /bin/rbash and calling it a day; several steps are needed to make escaping the restrictions as difficult as possible, especially over SSH.

(The following steps assume familiarity with the appropriate system administration tools and their use. Their selection and use are not covered here.)

First, create a directory containing a handful of symbolic links that point to white-listed programs. The links point to the programs that the account should be able to run when the directory is added to the PATH environment variable. These programs should have no shell escape capabilities and, obviously, they should not themselves be unrestricted shells.

If you want to prevent exploration of the system at large, remember to also lock the user into a chroot or jail. Even without programs like ls(1) and cat(1), exploration is still possible (see below: "ways to explore without ls(1) and cat(1)").

Symbolic links are used because the originals are, hopefully, maintained by package management software and should not be moved. Hard links cannot be used if the original and whitelisted directories are in separate filesystems. Hard links are necessary if you set up a chroot or jail that excludes the originals.

$ ls -l /usr/local/rbin/
total 8
lrwxr-xr-x  1 root  wheel    22 Jan 17 23:08 angband -> /usr/local/bin/angband
lrwxr-xr-x  1 root  wheel     9 Jan 17 23:08 date -> /bin/date
-rwxr-xr-x  1 root  wheel  2370 Jan 17 23:18 help
lrwxr-xr-x  1 root  wheel    12 Jan 17 23:07 man -> /usr/bin/man
lrwxr-xr-x  1 root  wheel    13 Jan 17 23:09 more -> /usr/bin/more
lrwxr-xr-x  1 root  wheel    28 Jan 17 23:09 nethack -> /usr/local/bin/nethack-3.4.3
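A directory like the one above can be built with mkdir(1) and ln(1). The sketch below uses a scratch directory so the commands are safe to run anywhere; in practice the directory would be something like /usr/local/rbin, owned by root:

```shell
# Build a whitelist directory of symbolic links to approved programs.
# A scratch directory stands in for /usr/local/rbin here.
rbin=$(mktemp -d)
ln -s /bin/date "$rbin/date"
ln -s /usr/bin/man "$rbin/man"
ls -l "$rbin"
```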

Next, create a minimal .profile for that account. Set its owner to 'root'. Do this for its parent directory too (which is the user's home directory). Then, allow the account's own group to read both the file and the directory.

$ cd /home/fred

$ cat .profile

$ ls -ld . .profile
drwxr-xr-x  3 root  fred    512 Jan 17 23:20 .
-rw-r--r--  1 root  fred     48 Jan 17 23:20 .profile
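The contents of the .profile are not shown above; as an illustrative sketch, a minimal one might contain nothing but a PATH assignment pointing at the whitelist directory:

```
# Minimal .profile for a restricted account (illustrative).
PATH=/usr/local/rbin
export PATH
```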

Next, create a group for the locked down account(s) and populate it. Here the account is in the group games and will be restricted through its membership in that group.

$ groups fred
fred games

Next, lock down SSH access for that group or account through use of a ForceCommand directive in the server configuration and apply it to the selected group. This is necessary to prevent trivial circumvention through the SSH client by calling a shell directly, such as with ssh -t /bin/sh or similar. Remember to disable forwarding if it is not needed. For example, the following can be appended to sshd_config(5) so that any account in the group 'games' gets a restricted shell no matter what they try with the SSH client.

Match Group games
        X11Forwarding no
        AllowTcpForwarding no
        ForceCommand rksh -l

Note that the restricted shell is invoked with the -l option by the ForceCommand so that it will be a login shell that reads and executes the contents of /etc/profile and $HOME/.profile if they exist and are readable. This is necessary to set the custom PATH environment variable. Again, be sure that $HOME/.profile is not in any way editable or overwritable by the restricted account. Also note that this disables SFTP access by that account, which prevents quite a bit of additional mischief.

Last but not least, set the account's login shell to the restricted shell. Include the full path to the restricted shell. It might also be necessary to add it to the list of approved shells found in /etc/shells first.

Beware: ways to explore without ls(1) and cat(1) edit

# To see a list of files in the current working directory:
$ echo *

# To see the contents of a text file:
$ while read j; do echo "$j"; done  <.profile

Tunnels edit


In tunneling, or port forwarding, a local port is connected to a port on a remote host or vice versa. So connections to the port on one machine are in effect connections to a port on the other machine.

The ssh(1) options -f (go to background), -N (do not execute a remote program) and -T (disable pseudo-tty allocation) can be useful for connections that are used only for creation of tunnels.

Tunneling edit

In regular port forwarding, connections to a local port are forwarded to a port on a remote machine. This is a way of securing an insecure protocol or of making a remote service appear local. Here VNC is forwarded in two steps. First, make the tunnel:

$ ssh -L 5901:localhost:5901 -l fred

In that way, connections made to the forwarded port on the local machine will in effect be connecting to the remote machine. The second step is then to point the local VNC client at port 5901 on localhost.

Multiple tunnels can be specified at the same time. The tunnels can be of any kind, not just regular forwarding. See the next section below for reverse tunnels. For dynamic forwarding see the section Proxies and Jump Hosts.

$ ssh -L 5901:localhost:5901 \
      -L 5432:localhost:5432 \
      -l fred

If a connection is only used to create a tunnel, then it can be told not to execute any remote programs (-N), making it a non-interactive session, and also to drop to the background (-f).

$ ssh -fN -L 3128:localhost:3128 -l fred

Note that -N will work even if authorized_keys forces a particular program using the command="..." option. So a connection using -N will stay open instead of running a program and then exiting.

The three connections above could be saved in the SSH client's configuration file, ~/.ssh/config, and even given shortcut names.

Host desktop
        User fred
        LocalForward 5901 localhost:5901

Host postgres
        User fred
        LocalForward 5901 localhost:5901
        LocalForward 5432 localhost:5432

Host server
        User fred
        ExitOnForwardFailure no
        LocalForward 3128 localhost:3128

Host *
        ExitOnForwardFailure yes

With those settings, the tunnels listed are added automatically when connecting to the hosts desktop, postgres, or server. The catch-all configuration at the end applies to any of the above hosts which have not already set ExitOnForwardFailure to 'no', making the client refuse to connect if a tunnel cannot be made. The first obtained value for any given configuration directive is used, but the file's contents can be overridden with run-time options passed on the command line.

Tunneling Via A Single Intermediate Host edit

Tunneling can go via one intermediate host to reach a second host, and the latter does not need to be on a publicly accessible network. However, the target port on the second remote machine does have to be accessible on the same network as the first. Say the tunnel should reach port 80 on a target machine via an intermediate host: both must be on the same network, and, in addition, the intermediate host has to be directly accessible to the client machine running ssh(1). The hostnames below are placeholders:

$ ssh -fN -L 1880:target.example.org:80 -l fred proxy.example.org

Thus, once the tunnel is made, to connect to port 80 on the target via the intermediate host, connect to port 1880 on localhost. This method works for one or two hosts. It is also possible to chain multiple hosts, using different methods.

Securing a Hop, Tunneling Via One Or More Intermediate Hosts edit

Here, the idea is to limit the ability of a group of users to the bare minimum needed to pass through a jump host yet still be able to forward ports onward to other machines. If the account is sufficiently locked down then the bastion can only be used for forwarding and not shell access, scripts, or even SFTP. The following settings on the bastion host in sshd_config(5) prevent either shell access or SFTP but still allow port forwarding.

Match Group tunnelers
        ForceCommand /bin/false
        PasswordAuthentication no
        ChrootDirectory %h
        PermitTTY no
        X11Forwarding no
        AllowTcpForwarding yes
        PermitTunnel no
        Banner none

Note that their home directories, but not the files within them, must be owned by root and writable only by root, because of the ChrootDirectory configuration directive there in sshd_config(5). Also, because of the PasswordAuthentication directive, keys will have to be set up in ~/.ssh/authorized_keys in the home directory, if an alternate location is not already configured.

$ ssh -N -L 9980:localhost:80 -J fred@

In that way, port 9980 on the client is directed via the bastion through to port 80 on the inner host.

In cases where the bastion must have a reverse tunnel from the inner host in order to reach it, the same method works, but with the prerequisite of first establishing a reverse tunnel from the inner host to the bastion.

For more about passing through intermediate computers, see the Cookbook section on Proxies and Jump Hosts.

Finding The Process ID (PID) Of A Tunnel Which Is In The Background edit

When a tunneled connection is sent to the background using the -f option for the client, there is not currently an automatic way to find the process ID (PID) of the task sent to the background. Background processes are often used for port forwarding or reverse port forwarding. Here is an example of port forwarding, also called tunneling. The connection is made and then the client goes away, leaving the tunnel available in the background, connecting port 2194 on the local host to port 194 on the remote system's localhost.

$ ssh -Nf -L 2194:localhost:194 fred@

The special $! variable remains empty, even if $? reports success or failure of the action. The reason is that the shell's job control did not put the client into the background. Instead the client runs in the foreground for a moment and then exits normally, after leaving a different process to run in the background via a fork. The process ID of the original client vanishes since that client is gone.
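
This behavior can be demonstrated locally with any self-forking command; in the sketch below, a foreground subshell that backgrounds sleep(1) stands in for ssh -f, so nothing SSH-specific is assumed:

```shell
# $! records only jobs this shell itself put in the background.
sleep 1 &
bg=$!                 # PID of the backgrounded sleep is recorded

# A command that forks internally and exits, as ssh -f does, runs in the
# foreground from the shell's point of view, so $! is not updated.
( sleep 1 & )         # the inner sleep keeps running, detached
after=$!

[ "$after" = "$bg" ] && echo "unchanged"
```

The final test prints "unchanged" because $! still refers to the first sleep, not to the process left behind by the subshell.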

Finding the process ID usually takes at least two steps. Some of the ways to retroactively identify the process involve trying ps(1) and rummaging through the output for all that account's processes, but that is unnecessary effort. If the background SSH client is the most recent one, then pgrep(1) can be used; otherwise the output needs to be comma-delimited and fed into ps(1) via xargs(1) or process substitution.

$ ps uw | less

$ pgrep -n -x ssh

$ pgrep -d, -x ssh | xargs ps -p

$ ps -p $(pgrep -d, -x ssh)

Some variations on the above might be needed depending on operating system if the -d option is not supported.

$ pgrep -x ssh | xargs -n 1 ps -o user,pid,ppid,args -p | awk 'NR==1 || $3==1'
fred     97778     1 ssh -fN -L 8008:localhost:80 fred@
fred     14026     1 ssh -fN -L 8183:localhost:80 fred@
fred     79522     1 ssh -fN -L 8228:localhost:80 fred@
fred     49773     1 ssh -fN -L 8205:localhost:80

Either way, note that all the connections running in the background have a Parent Process ID (PPID) of 1, the process control initialization system for the operating system.
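
The name-based lookups above can be tried locally with any backgrounded process; the following sketch uses sleep(1) as a stand-in for a detached ssh client:

```shell
# Find a background process by exact name; sleep(1) stands in for ssh.
sleep 30 &
pid=$!

# pgrep -x matches the process name exactly; confirm our PID is listed.
pgrep -x sleep | grep -qx "$pid" && found=yes

kill "$pid"           # clean up the stand-in process
echo "found=$found"
```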

With this shortcoming in mind, a proactive approach can be taken using the ControlMaster and ControlPath configuration directives in order to leave behind a socket which can be queried for the background task's process ID.

$ ssh -M -S ~/.ssh/sockets/pid.%C -fN -L 5901:localhost:5901 fred@

$ ssh -S ~/.ssh/sockets/pid.%C -O check

The -M option causes the client to go into master mode for multiplexing, using the socket designated by the -S option. Then, after -f forks the client into the background, the control command check uses the socket to confirm that the master process is running and to report its process ID.

It is a good idea for the socket to be in an isolated directory not readable or writable by other accounts. Aside from the complexity, a noticeable downside is that it can be possible for the socket to be reused for additional connections. See the Cookbook section on Multiplexing for more about the risks and additional uses.

Reverse Tunneling edit

A reverse tunnel forwards connections in the opposite direction of a regular tunnel, that is to say, opposite to the direction in which the SSH session is initiated. With remote forwarding, as it is also called, an SSH session begins from the local host to a remote host while a port is forwarded from the remote host to one on the local host. There are two stages in using reverse tunneling: first, connect from endpoint A to endpoint B using SSH with remote forwarding enabled; second, connect other systems to the designated port on endpoint B, from which they are then forwarded to endpoint A. So while system A initiated the SSH connection to system B, the connections made to the designated port on B are sent over the reverse tunnel to A. Once the SSH connection is made, the reverse tunnel can be used on endpoint B the same as a regular tunnel, even though endpoint A initiated the SSH connection.

Remote forwarding is a method which can be used to forward SSH over SSH in order to work on an otherwise inaccessible system, such as an always-on single-board computer (SBC) behind a home router. First, open an SSH session from the inaccessible system to another which is accessible, including a designated reverse tunnel. In this example, while the SSH connection runs from a local system (endpoint A) to a remote system (endpoint B), that connection carries a reverse tunnel from port 2022 on the remote system (endpoint B) back to port 22 on the local system:

$ ssh -fNT -R 2022:localhost:22 -l fred

Lastly, using the example above, a connection is made on the remote machine to the reverse tunnel on port 2022. Thus, even though the connection is made to port 2022 on localhost there, the packets end up on port 22 of the system which initiated the initial SSH connection carrying the reverse tunnel.

$ ssh -p 2022 -l fred localhost

Thus that example allows SSH access to an otherwise inaccessible system. SSH goes out, makes a reverse tunnel, and then the second system can connect at will to the first via SSH for as long as the tunnel persists. If keys and a loop are used to regenerate the SSH connection with the remote forwarding, then the reverse tunnel can be maintained automatically.

The next example makes VNC available over SSH from an otherwise inaccessible system via a second, reachable system. Starting from the system with a running VNC server, reverse forward the port for the first VNC display locally to the third VNC display on the reachable system:

$ ssh -fNT -R 5903:localhost:5901 -l fred

Then on that system, people can connect to the system's localhost address on its third VNC display and be patched through to the originating system:

$ xvncviewer :3

That also is an example of how the forwarded ports don't have to be the same.

Remote forwarding can be included in the SSH client's configuration file using the RemoteForward directive.
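
For example, the equivalent of the earlier port 2022 example could be sketched in ssh_config(5) roughly as follows; the host nickname here is hypothetical:

Host relay
        RemoteForward 2022 localhost:22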

A common use-case for reverse tunneling is when you have to access a system or service which is behind either NAT or a firewall or both and thus incoming SSH connections are blocked, but you have direct access to a second system outside the firewall which can accept incoming connections. In such cases it is easy to make a reverse tunnel from the internal system behind the firewall to the second system on the outside. Once the SSH connection has made the reverse tunnel, to connect to the internal system from outside, other systems can connect to the forwarded port on the remote system. The remote system on the outside then acts as a relay server to forward connections to the initiating system on the inside.

Adding or Removing Tunnels within an Established Connection edit

It is possible to add or remove tunnels, reverse tunnels, and SOCKS proxies to or from an existing connection using escape sequences. The default escape character is the tilde (~) and the full range of options is described in the manual page for ssh(1). Escape sequences only work if they are the first characters entered on a line and are followed by a return. Adding or removing a tunnel in an existing connection is done on the command line opened with the ~C escape sequence.

To add a tunnel in an active SSH session, use the escape sequence to open a command line in SSH and then enter the parameters for the tunnel:

-L 2022:localhost:22

To remove a tunnel from an active SSH session is almost the same. Instead of -L, -R, or -D we have -KL, -KR, and -KD plus the port number. Use the escape sequence to open a command line in SSH and then enter the parameters for removing the tunnel.
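
For example, to cancel the local forward created earlier, open the command line with ~C and enter the cancel option with its listening port:

-KL 2022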


Adding or Removing Tunnels within a Multiplexed Connection edit

There is an additional option for forwarding when multiplexing. More than one SSH connection can be multiplexed over a single TCP connection. Control commands can then be passed to the master process to add or drop port forwarding.

First a master connection is made and a socket path assigned.

$ ssh -S '/home/fred/.ssh/%h:%p' -M

Then using the socket path, it is possible to add port forwarding.

$ ssh -O forward -L 2022:localhost:22 -S '/home/fred/.ssh/%h:%p'

Since OpenSSH 6.0 it is possible to cancel specific port forwarding using a control command.

$ ssh -S  "/home/fred/.ssh/%h:%p" -O cancel -L 2022:localhost:22

For more about multiplexing, see the Cookbook section on Multiplexing.

Restricting Tunnels to Specific Ports edit

By default, port forwarding will allow forwarding to any port if it is allowed at all. The way to restrict which ports a user can use in forwarding is to apply the PermitOpen option on the server side either in the server's configuration or inside the user's public key in authorized_keys. For example, with this setting in sshd_config(5) all users can forward only to port 7654 on the server, if they try forwarding:

PermitOpen localhost:7654

Multiple ports may be specified on the same line if separated by whitespace.

PermitOpen localhost:7654 localhost:3306

If the client tries to forward to a disallowed port, there will be a warning message that includes the text "open failed: administratively prohibited: open failed" while the connection otherwise continues as normal. However, even if the client has ExitOnForwardFailure in its configuration, the connection will still succeed, despite the warning message.

However, if shell access is available, it is possible to run other port forwarders, so without further restrictions, PermitOpen is more of a reminder or recommendation than a restriction. But for many cases, such a reminder might be enough.

For reverse tunnels, the PermitListen option is available instead. It determines which port on the remote system is available. So the following, for example, allows ssh -R 2022:localhost:xxxx, where xxxx can be any available port number at the origin of the reverse tunnel, but only 2022 on the far end of the tunnel.

PermitListen localhost:2022

The PermitOpen or PermitListen options can be used as part of one or more Match blocks if forwarding options need to vary depending on various combinations of user, group, client address or network, server address or network, and/or the listening port used by sshd(8). If using the Match criteria to selectively apply rules for port forwarding, it is also possible to prevent the account from getting an interactive shell by setting PermitTTY to no. That will prevent the allocation of a pseudo-terminal (PTY) on the server and thus prevent shell access, but allow other programs to still be run unless an appropriate forced command is specified in addition.

Match Group mariadbusers
        PermitOpen localhost:3306
        PermitTTY no
        ForceCommand /usr/bin/true

With that stanza in sshd_config(5) it is only possible to connect by adding the -N option to avoid executing a remote command.

$ ssh -N -L 3306:localhost:3306

The -N option can be used alone or with the -f option which drops the client to the background once the connection is established.

Without the ForceCommand option in a Match block in the server configuration, if an attempt is made to get a PTY by a client that is blocked from getting one, the warning "PTY allocation request failed on channel n" will show up on the client, with n being the channel number, but otherwise the connection will succeed without a remote shell and the port(s) will still be forwarded. Various programs, including shells, can still be specified by the client, they just won't get a PTY. So to really prevent access to the system other than forwarding, a forced command is needed. The tool true(1) comes in handy for that. Note that true(1) might be in a different location on different systems.

Limiting Port Forwarding Requests Using Keys Only edit

The following authorized_keys line shows the PermitOpen option prepended to a key in order to limit a user connecting with that particular key to forwarding to just port 8765:

permitopen="localhost:8765" ssh-ed25519 AAAAC3NzaC1lZDI1NT...

Multiple PermitOpen options may be applied to the same public key if they are separated by commas and thus a key can allow multiple ports.
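
For example, a single key could be limited to two forwarding targets; the ports below and the truncated key material are placeholders:

permitopen="localhost:8765",permitopen="localhost:3306" ssh-ed25519 AAAAC3NzaC1lZDI1NT...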

By default, shell access is allowed. With shell access it is still possible to run other port forwarders. The no-pty option inside the key can facilitate making a key that only allows forwarding and not an interactive shell if combined with an appropriate forced command using the command option. Here is an example of such a key as it would be listed in authorized_keys:

no-pty,permitopen="localhost:9876",command="/usr/bin/true" ssh-ed25519 AAAAC3NzaC1lZDI1NT...

The no-pty option blocks an interactive shell. The client will still connect to the remote server, and forwarding will still be allowed, but the server will respond with an error including the message "PTY allocation request failed on channel n". But as mentioned in the previous subsection, there are still a lot of ways around that, and adding the command option hinders them.

This method is awkward to lock down. If the account has write access in any way, directly or indirectly, to the authorized_keys file, then it is possible for the user to add a new key or overwrite the existing key(s) with more permissive options. In order to prevent that, the server has to be configured to look for the file in a location outside of the user's reach. See the section on Public Key Authentication for details on that. The methods described above in the previous subsection using sshd_config(5) might be more practical in many cases.

Automated Backup edit

Using OpenSSH with keys can facilitate secure automated backups. rsync(1)[41], tar(1), and dump(8) are the foundation for most backup methods. It's a myth that remote root access must be allowed. If root access is needed, sudo(8) works just fine or, in the case of zfs(8), the OpenZFS Delegation System. Remember that until the backup data has been tested and shown to restore reliably it does not count as a backup copy.

Backup with rsync(1) edit

rsync(1) is often used to back up both locally and remotely. It is fast and flexible and copies incrementally so only the changes are transferred, thus avoiding wasting time re-copying what is already at the destination. It does that through use of its now famous algorithm. When working remotely, it needs a little help with the encryption and the usual practice is to tunnel it over SSH.

The rsync(1) utility now defaults to using SSH and has since 2004[42]. Thus the following connects over SSH without having to add anything extra:

$ rsync -a \

But use of SSH can still be specified explicitly if additional options must be passed to the SSH client:

$ rsync -a -e 'ssh -v' \ \

For some types of data, transfer can sometimes be expedited greatly by using rsync(1) with compression, -z, if the CPUs on both ends can handle the extra work. However, it can also slow things down. So compression is something which must be tested in place to find out one way or the other whether adding it helps or hinders.

Rsync with Keys edit

Since rsync(1) uses SSH by default it can even authenticate using SSH keys by using the -e option to specify additional options. In that way it is possible to point to a specific SSH key file for the SSH client to use when establishing the connection.

$ rsync --exclude '*~' -avv \
    -e 'ssh -i ~/.ssh/key_bkup_rsa' \ \

Other configuration options can also be sent to the SSH client in the same way if needed, or via the SSH client's configuration file. Furthermore, if the key is first added to an agent, then the key's passphrase only needs to be entered once. This is easy to do in an interactive session within a modern desktop environment. In an automated script, the agent will have to be set up with explicit socket names passed along to the script and accessed via the SSH_AUTH_SOCK variable.

Root Level Access for rsync(1) with sudo(8) edit

Sometimes the backup process needs access to a different account from the one which can log in. That other account is often root, which for reasons of least privilege is usually denied direct access via SSH. rsync(1) can invoke sudo(8) on the remote machine, if needed.

Say you're backing up from the server to the client. rsync(1) on the client uses ssh(1) to make the connection to rsync(1) on the server. rsync(1) is invoked from the client with -v passed to the SSH client to see exactly what parameters are being passed to the server. Those details will be needed in order to incorporate them into the server's configuration for sudo(8). Here the SSH client is run with a single level of increased verbosity in order to show which options must be used:

$ rsync \
  -e 'ssh -v \
          -i ~/.ssh/key_bkup_rsa  \
          -t             \
          -l bkupacct'   \
  --rsync-path='sudo rsync' \ 
  --delete   \
  --archive  \
  --compress \
  --verbose  \
  bkupacct@server:/var/www/ \

There the argument --rsync-path tells the server what to run in place of rsync(1). In this case it runs sudo rsync. The argument -e says which remote shell tool to use. In this case it is ssh(1). For the SSH client being called by the rsync(1) client, -i says specifically which key to use. That is independent of whether or not an authentication agent is used for ssh keys. Having more than one key is a possibility, since it is possible to have different keys for different tasks.

You can find the exact setting(s) to use in /etc/sudoers by running the SSH client in verbose mode (-v) on the client. Be careful when working with patterns not to match more than is safe.

Adjusting these settings will most likely be an iterative process. Keep making changes to /etc/sudoers on the server while watching the verbose output until it works as it should. Ultimately /etc/sudoers will end up with a line allowing rsync(1) to run with a minimum of options.

Steps for rsync(1) with Remote Use of sudo(8) Over SSH edit

These examples are based on fetching data from a remote system. That is to say that the data gets copied from /source/directory/ on the remote system to /destination/directory/ locally. However, the steps will be the same for the reverse direction, but a few options will be placed differently and --sender will be omitted. Either way, copy-paste from the examples below won't work.

Preparation: Create a single purpose account to use only during the backups, create a pair of keys to use only for that account, then make sure you can log in to that account with ssh(1) with and without those keys.

$ ssh -i ~/.ssh/key_bkup_ed25519

The account on the server is named 'bkupacct' and the private Ed25519 key is ~/.ssh/key_bkup_ed25519 on the client. On the server, the account 'bkupacct' is a member of the group 'backups'. See the section on Public Key Authentication if necessary.

The public key, ~/.ssh/, must be copied to the account 'bkupacct' on the remote system and placed in ~/.ssh/authorized_keys there. Then it is necessary that the following directories on the server are owned by root, belong to the group 'backups', and are group-readable but not group-writable, and definitely not world-readable: ~ and ~/.ssh/. The same goes for the file ~/.ssh/authorized_keys there. (This also assumes you are not using ACLs.) However, this is only one way of many to set permissions on the remote system:

$ sudo chown root:bkupacct ~
$ sudo chown root:bkupacct ~/.ssh/
$ sudo chown root:bkupacct ~/.ssh/authorized_keys
$ sudo chmod u=rwx,g=rx,o= ~
$ sudo chmod u=rwx,g=rx,o= ~/.ssh/
$ sudo chmod u=rwx,g=r,o=  ~/.ssh/authorized_keys

Now the configuration can begin.

Step 1: Configure sudoers(5) so that rsync(1) can work with sudo(8) on the remote host. In this case data is staying on the remote machine. The group 'backups' will temporarily need full access in order to find and set specific options used later in locking this down.

%backups ALL=(root:root) NOPASSWD: /usr/bin/rsync

That is a transitory step, and it is important that the line not be left in place as-is for any length of time.

However, while it is in place, ensure that rsync(1) works with sudo(8) by testing it with the --rsync-path option.

$ rsync --rsync-path='sudo rsync' \
-aHv /destination/directory/

The transfer should run without errors, warnings, or extra password entry.

Step 2: Next, do the same transfer again but using the key for authentication to make sure that the two can be used together.

$ rsync -e 'ssh -i ~/.ssh/key_bkup_ed25519' --rsync-path='sudo rsync' \
-aHv /destination/directory/

Again, see the section on Public Key Authentication if necessary.

Step 3: Now collect the connection details. They are needed to tune sudoers(5) appropriately.

$ rsync -e 'ssh -E ssh.log -v -i ~/.ssh/key_bkup_ed25519' \
--rsync-path='sudo rsync' \
-aHv /destination/directory/

$ grep -i 'sending command' ssh.log

The second command, the one with grep(1), ought to produce something like the following:

debug1: Sending command: rsync --server --sender -vlHogDtpre.iLsfxCIvu . /source/directory/

The long string of letters and the directory are important to note because those will be used to tune sudoers(5) a little. Remember that in these examples, the data gets copied from /source/directory/ on the remote machine to /destination/directory/ locally.

Here are the settings which match the formula above, assuming the account is in the group backups:

%backups ALL=(root:root) NOPASSWD: /usr/bin/rsync --server --sender -vlHogDtpre.iLsfxCIvu . /source/directory/

That line adjusts sudoers(5) so that the backup account has enough access to run rsync(1) as root but only in the directories it is supposed to run in and without free-rein on the system.

More refinements may come later, but those are the basics for locking sudoers(5) down. At this point you are almost done, although the process can be automated much further. Be sure that the backed up data is not accessible to others once stored locally.

Step 4: Test rsync(1) with sudo(8) over ssh(1) to verify that the settings made in sudoers(5) are correct.

$ rsync -e 'ssh -i ~/.ssh/key_bkup_ed25519' --rsync-path='sudo rsync' \
-aHv /destination/directory/

The backup should run correctly at this point.

Step 5: Finally it is possible to lock that key into just the one task by prepending restrictions using the command="..." option in the authorized_keys file. The explanation for that is found in sshd(8).

command="/usr/bin/rsync --server --sender -vlHogDtpre.iLsfxCIvu . ./source/directory" ssh-ed25519 AAAAC3Nz...aWi

Thereafter that one key functions only for the backup. It's an extra layer upon the settings already made in the sudoers(5) file.

Thus you are able to do automated remote backup using rsync(1) with root level access yet avoiding remote root login. Nevertheless keep close tabs on the private key since it can still be used to fetch the remote backup and that may contain sensitive information anyway.

From start to finish, the process requires a lot of attention to detail, but it is quite doable if taken one step at a time. Setting up backups going the reverse direction is quite similar. When going from local to remote, the --sender option will be omitted and the directories will be different.
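
For the reverse direction, the sudoers(5) line would correspondingly drop --sender and name the remote destination directory. The following is only a hypothetical sketch; the exact option string must still be read from the verbose log as in Step 3, since it varies with the rsync(1) options and version in use:

%backups ALL=(root:root) NOPASSWD: /usr/bin/rsync --server -vlHogDtpre.iLsfxCIvu . /destination/directory/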

Other Implementations of the Rsync Protocol edit

openrsync(1) is a clean-room reimplementation[43] of version 27 of the Rsync protocol as supported by rsync(1). It has been in OpenBSD's base system since OpenBSD version 6.5. It is invoked under a different name, so if it is on the remote system while rsync(1) is on the local system, the --rsync-path option must point to it by name:

$ rsync -a -v -e 'ssh -i key_rsa' \
	--rsync-path=/usr/bin/openrsync \ \

Going the other direction, starting with openrsync(1) and connecting to rsync(1) on the remote system, needs no such tweaking.

Backup Using tar(1) edit

A frequent choice for creating archives is tar(1). But since it copies whole files and directories, rsync(1) is usually much more efficient for updates or incremental backups.

The following will make a tarball of the directory /var/www/ and send it via stdout on the local machine into stdin on the remote machine through a pipe into ssh(1), where it is then directed into a file called backup.tar. Here tar(1) runs on the local machine and stores the tarball remotely:

$ tar cf - /var/www/ | ssh -l fred 'cat > backup.tar'

There are almost limitless variations on that recipe:

$ tar zcf - /var/www/ /home/*/www/ \
	|  ssh -l fred 'cat > $(date +"%Y-%m-%d").tar.gz'

That example does the same, but also gets user WWW directories, compresses the tarball using gzip(1), and labels the resulting file according to the current date. It can be done with keys, too:

$ tar zcf - /var/www/ /home/*/www/ \
	|  ssh -i key_rsa -l fred 'cat > $(date +"%Y-%m-%d").tgz'

Going the other direction is just as easy, having tar(1) read what is on a remote machine and storing the tarball locally.

$ ssh 'tar zcf - /var/www/' >  backup.tgz

Or here is a fancier example of running tar(1) on the remote machine but storing the tarball locally.

$ ssh -i key_rsa -l fred 'tar jcf - /var/www/ /home/*/www/' \
	> $(date +"%Y-%m-%d").tar.bz2

So in summary, the secret to using tar(1) for backup is the use of stdout and stdin to effect the transfer through pipes and redirects.
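
Those stdout/stdin mechanics can be exercised locally without any remote host by letting cat(1) stand in for the ssh command; all paths below are illustrative temporary directories:

```shell
# Sample data to archive.
src=$(mktemp -d)
out=$(mktemp -d)
echo 'hello' > "$src/index.html"

# Date-labelled tarball name, as in the examples above.
name="$(date +%Y-%m-%d).tar.gz"

# The pipe into cat(1) takes the place of the pipe into ssh(1).
tar zcf - -C "$src" . | cat > "$out/$name"

# Verify the archive lists the file that went into it.
tar ztf "$out/$name" | grep -q 'index.html' && echo "archive OK"
```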

Backup of Files With tar(1) But Without Making A Tarball edit

Sometimes it is necessary to just transfer the files and directories without making a tarball at the destination. In addition to writing to stdin on the source machine, tar(1) can read from stdin on the destination machine to transfer whole directory hierarchies at once.

$ tar zcf - /var/www/ | ssh -l fred "cd /some/path/; tar zxf -"

Or going the opposite direction, it would be the following.

$ ssh 'tar zcf - /var/www/' | (cd /some/path/; tar zxf - )

However, these still copy everything each time they are run. So rsync(1), described in the previous section, might be a better choice in many situations, since on subsequent runs it only copies the changes. Also, depending on the type of data, network conditions, and CPUs available, compression might be a good idea, either with tar(1) or ssh(1) itself.
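
The tar-to-tar pipe can likewise be tried locally, with a second scratch directory standing in for the remote side instead of an ssh command:

```shell
# Two scratch directories: source content and a stand-in "remote" target.
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'hello' > "$src/file.txt"

# Stream the hierarchy through a pipe; no tarball is written anywhere.
( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - )

cat "$dst/file.txt"
```

If the copy succeeded, the final cat(1) prints hello from the destination directory.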

Backup Using dump edit

Using dump(8) remotely is like using tar(1). One can copy from the remote server to the local server.

$ ssh -t 'sudo dump -0an -f - /var/www/ | gzip -c9' > backup.dump.gz

Note that the password prompt for sudo(8) might not be visible and it must be typed blindly.

Or one can go the other direction, copying from the local server to the remote:

$ sudo dump -0an -f - /var/www/ | gzip -c9 | ssh 'cat > backup.dump.gz'

Note again that the password prompt might get hidden in the initial output from dump(8). However, it's still there, even if not visible.

Backup Using zfs(8) Snapshots edit

OpenZFS can easily make either full or incremental snapshots as a beneficial side effect of copy-on-write. These snapshots can be sent over SSH to or from another system. This method works equally well for backing up or restoring data. However, bandwidth is a consideration and the snapshots must be small enough to be feasible for the actual network connection in question. OpenZFS supports compressed replication such that the blocks which have been compressed on the disk remain compressed during transfer, reducing the need to recompress using another process. The transfers can be to or from either a regular file or another OpenZFS file system. It should be obvious but it is important to remember that smaller snapshots use less bandwidth and thus transfer more quickly than larger ones.

A full snapshot is required first because incremental snapshots only contain a partial set of data and require that the foundation upon which they were formed exists. The following uses zfs(8) to make a snapshot named 20210326 of a dataset named site01 in a pool named web.

$ zfs snapshot -r web/site01@20210326

The program itself will most likely be in the /sbin/ directory, and either the PATH environment variable needs to include it or else the absolute path should be used instead. Incremental snapshots can subsequently be built upon the initial full snapshot by using the -i option. However, the ins and outs of OpenZFS management are far outside the scope of this book. Just the two methods for transfer between systems will be examined here: one uses an intermediate file, and the other is more direct, using a pipe. Both use zfs send and zfs receive, and the accounts involved must have the correct privileges in the OpenZFS Delegation System. For sending, that will be send and snapshot for the relevant OpenZFS pool. For receiving, it will be create, mount, and receive for the relevant pool.

OpenZFS To And From A Remote File System Via A File edit

A snapshot can be transferred to a file on a local or remote system over SSH. This method does not need privileged access on either system, but the account running zfs must have the correct internal OpenZFS permissions as granted by zfs allow. Here a very small snapshot is downloaded from the remote system to a local file:

$ ssh '/sbin/zfs send -v web/site01@20210326' > site01.openzfs 
full send of web/site01@20210326 estimated size is 1.72M
total estimated size is 1.72M

If incremental snapshots are copied, the full snapshot on which they are based needs to be copied also. So care should be taken to ensure that this is a full snapshot and not just an incremental one.

Later, restoring the snapshot is a matter of going the reverse direction. In this case the data is retrieved from the file and sent to zfs(8) over SSH.

$ cat site01.openzfs | ssh '/sbin/zfs receive -v -F web/site01@20210326' 
receiving full stream of web/site01@20210326 into web/site01@20210326
received 1.85M stream in 6 seconds (316K/sec)

This is possible because the channel is 8-bit clean when started without a PTY, as happens when invoking programs directly at run time. Note that the targeted OpenZFS data set must be unmounted using zfs(8) first. Then after the transfer it must be mounted again.

The Other Direction edit

Transferring from the local system to the remote is a matter of changing around the order of the components.

$ /sbin/zfs send -v web/site01@20210326 | ssh 'cat > site01.openzfs'
full send of web/site01@20210326 estimated size is 1.72M
total estimated size is 1.72M

Then similar changes are needed to restore from the remote to the local.

$ ssh 'cat site01.openzfs' | /sbin/zfs receive -v -F web/site01@20210326
receiving full stream of web/site01@20210326 into web/site01@20210326
received 1.85M stream in 6 seconds (316K/sec)

As usual, to avoid using the root account for these activities, the account running zfs(8) must have the right levels of access within the OpenZFS Delegation System.

OpenZFS Directly To And From A Remote File System edit

Alternatively that snapshot can be transferred over SSH to a file system on the remote computer. This method needs privileged access and will irrevocably replace any changes made on the remote system since the snapshot.

$ zfs send -v pool/www@20210322 | ssh 'zfs receive -F pool/www@20210322'

So if removable hard drives are used on the remote system, this can update them.

$ ssh 'zfs send -v pool/www@20210322' | zfs receive -F pool/www@20210322

Again, the remote account must already have been permitted the necessary internal ZFS permissions.

The Other Direction edit

Again, to go the other direction, from a remote system to a local one, it is a matter of changing around the order of the components.

$ ssh server.example.org 'zfs send -v pool/www@20210322' | zfs receive -F pool/www@20210322

Again, working with the OpenZFS Delegation System can avoid the need for root access on either end of the transfer.

Buffering OpenZFS Transfers

Sometimes the CPU and network will alternate being the bottleneck during the file transfers. The mbuffer(1) utility can allow a steady flow of data [44] even when the CPU gets ahead of the network. The point is to leave a big enough buffer for there to always be some data transferring over the net even while the CPU is catching up.

$ cat site01.zfs | mbuffer -s 128k -m 1G \
| ssh server.example.org 'mbuffer -s 128k -m 1G | /sbin/zfs receive -v -F web/site01'

summary: 1896 kiByte in  0.2sec - average of 7959 kiB/s
receiving full stream of web/site01@20210326 into web/site01@20210326
in @ 2556 kiB/s, out @ 1460 kiB/s, 1024 kiB total, buffer   0% full
summary: 1896 kiByte in  0.8sec - average of 2514 kiB/s
received 1.85M stream in 2 seconds (948K/sec)

Further details of working with OpenZFS and managing its snapshots are outside the scope of this book. Indeed, there are whole guides, tutorials, and even books written about OpenZFS.

File Transfer with SFTP


Basic SFTP service requires no additional setup; it is a built-in part of the OpenSSH server, and it is the subsystem sftp-server(8) which implements SFTP file transfer. See the manual page for sftp-server(8). Alternately, the subsystem internal-sftp implements an in-process SFTP server, which can simplify configurations that use ChrootDirectory to force a different filesystem root on clients.

On the client, the same options and tricks are available for SFTP as for the regular SSH clients. However, some client options may have to be specified with the full option name using the -o argument. For many dedicated graphical SFTP clients, it is possible to use a regular URL to point to the target. Many file managers nowadays have built-in support for SFTP. See the section "GUI Clients" above.

Basic SFTP

SFTP provides a very easy to use and very easy to configure option for accessing a remote system. Just to say it again, regular SFTP access requires no additional changes from the default configuration. The usual clients can be used or special ones like sshfs(1).

Automated SFTP

SFTP uploads or downloads can be automated. The prerequisite is key-based authentication. Once key-based authentication is working, a batch file can be used to carry out activities via SFTP. See the batchfile option -b in sftp(1) for details.

$ sftp -b /home/fred/cmds.batch -i /home/fred/.ssh/foo_key_rsa server.example.org

If a dash (-) is used as the batch file name, SFTP commands will be read from stdin.

$ echo "put /var/log/foobar.log" | sftp -b - -i /home/fred/.ssh/foo_key_rsa server.example.org

More than one SFTP command can be sent, but then it is better to use the batch file mode.

$ echo -e "put /var/log/foobar.log\nput /var/log/munged.log" | sftp -b - -i /home/fred/.ssh/foo_key_rsa server.example.org

The batch file mode can be very useful in cron jobs and in scripting.
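As a sketch of the batch approach, the following writes a small batch file and then points sftp(1) at it. The paths, key name, and host are all hypothetical examples.

```shell
# Write a hypothetical batch file of SFTP commands (paths are examples).
cat > /tmp/cmds.batch <<'EOF'
cd /srv/uploads
put /var/log/foobar.log
put /var/log/munged.log
ls -l
EOF
# Then run it non-interactively with key-based authentication, for example:
#   sftp -b /tmp/cmds.batch -i ~/.ssh/foo_key_rsa fred@server.example.org
```

Because sftp(1) in batch mode aborts on the first failed command, the batch file doubles as a crude sanity check when used from cron.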

SFTP-only Accounts

Using the Match directive in sshd_config(5), it is possible to limit members of a specific group to using only SFTP for interaction with the server.

Subsystem sftp internal-sftp

Match Group sftp-only
	AllowTCPForwarding no
	X11Forwarding no
	ForceCommand internal-sftp

Note that disabling TCP forwarding does not improve security unless users are also denied shell access, as they can in principle install their own forwarders.

See PATTERNS in ssh_config(5) for more information on patterns available to Match.

It is common for a group of accounts to need to read and write files in their home directories on the server while having little or no reason to access the rest of the file system. SFTP provides a very easy to use and very easy to configure chroot. In some cases, it is enough to chroot users to their home directories. This may not be as straightforward as it sounds: in most setups home directories are not owned by root and are writable by at least one user, yet an SFTP chroot requires that the chroot target directory and all of its parent directories be owned by root and not writable by any other account. Without that ownership restriction it is quite feasible to escape the chroot. [45]

One way around the difficulties imposed by this restriction is to have the home directory owned by root and have it populated with a number of other directories and files that are owned by the regular account and to which the user can actually write.

Match Group sftp-only
	ChrootDirectory %h
	AllowTCPForwarding no
	X11Forwarding no
	ForceCommand internal-sftp

In that case the root user will have to populate the target directory with the needed files and subdirectories and then change their ownership to that of the unprivileged account.
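That population step might be sketched as follows. The account name 'fred' and the subdirectory names are assumptions, and in real use the commands run as root against /home/fred; here a temporary directory stands in so the layout can be tried unprivileged.

```shell
# Sketch: a root-owned chroot target with user-writable content inside it.
H=$(mktemp -d)                    # stands in for /home/fred
chmod 0755 "$H"                   # chroot target itself: not writable by the user
mkdir "$H/uploads" "$H/docs"      # directories the account will actually write to
chmod 0750 "$H/uploads" "$H/docs"
# As root, ownership would then be handed over:
#   chown root:root /home/fred
#   chown fred:fred /home/fred/uploads /home/fred/docs
```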

Three Ways of Setting Permissions for Chrooted SFTP-Only Accounts

There are at least three ways to set permissions on home directories for chrooted SFTP-only accounts. Each has strong advantages and a few drawbacks, and thus will suit some situations but not others.

Option 1: Automatically Setting the Start Directory

If it is not practical to have the various home directories owned by root, a compromise can be made. ChrootDirectory can point to /home, which must be owned by root anyway, and then ForceCommand can designate the user's home directory as the starting directory using the -d option.

Match Group sftp-only
        ChrootDirectory /home/
        ForceCommand internal-sftp -d %u

Nesting another directory under the chroot is another way to do this. The subdirectory would be writable by the unprivileged account while the chroot target itself would not be.

Match Group sftp-only
        ChrootDirectory /home/%u
        AllowTCPForwarding no
        X11Forwarding no
        ForceCommand internal-sftp -d %u

If it is necessary to hide the contents of the home directories from other users, chmod(1) can be used. Permissions could be 0111 for /home and 0750, 0700, 0770, 2770, and so on for the home directories. Be sure to check the group memberships as well.

Option 2: Nested Home Directories

Alternately, for a similar effect but with more isolation, home directories can be nested one level deeper for the chrooted accounts. Note the ownership and permissions for the following directories:

$ ls -lhd /home/ /home/*/ /home/*/*/
drwxr-xr-x  4 root  root  4.0K Aug  4 20:47 /home/
drwxr-x---  3 root  user1 4.0K Aug  4 20:47 /home/user1/
drwxr-x---  3 root  user2 4.0K Aug  4 20:47 /home/user2/
drwxr-x--- 14 user1 user1 4.0K Aug  4 20:47 /home/user1/user1/
drwxr-x--- 14 user2 user2 4.0K Aug  4 20:47 /home/user2/user2/

Then the ChrootDirectory directive can lock the user to the directory above their home and the ForceCommand directive can put the user in their own home directory using the -d option. Once logged in they can only ever see their own files. This arrangement also makes it easier to add chrooted shell access later as system directories can be added to the chroot without being available to other accounts.

Another common case is to chroot access to a web server's document root or server root. If each site has its own hierarchy under /var/www/ such as /var/www/site1/ then chroot can be put to use as follows:

Match Group team1
	ChrootDirectory /var/www/
	ForceCommand internal-sftp -d site1

Match Group team2
	ChrootDirectory /var/www/
	ForceCommand internal-sftp -d site2

The site directories can then be group writable while the parent /var/www/ remains read-only for non-root users.
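The layout just described could be prepared roughly like this. The group names team1 and team2 are assumptions, and the ownership changes are shown as comments since they require root; a scratch prefix stands in for the real filesystem root.

```shell
# Sketch of a chroot-friendly /var/www layout, staged in a scratch prefix.
ROOT=$(mktemp -d)                          # stands in for the real filesystem root
mkdir -p "$ROOT/var/www/site1" "$ROOT/var/www/site2"
chmod 0755 "$ROOT/var/www"                 # chroot target: read-only for the teams
chmod 0775 "$ROOT/var/www/site1" "$ROOT/var/www/site2"   # group-writable site dirs
# As root, on the real tree:
#   chown root:root /var/www
#   chgrp team1 /var/www/site1 ; chgrp team2 /var/www/site2
```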

Option 3: Split Ownership of the Home Directory and Its Contents

As seen above there are several ways to deal with a chrooted SFTP service. A third way would be to set the directory to be owned by root, in another group, but not writable by that group. So with the following settings, the account 'fred' can log in and use any of the pre-made subdirectories or files in the usual way, but cannot add anything to the chroot target itself.

$ ls -lhd /home/ /home/fred/ /home/fred/*
drwxr-xr-x  68 root root 4.0K Sep 4 15:40 /home/
drwxr-xr-x  21 root fred 4.0K Sep 4 15:41 /home/fred/
drwxr-xr-x   8 fred fred 4.0K Sep 4 15:44 /home/fred/Documents
drwxr-xr-x   9 fred fred 4.0K Sep 4 15:41 /home/fred/Music
drwxr-xr-x 145 fred fred 4.0K Sep 4 15:41 /home/fred/Pictures
drwxr-xr-x   5 fred fred 4.0K Sep 4 15:41 /home/fred/Videos
drwxr-xr-x  98 fred fred 4.0K Sep 4 15:41 /home/fred/WWW

The corresponding lines for sshd_config(5) would be the following, with the account 'fred' being a member of the 'team1' group:

Match Group team1
	ChrootDirectory /home/%u
	ForceCommand internal-sftp

This method is quick to set up, but the drawback is that it requires system administrator intervention whenever a new directory or file is to be added to the top level of the home directory. Root permission is needed for that even if the new file or directory is then reassigned to the unprivileged account.

Umask

Starting with OpenSSH 5.4, sftp-server(8) can set a umask to override the default one set by the user's account. The in-process SFTP server, internal-sftp, accepts the same options as the external SFTP subsystem.

Subsystem sftp internal-sftp -u 0022

However it is important to remember that umasks only restrict permissions, never loosen them.

Earlier versions can accomplish the same thing with a helper script, though this complicates chrooted directories considerably. The helper script can be a regular script or embedded inline in the configuration file, but neither works easily in a chroot jail, so it is often easier to move to a newer version of sshd(8) which supports setting the umask in the server's configuration. Here is an inline helper script for umask in OpenSSH 5.3 and earlier, based on one by gilles@:

Subsystem sftp /bin/sh -c 'umask 0022; /usr/libexec/openssh/sftp-server'

Either way, this umask is server-side only. The original file permissions on the client side will usually, but not always, be used when calculating the final file permissions on the server. This depends on the client itself. Most clients pass the file permissions on to the server, FileZilla being a notable exception. As such, permissions can generally be tightened but not loosened. For example, a file that is mode 600 on the client will not automatically become 664, or anything else looser than the original 600, regardless of the server-side umask. That is unless the client does not forward the permissions, in which case only the server's umask is used. So for most clients, if looser permissions are wanted on the uploaded file, change them on the client side before uploading.
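The restricting-only behaviour of a umask can be seen locally, without any SFTP server involved:

```shell
# A umask can only clear permission bits from the requested mode, never set them.
umask 0022
rm -f /tmp/umask_demo
touch /tmp/umask_demo        # requested mode 666 becomes 666 & ~022 = 644
stat -c %a /tmp/umask_demo   # on Linux this prints 644
```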

Further Restricting Chrooted SFTP

There are several additional common scenarios where chrooted access is restricted further.

Chrooted SFTP to Shared Directories

Another common case is to chroot a group of users to different levels of the web server they are responsible for. For obvious reasons, symbolic links going from inside the jail to parts of the filesystem outside the chroot jail are not accessible to the chrooted users. So directory hierarchies must be planned more carefully if there are special combinations of access. See the earlier section on chrooted SFTP-only accounts.

In these kinds of directories, it may be useful to give different levels of access to more than just one group. In that case, ACLs might be needed.

Chrooted SFTP Accounts Accessible Only from Particular Addresses

More complex matching can be done. It is possible to allow a group of users to use SFTP, but not a shell login, only if they log in from a specific address or range of addresses. If they log in from the right addresses they get SFTP, and only SFTP, but if they try to log in from other addresses they are denied access completely. Both conditions, the affirmative and the negative matches, need to be accounted for.

Subsystem sftp internal-sftp

Match Group sftp-only Address 203.0.113.0/24
        AllowTCPForwarding no
        X11Forwarding no
        ForceCommand internal-sftp
        ChrootDirectory /home/servers/

Match Group sftp-only Address *,!203.0.113.0/24
        DenyGroups sftp-only

Note that for negation a wildcard must be specified first, followed by the address or range to be excluded. Mind the spaces, or lack thereof. Similar matching can be done for a range of addresses by specifying the addresses in CIDR address/mask format, such as 203.0.113.0/24. Any number of criteria can be specified, and only if all of them are met do the directives on the subsequent lines take effect.

The first Match block that fits is the one that takes effect, so care must be taken when constructing conditional blocks to make them fit the precise situation desired. Also, any situations that don't fit a Match conditional block will fall through the cracks. Those will get the general configuration settings, whatever they may be. Specific user and source address combinations can be tested against the configuration using the server's -T and -C options. See the section Debugging a Server Configuration for more.

Chrooted SFTP with Logging

If the internal-sftp in-process SFTP server is not used, then the logging daemon must establish a socket in the chroot directory for the sftp-server(8) subsystem to access as /dev/log. See the section on Logging.

Chrooted Login Shells

Making a chroot jail for interactive shells is difficult. The chroot and all its components must be root-owned directories that are not writable by any other user or group. The ChrootDirectory must contain the necessary files and directories to support the user's session. For an interactive session this requires at least a shell, typically bash(1), ksh(1), or sh(1), and basic device nodes inside /dev such as null(4), zero(4), stdin(4), stdout(4), stderr(4), arandom(4), and tty(4) devices. Paths may contain the following tokens that are expanded at runtime once the connecting user has been authenticated: %% is replaced by a literal '%', %h is replaced by the home directory of the user being authenticated, and %u is replaced by the username of that user.

sshfs(1) - SFTP File Transfer Via Local Folders

Another way to transfer files back and forth, or even use them remotely, is sshfs(1). It is a user-space file system client based on SFTP which utilizes the server's SFTP subsystem. It can make a directory on the remote server accessible as if it were a directory on the local file system, where it can be accessed by any program. The user must have read-write privileges for the mount point to use sshfs(1).

The following creates the mount point, mountpoint, in the home directory if none exists. Then sshfs(1) mounts the remote server.

$ test -d ~/mountpoint || mkdir --mode 700 ~/mountpoint
$ sshfs server.example.org: ~/mountpoint

Reading or writing files to the mount point is actually transferring data to or from the remote system. The amount of bandwidth consumed by the transfers can be reduced using compression. That can be important if the network connection has bandwidth caps or per-unit fees. However, if speed is the only issue, compression can make the transfer slower if the processors on either end are busy or not powerful enough. About the only way to be sure is to test and see which method is faster. Below, compression is specified with -C.

$ sshfs -C server.example.org: ~/mountpoint

Or try with debugging output:

$ sshfs -o sshfs_debug server.example.org: /home/fred/mountpoint

Named pipes will not work over sshfs(1). Use fusermount -u to unmount these remote directories and close the SFTP session.

Using sshfs(1) With A Key

The ssh_command option is used to pass parameters on to ssh(1). In this example it is used to have ssh(1) point to a key used for authentication to mount a remote directory, /usr/src, locally as /home/fred/src.

$ sshfs -o ssh_command="ssh -i /home/fred/.ssh/id_rsa" server.example.org:/usr/src /home/fred/src/

If a usable key is already loaded into the agent, then ssh(1) should find it and use it on behalf of sshfs(1) without needing intervention.

Public Key Authentication


Authentication keys can improve efficiency, if managed properly. As a bonus, the passphrase and private key never leave the client. [46] Key-based authentication is generally recommended for outward-facing systems so that password authentication can be turned off.

Key-based authentication

OpenSSH can use public key cryptography for authentication. In public key cryptography, encryption and decryption are asymmetric. The keys are used in pairs, a public key to encrypt and a private key to decrypt. The ssh-keygen(1) utility can make RSA, Ed25519, ECDSA, Ed25519-SK, or ECDSA-SK keys for authentication. DSA keys, which are always exactly 1024 bits in size, can still be made but are no longer recommended and should be avoided. RSA keys can vary from 1024 bits on up; the default is now 3072. However, there is only limited benefit beyond 2048 bits, which makes the elliptic curve algorithms preferable. ECDSA keys can be 256, 384, or 521 bits in size. Ed25519, Ed25519-SK, and ECDSA-SK keys each have a fixed length of 256 bits. Shorter keys are faster, but less secure. Longer keys are much slower to work with but provide better protection, up to a point.

Keys can be named to help remember what they are for. Because the key files can be named anything it is possible to have many keys each named for different services or tasks. The comment field at the end of the public key can also be useful in helping to keep the keys sorted, if you have many of them or use them infrequently.

Key-based authentication uses the key pair for a short proof-of-possession exchange. At the start, a copy of the client's public key is stored on the server and the private key is on the client; both stay where they are, and the private key never leaves the client. When the client requests key-based authentication, it names the public key it wants to use, and the server checks whether that key is registered for the account in question. If it is, the client proves ownership by signing a block of data which includes the session identifier, and the server verifies that signature using the stored public key. If the signature checks out, the login is allowed. If not, the next of any public keys registered for the same account is tried until either a match is found, all the keys have been tried, or the maximum number of failures has been reached. [47]

When an agent is used on the client side to manage authentication, the process is similar. The difference is that ssh(1) hands the data to be signed off to the agent, which holds the private key, calculates the response, and passes it back to ssh(1), which then passes it on to the server.

Basics of Public Key Authentication

A matching pair of keys is needed for public key authentication, and ssh-keygen(1) is used to make the key pair. The public key out of that pair must be properly stored on the remote host, where it is added to the designated authorized_keys file for that remote user account. The private key stays stored safely on the client. Once the keys have been prepared they can be used again and again.

There are four steps in preparation for key-based authentication:

1) Prepare the directories where the keys will stay. If the .ssh directory or the authorized_keys file does not yet exist, create it and set the permissions correctly. On the client only the directory is needed, and it should not be writable by any account except its owner:

$ mkdir ~/.ssh/
$ chmod 0700 ~/.ssh/

On the remote machine, the .ssh directory is needed as is a special file to store the public keys, the default is authorized_keys.

$ mkdir ~/.ssh/
$ touch ~/.ssh/authorized_keys
$ chmod 0700 ~/.ssh/
$ chmod 0600 ~/.ssh/authorized_keys

2) Create a key pair. The example here creates an Ed25519 key pair in the directory ~/.ssh. The option -t assigns the key type and the option -f assigns the key file a name. It is good to give key files descriptive names, especially if large numbers of keys are managed. Below, the public key will be named mykey_ed25519.pub and the private key will be called mykey_ed25519. Be sure to enter a sound passphrase, which is used to encrypt the private key.

$ ssh-keygen -t ed25519 -f ~/.ssh/mykey_ed25519

Ed25519, Ed25519-SK, and ECDSA-SK keys have a fixed length. For RSA and ECDSA keys, the -b option sets the number of bits used.

Since OpenSSH 6.5 a new private key format is available, using a bcrypt(3) key derivation function (KDF) to better protect keys at rest. This new format is always used for Ed25519 keys, and will eventually become the default for all keys. For now it can be requested when generating or saving existing keys of other types via the -o option of ssh-keygen(1).

$ ssh-keygen -o -b 4096 -t rsa -f ~/.ssh/mykey_rsa

Details of the new format are found in the source code in the file PROTOCOL.key.

3) Get the keys to the right places. Transfer only the public key to remote machine.

$ ssh-copy-id -i ~/.ssh/mykey_ed25519 fred@server.example.org

If ssh-copy-id(1) is not available, any editor that does not wrap long lines can be used. For example, nano(1) can be started with the -w option to prevent wrapping of long lines, or that can be set permanently by editing nanorc(5). However the authorized_keys file is edited to add the key, the key itself must be in the file whole and unbroken on a single line.

Then if they are not already on the client, transfer both the public and private keys there. It is usually best to keep both the public and private keys together in the directory ~/.ssh/, though the public key is not needed on the client after this step and can be regenerated if it is ever needed again.

4) Test the keys

While still logged in, use the client to start another SSH session in a new window and try authenticating to the remote machine using the private key.

$ ssh -i ~/.ssh/mykey_ed25519 -l fred server.example.org

The option -i tells ssh(1) which private key to try. Close the original SSH session only after verifying that the key-based authentication works.

Once key-based authentication has been verified to be working, it is possible to make permanent shortcuts on the client using ssh_config(5), explained further below. In particular, see the IdentityFile, IdentitiesOnly, and AddKeysToAgent configuration directives, to name three.

Troubleshooting of Key-based Authentication:

If the server refuses to accept the key and fails over to the next authentication method (e.g. "Server refused our key"), then there are several possible mistakes to look for on the server side.

One of the most common errors is that the file and directory permissions are wrong. The authorized key file must be owned by the user in question and not be group writable. Nor may the key file's directory be group or world writable.

$ chmod u=rwx,g=rx,o= ~/.ssh
$ chmod u=rw,g=,o=  ~/.ssh/authorized_keys

Another mistake that can happen is if the key inside the authorized_keys file on the remote host is broken by line breaks or has other whitespace in the middle. That can be fixed by joining up the lines and removing the spaces or by recopying the key more carefully.

And, though it should go without saying, the halves of the key pair need to match. The public key on the server needs to match the private key held on the client. If the public key is lost, a new one can be generated from the private key with the -y option, but not the other way around. If the private key is lost, then the public key should be erased as it is no longer of any use. If many keys are in use for an account, it might be a good idea to add comments to them. On the client, it can be useful to know which server the key is for, either through the file name itself or through the comment field. A comment can be added using the -C option.

$ ssh-keygen -t ed25519 -f ~/.ssh/mykey_ed25519 -C "web server mirror"

On the server, it can be important to annotate which client the key is from if there is more than one public key in an account. There the comment can be added to the authorized_keys file in the last column, if a comment does not already exist. Again, the format of the authorized keys file is given in the manual page for sshd(8) in the section "AUTHORIZED_KEYS FILE FORMAT". If the keys are not labeled they can be hard to match, which might or might not be what you want.
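As a small illustration of regenerating a lost public key with -y, using a throwaway, passphrase-less demonstration key whose file name is an example:

```shell
# Make a demo key pair, "lose" the public half, then regenerate it with -y.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
rm /tmp/demo_key.pub
ssh-keygen -y -f /tmp/demo_key > /tmp/demo_key.pub
```

In real use the private key would of course have a passphrase, and -y would prompt for it before printing the public key.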

Associating Keys Permanently with a Server

A key can be specified at run time, but to save retyping the same paths again and again, the Host directive in ssh_config(5) can apply specific settings to a target host. In this case, by changing ~/.ssh/config it is possible to assign particular keys to be tried automatically whenever making a connection to that specific host. After adding the following lines to ~/.ssh/config, all that's needed is to type ssh web1 to connect with the key for that server.

Host web1
	IdentitiesOnly yes
	IdentityFile /home/fred/.ssh/web_key_ed25519

Below, ~/.ssh/config uses different keys for server versus server.example.org, regardless of whether they resolve to the same machine. This is possible because the host name argument given to ssh(1) is not converted to a canonicalized host name before matching.

Host server
	IdentitiesOnly yes
	IdentityFile /home/fred/.ssh/key_a_rsa

Host server.example.org
	IdentitiesOnly yes
	IdentityFile /home/fred/.ssh/key_b_rsa

In this example the shorter name is tried first, but of course less ambiguous shortcuts can be made instead. The configuration file gets parsed on a first-match basis. So the most specific rules go at the beginning and the most general rules go at the end.

Encrypted Home Directories

When using encrypted home directories, the keys must be stored in an unencrypted directory, somewhere outside the actual home directory. That in turn means sshd(8) needs to be configured to find the keys in that special location.

Here is one method for solving the access problem. Each user is given a subdirectory under /etc/ssh/keys/ which they can then use for storing their authorized_keys file. This is set in the server's configuration file /etc/ssh/sshd_config:

AuthorizedKeysFile      /etc/ssh/keys/%u/authorized_keys

Setting a special location for the keys opens up more possibilities as to how the keys can be managed and multiple key file locations can be specified if they are separated by whitespace. The user does not have to have write permissions for the authorized_keys file. Only read permission is needed to be able to log in. But if the user is allowed to add, remove, or change their keys, then they will need write access to the file to do that.

One symptom of having an encrypted home directory is that key-based authentication only works when you are already logged into the same account, but fails when trying to make the first connection and log in for the first time.

Sometimes it is also necessary to add a script or call a program from /etc/ssh/sshrc immediately after authentication to decrypt the home directory.

Passwordless Login

One way of allowing passwordless logins is to follow the steps above but simply not enter a passphrase when asked for one while creating the key. Note that keys lacking a passphrase are very risky, so such key files should be very well protected and kept track of. That includes using them only as single-purpose keys, as described below, and rotating them in a timely manner. In general, it is not a good idea to make a key without a passphrase. A better solution is to have a passphrase and work with an authentication agent in conjunction with a single-purpose key. Most desktop environments launch an SSH agent automatically these days; if one is running, it will be visible in the SSH_AUTH_SOCK environment variable. On accounts with an agent, ssh-add(1) can load private keys into it.

$ ssh-add ~/.ssh/mykey_ed25519

Thereafter, the client will automatically check the agent for the key when appropriate. If there are many keys in the agent, it will become necessary to set IdentitiesOnly. See the above section on using ~/.ssh/config for that, and the section Key-based Authentication Using an Agent below.

Requiring Both Keys and a Password

While users should have strong passphrases for their keys, there is no way to enforce or verify that. Indeed, since neither the private key nor its passphrase ever leaves the client machine, there is nothing the server can do to influence that. Instead, it is possible to require both a key and a password. Starting with OpenSSH 6.2, the server can require multiple authentication methods for login using the AuthenticationMethods directive.

AuthenticationMethods publickey,password

This example from sshd_config(5) requires that users first authenticate using a key, and it only queries for a password if the key succeeds. Thus with this configuration it is not possible to reach the system password prompt without first authenticating with a valid key. Changing the order of the arguments changes the order of the authentication methods.

Requiring Two or More Keys

Since OpenSSH 6.8, the server remembers which public keys have been used for authentication and refuses to accept previously-used keys. This allows a setup requiring that users authenticate using two different public keys, perhaps one in the file system and the other in a hardware token.

AuthenticationMethods publickey,publickey

The AuthenticationMethods directive, whether for keys or passwords, can also be set on the server under a Match directive to apply only to certain groups or situations.

Requiring Certain Key Types For Authentication

Also since OpenSSH 6.8, the PubkeyAcceptedKeyTypes directive, later changed to PubkeyAcceptedAlgorithms, can specify which key algorithms are accepted for authentication. Those not in the comma-separated pattern list are not allowed.

PubkeyAcceptedAlgorithms ssh-ed25519*,ssh-rsa*,ecdsa-sha2*,sk-ssh-ed25519*,sk-ecdsa-sha2*

Either the actual key types or a pattern can be in the list. Spaces are not allowed in the pattern list. The exact list of key types supported for authentication can be found using the client's -Q option. The following two invocations are equivalent.

$ ssh -Q key-sig | sort

$ ssh -Q PubkeyAcceptedAlgorithms | sort

For host-based authentication, it is the HostbasedAcceptedAlgorithms directive which determines the key types which are allowed for authentication.

Key-based Authentication Using an Agent

When an authentication agent, such as ssh-agent(1), is going to be used, it should generally be started at the beginning of a session and used to launch the login session or X-session so that the environment variables pointing to the agent and its UNIX-domain socket are passed to each subsequent shell and process. Many desktop distros do this automatically upon login or startup.

Starting an agent entails setting a pair of environment variables:

  • SSH_AGENT_PID : the process id of the agent
  • SSH_AUTH_SOCK : the filename and full path to the UNIX-domain socket

The various SSH and SFTP clients find these variables automatically and use them to contact the agent when authentication is needed, though it is mainly SSH_AUTH_SOCK which is actually used. If the shell or desktop session was launched using ssh-agent(1), then these variables are already set and available. If they are not, then it is necessary either to set the variables manually inside each shell or for each application, or else to point to the agent's socket using the IdentityAgent directive in the client's configuration file.
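When no agent has been provided by the desktop session, one can be started by hand. A minimal sketch:

```shell
# Start an agent and import its variables into the current shell.
eval "$(ssh-agent -s)"             # sets SSH_AUTH_SOCK and SSH_AGENT_PID
test -S "$SSH_AUTH_SOCK" && echo agent ready
# Keys would now be loaded with ssh-add(1); shut the agent down when finished:
ssh-agent -k > /dev/null
```

The eval is what makes the difference: ssh-agent -s prints shell commands that export the two variables, so running it without eval starts an agent that nothing can find.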

Once an agent is available, a relevant private key needs to be loaded before the agent can be used. Once in the agent the private key can then be used many times. Private keys are loaded into an agent with ssh-add(1).

$ ssh-add /home/fred/.ssh/mykey_ed25519
Identity added: /home/fred/.ssh/mykey_ed25519 (/home/fred/.ssh/mykey_ed25519)

Keys stay in the agent for as long as it is running unless specified otherwise. A timeout can be set either with the -t option when starting the agent itself or when actually loading the key using ssh-add(1). In either case, the -t option will set a timeout interval, after which the key will be purged from the agent.

$ ssh-add -t 1h30m /home/fred/.ssh/mykey_ed25519
Identity added: /home/fred/.ssh/mykey_ed25519 (/home/fred/.ssh/mykey_ed25519)
Lifetime set to 5400 seconds

The option -l will list the fingerprints of all of the identities in the agent.

$ ssh-add -l                                    
256 SHA256:77mfUupj364g1WQ+O8NM1ELj0G1QRx/pHtvzvDvDlOk mykey for task x (ED25519)
3072 SHA256:7unq90B/XjrRbucm/fqTOJu0I1vPygVkN9FgzsJdXbk myotherkey rsa for task y (RSA)

It is also possible to remove individual identities from the agent using -d, which removes them one at a time when identified by file name. If the file name of the private key to be removed is not given, -d will fail silently. Using -D instead removes all of them at once without needing to specify any by name.

By default ssh-add(1) uses the agent connected via the socket named in the environment variable SSH_AUTH_SOCK, if it is set; currently that is the only way to point ssh-add(1) at an agent. For ssh(1), however, the client configuration directive IdentityAgent provides an alternative to the environment variable by telling the SSH clients which socket to use to communicate with the agent. If both the environment variable and the configuration directive are available at the same time, then the value in IdentityAgent takes precedence. IdentityAgent can also be set to none to prevent the connection from trying to use any agent at all.
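As an illustration, here is a hypothetical ssh_config(5) fragment (the host alias and socket path are invented for the example) which pins one host to a dedicated agent socket and keeps every other connection away from any agent. Since the first obtained value wins in ssh_config(5), the more specific block must come first:

```
Host backup-host
        IdentityAgent /run/user/1000/backup-agent.socket

Host *
        IdentityAgent none
```

With this layout, connections to backup-host use the designated agent regardless of SSH_AUTH_SOCK, while all other connections ignore agents entirely.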

The client configuration directive AddKeysToAgent can also be useful in getting keys into an agent as needed. When set, it automatically loads a key into a running agent the first time the key is called for, if it is not already loaded. Likewise the IdentitiesOnly directive can ensure that the relevant key is offered on the first try. Rather than typing these out whenever the client is run, they can be added to ~/.ssh/config and thereby applied automatically for designated host connections.

Agent Forwarding edit

Agent forwarding is one means of passing through one or more intermediate hosts, though the ProxyJump option, -J, is usually a safer choice. See Passing Through a Gateway or Two in the section on jump hosts about that. With agent forwarding, intermediate machines forward challenges and responses back and forth between the client and the final destination. This comes with some risks but eliminates the need for using passwords or holding keys on any of these intermediate machines.

A main advantage of agent forwarding is that the private key itself is not needed on any remote machine, thus hindering unwanted file system access to it. [48] Another advantage is that the actual agent to which the user has authenticated does not go anywhere and is thus less susceptible to analysis.

One risk with forwarded agents is that they can be re-used by others to tailgate in if permissions allow it. Keys cannot be copied this way, but they can be used for authentication wherever the permissions on the forwarded socket are incorrect. Note that disabling agent forwarding does not improve security unless users are also denied shell access, as they can always install their own forwarders.

The risks of agent forwarding can be mitigated by requiring confirmation of each use of a key, done by adding the -c option when loading the key into the agent. This requires that the SSH_ASKPASS variable be set and available to the agent process, and it will generate a prompt on the host running the agent upon each use of the key by a remote system. Still, when passing through one or more intermediate hosts, it is usually better to instead have the SSH client use stdio forwarding with -W or -J.

On the client side agent forwarding is disabled by default and so if it is to be used it must be enabled explicitly. Put the following line in ssh_config(5) to enable agent forwarding for a particular server:

        ForwardAgent yes

On the server side the default configuration files allow authentication agent forwarding, so to use it, nothing needs to be done there, just on the client side. However, again, it would be preferable to take a look at ProxyJump instead.

Old Style, Somewhat Safer SSH Agent Forwarding edit

The best way to pass through one or more intermediate hosts is to use the ProxyJump option instead of authentication agent forwarding and thereby not risk exposing any private keys. If authentication agent forwarding must be used, then it would be advisable in the interest of following the principle of least privilege to forward an agent containing the minimum necessary number of keys. There are several ways to solve that.

In version 8.8 and earlier a partial solution is to make a one-off, ephemeral agent to hold just the one key or keys needed for the task at hand. Another partial solution would be to set up a user-accessible service at the operating system level and then use ssh_config for the rest.

Automatically launching an ephemeral agent unique to each session can be done by crafting either a special shell alias or a function to launch a single-use agent. Either can be written to require confirmation for each requested signature. The following example alias is based on an updated blog post by Vincent Bernat[49] on SSH agent forwarding:

$ alias assh="ssh-agent ssh -o AddKeysToAgent=confirm -o ForwardAgent=yes"

Note the use of ssh-agent(1). When invoking that alias, the SSH client will be launched with a unique, ephemeral supporting key agent. The alias sets up a new agent, including setting the two environment variables, and then sets two client options while calling the client. This arrangement still checks ssh_config(5) for other options and settings. When the SSH session finishes, the agent which launched it also exits, thus cleaning up after itself automatically.

Another way is to rely on the client's configuration file for some of the settings. Such methods rely mostly on ssh_config(5) but still require an independent method of launching an ephemeral agent: the OpenSSH client is already running by the time it reads the configuration file, so it cannot be affected by changes to the environment variables which carry the agent's details. However, when the path of the UNIX-domain socket used to communicate with the authentication agent is decided in advance, the IdentityAgent option can point to it once the one-off agent[50] is actually launched. The following uses a specific agent's pre-defined socket whenever connecting to either of two particular domains:

Host * *
        User fred
        IdentitiesOnly yes
        IdentityFile %d/.ssh/id_cloud_01
        IdentityAgent /run/user/%i/ssh-cloud-01.socket
        ForwardAgent yes
        AddKeysToAgent yes

The %d stands for the path to the home directory and the %i stands for the user id (UID) of the current account; the %i token comes in handy when setting the IdentityAgent option, as above. See the section "TOKENS" in ssh_config(5) for more such abbreviations. Again, when forwarding an agent, be careful about which keys are in it.

For that configuration to work, the authentication agent must already be up and running and bound to the designated socket before the SSH client is started. Additionally, the socket should be placed in a directory which is inaccessible to any other accounts. ssh-agent(1) must use the -a option to name the socket:

$ ssh-agent -a /run/user/${UID}/ssh-cloud-01.socket

That agent configuration can be launched manually or via a script or service manager. However, in the interests of privacy and security in general, agent forwarding is to be avoided; the ProxyJump configuration directive is the best alternative and, on older systems, host traversal using ProxyCommand with netcat is preferable. Again, see the section on Proxies and Jump Hosts for how those methods are used.
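The launch step can be scripted so that the agent is only started when its socket is not already live. A sketch, using a scratch directory instead of /run/user so it can run unprivileged; the socket file name matches the IdentityAgent example above:

```shell
# Start an agent bound to a pre-defined socket, but only if no agent is
# already listening there. A scratch directory stands in for /run/user/$UID
# here so the sketch can run without special privileges.
SOCK="$(mktemp -d)/ssh-cloud-01.socket"
if [ ! -S "$SOCK" ]; then
        eval "$(ssh-agent -a "$SOCK")" > /dev/null
fi
ls -l "$SOCK"
```

A service manager such as a systemd user unit could run the same check at login so the socket is always ready before the first client starts.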

New Style SSH Agent Destination Constraints edit

From 8.9 onward, ssh-agent(1) can limit the hosts for which loaded keys may be used for authentication, as specified with ssh-add(1) using the -h option. These constraints were added through two agent protocol extensions and a modification to the public key authentication protocol. The feature may evolve, but for now the result is that keys for account authentication can be loaded into the agent in four ways:

  • no limits on forwarding (not recommended)
  • local use only, these will not get forwarded
  • forwarding, but only to specific remote hosts
  • forwarding to specific remote hosts via specified routes

The intent is that the restrictions fail safely so that they do not allow authentication when one or more hosts in the route lack the needed protocol features. The destinations and routes cannot be modified once the keys are loaded, but multiple routes to the same destination can be loaded and the routes can be any number of hops. If the routes need changing, then the key must be reloaded into the agent with the new route or routes.

The general default for the client is to keep keys in the agent for local use only. However, that can be enforced explicitly by adding the -a option when starting the client or else setting the ForwardAgent directive in ssh_config(5) to 'no' in the relevant configuration block.

In order to load keys for unlimited forwarding, which is not the best idea, just add them using ssh-add(1) as normal. Then use the -A option with the client or set the ForwardAgent directive in ssh_config(5) to 'yes' in the relevant configuration block.

In order to limit keys for connection only to a specific remote host, or to load keys for connection to a specific remote host with forwarding via one or more intermediate hosts, use the -h option when loading keys into the agent. Here the one key may be used only to connect to the specific destination:

$ ssh-add -h server.key.ed25519

If an intermediate system is passed through, the best way is to use ProxyJump, which is the -J option of the SSH client. If agent forwarding must be allowed, then the tightest way is to constrain which systems may use the keys, again using the -h option.

$ ssh-add -h -h ">" server.key.ed25519

Multiple steps can be included, even multiple routes. They just have to be enumerated explicitly, though patterns may still be used for the destination hosts as well as specific names. Each host in the chain must support these protocol extensions for the connection to complete.

Any keys designated for forwarding are unusable for authentication on any hosts other than those which have been explicitly identified for forwarding. These permitted hosts are identified by host key or host certificate from the known_hosts file or another file designated by the -H option when loading the key with ssh-add(1). If -H is not used at the time the keys are loaded into the agent, then the default known hosts files will be used: ~/.ssh/known_hosts, /etc/ssh/ssh_known_hosts, ~/.ssh/known_hosts2, and /etc/ssh/ssh_known_hosts2.

In the case of keys, the known_hosts list must be maintained conscientiously [51], perhaps with the help of the UpdateHostkeys and CanonicalizeHostname client configuration directives. With certificates, the agent only needs to be aware of the Certificate Authority (CA).

Again, see Passing Through a Gateway or Two in the section on jump hosts about a way to pass through one or more intermediate machines without needing to forward an SSH agent.

Checking the Agent for Specific Keys edit

The ssh-add(1) utility's -T option can test whether a specific private key is available in the agent by looking up the matching public key. That can be useful in a shell script.



if ssh-add -T ${key}; then
        echo "Key ${key} found"
else
        echo "Key ${key} missing"
fi

Or it could be done just as well with an alternate syntax, either in a script or in an interactive shell session:

$ key=/home/fred/.ssh/
$ ssh-add -T ${key} && echo "Key found" || echo "Key missing"

However, if the desired result is to add the key to the agent, then the AddKeysToAgent client configuration option can ensure that a specific key is added to the SSH agent upon first use during any given login session. That can be done using -o AddKeysToAgent=yes as a run-time argument, or by modifying ssh_config(5) as appropriate:

Host www
        IdentityFile %d/.ssh/www.ed25519
        IdentitiesOnly yes
        AddKeysToAgent yes

With those options in the configuration file, the first time ssh www is run the specified key will get added to the agent and remain available.

Key-based Authentication Using A Hardware Security Token edit

While stand-alone keys have been around for a long time, it has been possible since version 8.2 to use keys backed by hardware security tokens, such as OnlyKey, Yubikey, or many others, through the FIDO2 protocol. Universal 2nd Factor (U2F) authentication is supported directly in OpenSSH through FIDO2 and does not need third-party software. At the moment there are two types of hardware-backed keys, ECDSA-SK and Ed25519-SK, but only the latest hardware tokens support the latter. If the Ed25519-SK key format is not supported by the token's firmware, then the following error message will be presented when attempts are made to use that key type:

Key enrollment failed: invalid format

If supported, either key type can be created with ssh-keygen(1). The steps are almost identical to creating normal keys but the token must be available to the system (plugged in) first. Then if called for, the token's PIN must be entered and the token touched or otherwise activated. After that, the key creation proceeds as normal. Mind the key type as specified by the -t option.

$ ssh-keygen -t ed25519-sk -f /home/fred/.ssh/server.ed25519-sk -C "web server for fred"
Generating public/private ed25519-sk key pair.
You may need to touch your authenticator to authorize key generation.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/fred/.ssh/server.ed25519-sk
Your public key has been saved in /home/fred/.ssh/
The key fingerprint is:
SHA256:41wVVDnKJ9gKr2Sj4CFuYMhcNvYebZ6zq0PWyP4rRDo web server
The key's randomart image is:
+[ED25519-SK 256]-+
|           .o... |
|             .o  |
|           +.. . |
|   = .  . ..= .  |
|+ + * + So.. o   |
|o+.EoO *+oo      |
|.o oBo+++o       |
|  o .=.+.        |
| .   .===        |
+----[SHA256]-----+

Once created, the public and private key files are handled like those of any other type of key, but when authenticating, the hardware token must be present and activated when called for.

$ ssh -i /home/fred/.ssh/server.ed25519-sk
Enter passphrase for key '/home/fred/.ssh/server.ed25519-sk': 
Confirm user presence for key ED25519-SK SHA256:41wVVDnKJ9gKr2Sj4CFuYMhcNvYebZ6zq0PWyP4rRDo

The resulting private key file is not actually the key itself but instead a "key handle" which is used by the hardware security token to derive the real private key on demand at the time it is actually used[52]. As a result, the hardware-backed private key file is useless without the accompanying hardware token. This also means that these key files are not portable across hardware tokens, say when having multiple tokens in reserve or as backup, even when used by the same account. So when multiple hardware tokens are in use, different key pairs must be generated for each token.

Hardware Security Token Resident Private Key edit

It is possible to store the private key within the token itself, but for the moment it cannot be used directly from inside the token and must first be saved as a file. Also, the key can only be loaded into the FIDO authenticator at the time of creation using the -O resident option with ssh-keygen(1). Otherwise, the process is the same as above.

$ ssh-keygen -O resident -t ed25519-sk -f /home/fred/.ssh/server.ed25519-sk -C "web server for fred"
. . .

When needed, the resident key can be extracted from the FIDO2 hardware token and saved into a file using the -K option. At this stage a passphrase can be added to the file, but no passphrase is kept within the token itself, only an optional PIN protects the key there.

$ ssh-keygen -K
Enter PIN for authenticator: 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Saved ED25519-SK key to id_ed25519_sk_rk

$ mv -i id_ed25519_sk_rk /home/fred/.ssh/server.ed25519-sk

Since the output file name is fixed, any pre-existing file with that name can get overwritten, though there will be a warning first. However, it is not recommended to keep the key on the hardware token, because it provides more protection when kept separately.

Single-purpose Keys edit

Tailored single-purpose keys can eliminate use of remote root logins for many administrative activities. A finely tailored sudoers is needed along with an unprivileged account. When done right, it gives just enough access to get the job done, following the security principle of Least Privilege.

Single-purpose keys are used together with either the ForceCommand directive in sshd_config(5) or the command="..." option inside the authorized_keys file. The method is to generate a new key pair, transfer the public key to authorized_keys on the remote system, and then prepend the appropriate command or script to that key's line there.

$ grep '^command' ~/.ssh/authorized_keys
command="/usr/local/bin/" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcgdzDvSebOEjuegEx4W1I/aA7MM3owHfMr9yg2WH8H

The command="..." option inserted there overrides everything else and ensures that when logging in with just that key, only the script /usr/local/bin/ is run. If it is necessary to pass parameters to the script, have a look at the contents of the SSH_ORIGINAL_COMMAND environment variable and use it in a case statement. Never trust the contents of that variable and never use them directly, only indirectly.
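A sketch of such a case statement, with invented action names; the wrapper echoes what it would do rather than running real service commands, and the variable's contents are matched but never executed:

```shell
#!/bin/sh
# Hypothetical forced-command wrapper. sshd(8) places the client's requested
# command line in SSH_ORIGINAL_COMMAND; match it against a fixed list and
# never execute the variable's contents directly.
dispatch() {
        case "$1" in
                start) echo "starting httpd" ;;          # placeholder action
                stop)  echo "stopping httpd" ;;          # placeholder action
                *)     echo "denied" >&2 ; return 1 ;;   # everything else refused
        esac
}

# Default to "stop" here only so the sketch runs outside of sshd(8).
dispatch "${SSH_ORIGINAL_COMMAND:-stop}"
```

The case statement acts as a whitelist: any request not literally matching a known word falls through to the denial branch, so hostile input such as shell metacharacters is simply rejected.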

Single-purpose keys are useful for allowing only a tunnel and nothing more. The following key will only echo some text and then exit, unless used non-interactively with the -N option.

$ grep '^command' ~/.ssh/authorized_keys
command="/bin/echo do-not-send-commands" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzTIWCaILN3tHx5WW+PMVDc7DfPM9xYNY61JgFmBGrA

No matter what the user tries while logging in with that key, the session will only echo the given text and then exit. Using the -N option disables running the remote program, allowing the connection to stay open and thus allowing a tunnel.

$ ssh -L 3306:localhost:3306 \
	-i ~/.ssh/tunnel_ed25519 \
	-N \
	-l fred \

That creates a tunnel and stays connected despite a key configuration which would close an interactive session. See also the -n or -f option for ssh(1).

Single-purpose Keys to Avoid Remote Root Access edit

The easy way is to write a short shell script, place it in /usr/local/bin/, and then configure sudoers to allow the otherwise unprivileged account to run just that script and only that script.

%wheel  ALL=(root:root) NOPASSWD: /usr/sbin/service httpd stop
%wheel  ALL=(root:root) NOPASSWD: /usr/sbin/service httpd start

Then the key calls the script using command="..." inside authorized_keys. Here one key stops the web server and the other starts it.

$ grep '^command' ~/.ssh/authorized_keys
command="/usr/bin/sudo /usr/sbin/service httpd stop" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcgdzDvSebOEjuegEx4W1I/aA7MM3owHfMr9yg2WH8H
command="/usr/bin/sudo /usr/sbin/service httpd start" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMidyqZ6OCvbWqA8Zn+FjhpYE6NoWSxVjFnFUk6MrNZ4

Complicated programs like rsync(1), tar(1), mysqldump(1), and so on require an advanced approach when building a single-purpose key. For them, the -v option can show exactly what is being passed to the server so that sudoers can be set up correctly. That way they can be restricted to only access designated parts of the file system. For example, here is what ssh -v shows from one particular usage of rsync(1), note the "Sending command" line:

$ rsync -e 'ssh -v' ./backup/etc/
. . .
debug1: Sending command: rsync --server --sender -e.LsfxC . /etc/
. . .

That output can then be added to sudoers so that the key can do only that function.

%backup ALL=(root:root) NOPASSWD: /usr/bin/rsync --server --sender -e.LsfxC . /etc/

Then to tie it all together, the account "backup" needs a key:

$ grep '^command' ~/.ssh/authorized_keys
command="/usr/bin/rsync --server --sender -e.LsfxC . /etc/" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMm0rs4eY8djqBb3dIEgbQ8lmdlxb9IAEuX/qFCTxFgb

Many of these programs have a --dry-run or equivalent option. Remember to use it when figuring out the right settings.

Read-only Access to Keys edit

In some cases it is necessary to prevent accounts from changing their own authentication keys, although such situations may be a better case for using certificates. If done with keys, it is accomplished by putting the key file in an external directory where the user has read-only access, both to the directory and to the key file. The AuthorizedKeysFile directive then assigns where sshd(8) looks for the keys and can point to a secured location instead of the default.

A good alternate location could be a new directory, /etc/ssh/authorized_keys, storing the selected accounts' key files. The change can be made to apply to only a group of accounts by putting the settings under a Match directive. The default location for keys on most systems is ~/.ssh/authorized_keys.

Match Group sftpusers
	AuthorizedKeysFile /etc/ssh/authorized_keys/%u

Then the permissions there would allow the keys to be read but not written:

$ ls -dhln /etc/ssh/
drwxr-x--x 3 0 0 4.0K Mar 30 22:16 /etc/ssh/authorized_keys/

$ ls -dhln /etc/ssh/*.pub
-rw-r--r-- 1 0 0 173 Mar 23 13:34 /etc/ssh/fred
-rw-r--r-- 1 0 0  93 Mar 23 13:34 /etc/ssh/user1
-rw-r--r-- 1 0 0 565 Mar 23 13:34 /etc/ssh/user2
. . .

The keys could even be within subdirectories, though the same restrictions apply regarding permissions and ownership.
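The required layout can be staged and checked with ordinary tools. A sketch in a scratch directory; on a real system the directory and files would be owned by root under /etc/ssh/authorized_keys, and the key material here is a placeholder:

```shell
# Stage a read-only key directory in a scratch location.
dir="$(mktemp -d)/authorized_keys"
mkdir -p "$dir"
chmod 0711 "$dir"               # traversable, but not listable or writable

# Placeholder key content for the illustration only.
printf 'ssh-ed25519 AAAA... fred@example\n' > "$dir/fred"
chmod 0644 "$dir/fred"          # world-readable, owner-writable only

ls -ld "$dir" "$dir/fred"
```

The 0711 mode on the directory lets sshd(8) and the users reach individual key files by name without letting the users enumerate or alter them; root ownership on a real system then keeps the files themselves out of the users' control.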

For chrooted SFTP, the method is the same to keep the key files out of reach of the accounts:

Match Group sftpusers
	ChrootDirectory /home
	ForceCommand internal-sftp -d %u
	AuthorizedKeysFile /etc/ssh/authorized_keys/%u

Of course a Match directive is not essential. The settings could be made to apply to all accounts by putting the directive in the main part of the server configuration file instead.

Mark Public Keys as Revoked edit

Keys can be revoked. Revoked keys can be stored in /etc/ssh/revoked_keys, a file specified in sshd_config(5) using the RevokedKeys directive, so that sshd(8) will block attempts to log in with them. No warning or error will be given on the client side if a revoked key is tried. Authentication will simply progress to the next key or method.

The revoked keys file should contain a list of public keys, one per line, which have been revoked and can no longer be used to connect to the server. A key entry cannot contain any extras, such as login options, or it will be ignored. If one of the revoked keys is tried during a login attempt, the server will simply ignore it and move on to the next authentication method. An entry will be made in the logs of the attempt, including the key's fingerprint. See the section on logging for a little more on that.

RevokedKeys /etc/ssh/revoked_keys

The RevokedKeys configuration directive is not set in sshd_config(5) by default. It must be set explicitly if it is to be used. This is another need that might be better met by certificates, since a validity interval, in any combination of seconds, minutes, hours, days, or weeks, can be set for certificates, while keys are valid indefinitely.

Key Revocation Lists edit

A Key Revocation List (KRL) is a compact, binary form of representing revoked keys and certificates. In order to use a KRL, the server's configuration file must point to a valid list using the RevokedKeys directive. KRLs themselves are generated with ssh-keygen(1) and can be created from scratch or edited in place. Here a new one is made, populated with a single public key:

$ ssh-keygen -kf  /etc/ssh/revoked_keys -z 1 ~/.ssh/

Here an existing KRL is updated by adding the -u option:

$ ssh-keygen -ukf /etc/ssh/revoked_keys -z 2 ~/.ssh/

Once a KRL is in place, it is possible to test if a specific key or certificate is in the revocation list.

$ ssh-keygen -Qf  /etc/ssh/revoked_keys ~/.ssh/

Only public keys and certificates will be loaded into the KRL. Corrupt or broken keys will not be loaded and will produce an error message if tried. As with the regular RevokedKeys list, a public key destined for the KRL cannot contain any extras like login options, or an error will result when attempting to load it into the KRL or to search the KRL for it.
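The whole cycle can be exercised with a throwaway key pair. A sketch using scratch files rather than /etc/ssh/revoked_keys; ssh-keygen(1) reports each queried key's status when given -Q:

```shell
# Create a key, revoke it in a fresh KRL, then confirm the revocation.
tmp="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"
ssh-keygen -kf "$tmp/revoked_keys" "$tmp/key.pub"        # generate a new KRL
ssh-keygen -Qf "$tmp/revoked_keys" "$tmp/key.pub" || true # lists the key as REVOKED
```

Note that -Q exits with a non-zero status when a revoked key is found, which makes the check easy to use from scripts as well.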

Verify a Host Key by Fingerprint edit

The above examples have been about using keys to authenticate the client to the server. A different context in which keys are used is when the server identifies itself to the client, which happens automatically at the beginning of each non-multiplexed session. In order for that identification to happen the client acquires a public key from the server, usually on or prior to first contact, which it can subsequently use to ensure that it is connecting to the same server again and not an impostor. The default locations for storing these acquired host keys on the client are in /etc/ssh/ssh_known_hosts, if managed by the system administrator, or in ~/.ssh/known_hosts if managed by the client's own account. The format of the contents is a line with a host address and its matching public key. The file is described in detail in the sshd(8) manual page in the section "SSH_KNOWN_HOSTS FILE FORMAT".

When connecting for the first time to a remote host, the server's host key should be verified in order to ensure that the client is connecting to the right machine and not an impostor or anything else. Usually this verification is done by comparing the fingerprint of the server's host key rather than trying to compare the whole key itself. By default the client will show the fingerprint if the key is not already found in the known_hosts register.

$ ssh -l fred
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:LPFiMYrrCYQVsVUPzjOHv+ZjyxCHlVYJMBVFerVCP7k.
Are you sure you want to continue connecting (yes/no)?

That can be compared to a fingerprint received out of band, say by post, e-mail, SMS, courier, and so on. The example above represents the key's fingerprint as a base64-encoded SHA256 checksum, which is the default style. The fingerprint can also be displayed as an MD5 hash in hexadecimal by passing the client's FingerprintHash configuration directive as a run-time argument or by setting it in ssh_config(5).

$ ssh -o FingerprintHash=md5
The authenticity of host ' (' can't be established.
RSA key fingerprint is MD5:10:4a:ec:d2:f1:38:f7:ea:0a:a0:0f:17:57:ea:a6:16.
Are you sure you want to continue connecting (yes/no)?

But the default in new versions, base64-encoded SHA256, has a lower chance of collision.

In OpenSSH 6.7 and earlier, the client showed fingerprints as a hexadecimal MD5 checksum instead of the base64-encoded SHA256 checksum currently used:

$ ssh -l fred
The authenticity of host ' (' can't be established. 
RSA key fingerprint is 4a:11:ef:d3:f2:48:f8:ea:1a:a2:0d:17:57:ea:a6:16. 
Are you sure you want to continue connecting (yes/no)?

Another way of comparing keys is to use the ASCII art visual host key. See further below about that.

Downloading keys edit

Even though a host’s key is usually displayed for review the first time the SSH client tries to connect, it can also be fetched on demand at any time using ssh-keyscan(1):

$ ssh-keyscan  
# SSH-2.0-OpenSSH_8.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLC2PpBnFrbXh2YoK030Y5JdglqCWfozNiSMjsbWQt1QS09TcINqWK1aLOsNLByBE2WBymtLJEppiUVOFFPze+I=
# SSH-2.0-OpenSSH_8.2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9iViojCZkcpdLju7/3+OaxKs/11TAU4SuvIPTvVYvQO32o4KOdw54fQmd8f4qUWU59EUks9VQNdqf1uT1LXZN+3zXU51mCwzMzIsJuEH0nXECtUrlpEOMlhqYh5UVkOvm0pqx1jbBV0QaTyDBOhvZsNmzp2o8ZKRSLCt9kMsEgzJmexM0Ho7v3/zHeHSD7elP7TKOJOATwqi4f6R5nNWaR6v/oNdGDtFYJnQfKUn2pdD30VtOKgUl2Wz9xDNMKrIkiM8Vsg8ly35WEuFQ1xLKjVlWSS6Frl5wLqmU1oIgowwWv+3kJS2/CRlopECy726oBgKzNoYfDOBAAbahSK8R
# SSH-2.0-OpenSSH_8.2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDOmBOknpyJ61Qnaeq2s+pHOH6rdMn09iREz2A/yO2m

Once a key is acquired, its fingerprint can be shown using ssh-keygen(1). This can be done directly with a pipe.

$ ssh-keyscan | ssh-keygen -lf - 
# SSH-2.0-OpenSSH_8.2
# SSH-2.0-OpenSSH_8.2
# SSH-2.0-OpenSSH_8.2
256 SHA256:sxh5i6KjXZd8c34mVTBfWk6/q5cC6BzR6Qxep5nBMVo (ECDSA)
3072 SHA256:hlPei3IXhkZmo+GBLamiiIaWbeGZMqeTXg15R42yCC0 (RSA)
256 SHA256:ZmS+IoHh31CmQZ4NJjv3z58Pfa0zMaOgxu8yAcpuwuw (ED25519)

If more than one public key type is available from the server on the polled port, then ssh-keyscan(1) will fetch each of them. If more than one key is fed via stdin or a file, then ssh-keygen(1) will process them in order. Prior to OpenSSH 7.2, manual fingerprinting was a two-step process: the key was read into a file and then processed for its fingerprint.

$ ssh-keyscan -t ed25519 >
# SSH-2.0-OpenSSH_6.8
$ ssh-keygen -lf                       
256 SHA256:ZmS+IoHh31CmQZ4NJjv3z58Pfa0zMaOgxu8yAcpuwuw (ED25519)

Note that some output from ssh-keyscan(1) is sent to stderr instead of stdout.

A hash, or fingerprint, can be generated manually with awk(1), sed(1), and xxd(1), along with the usual base64 and checksum utilities, on systems where they are found.

$ awk '{print $2}' | base64 -d | md5sum -b | sed 's/../&:/g; s/: .*$//'
$ awk '{print $2}' | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
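The formatting half of the first pipeline can be checked with any one-line public key, since the sed expression merely reshapes md5sum(1)'s output into colon-separated pairs. A sketch feeding it a key taken from the ssh-keyscan(1) output above:

```shell
# Run a public key line through the md5 fingerprint pipeline; the result
# should be sixteen colon-separated hex pairs regardless of which key is used.
fp=$(printf 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDOmBOknpyJ61Qnaeq2s+pHOH6rdMn09iREz2A/yO2m host\n' \
     | awk '{print $2}' | base64 -d | md5sum -b | sed 's/../&:/g; s/: .*$//')
echo "$fp"
```

The first sed expression inserts a colon after every pair of characters and the second strips the trailing " *-" that md5sum(1) appends, leaving just the familiar colon-delimited fingerprint.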

It is possible to find all hosts from a file which have new or different keys from those in known_hosts, if the host names are in clear text and not stored as hashes.

$ ssh-keyscan -t rsa,ecdsa -f ssh_hosts | \
  sort -u - ~/.ssh/known_hosts | \
  diff ~/.ssh/known_hosts -

Using ssh-keyscan(1) with ssh_config(5) edit

The utility ssh-keyscan(1) does not parse ssh_config(5). That is in part to keep the code base simple. There are a lot of configuration options which would be complicated to implement, including but not limited to ProxyJump, ProxyCommand, Match, BindInterface, and CanonicalizeHostname[53]. Resolving host names via the client configuration file can be done by wrapping the utility in a short shell function:

my-ssh-keyscan() {
	for host in "$@" ; do
		ssh-keyscan $(ssh -G "$host" | awk '/^hostname/ {print $2}')
	done
}

That shell function uses the -G option of ssh(1) to resolve each host name using ssh_config(5) and then checks the resulting host name for SSH keys.

ASCII Art Visual Host Key edit

An ASCII art representation of the key can be displayed along with the SHA256 base64 fingerprint:

$ ssh-keygen -lvf key 
256 SHA256:BClQBFAGuz55+tgHM1aazI8FUo8eJiwmMcqg2U3UgWU (ED25519)
+--[ED25519 256]--+
|o+=*++Eo         |
|+o .+.o.         |
|B=.oo.  .        |
|*B.=.o .         |
|= B *   S        |
|. .@ .           |
| +..B            |
|  *. o           |
| o.o.            |
+----[SHA256]-----+

In OpenSSH 6.7 and earlier the fingerprint is in MD5 hexadecimal form.

$ ssh-keygen -lvf key 
2048 37:af:05:99:e7:fb:86:6c:98:ee:14:a6:30:06:bc:f0 (RSA) 
+--[ RSA 2048]----+ 
|          o      | 
|         o .     | 
|        o o      | 
|       o +       | 
|   .  . S        | 
|    E ..         | 
|  .o.* ..        | 
|  .*=.+o         | 
|  ..==+.         | 

More on Verifying SSH Keys

Keys on the client or the server can be verified against known good keys by comparing the base64-encoded SHA256 fingerprints.

Verifying Stray Client Keys

Sometimes it is necessary to compare two uncertain key files to check whether they are part of the same key pair. However, public keys are more or less disposable. So the easy way in such situations on the client machine is to just rename or erase the old, problematic public key and replace it with a new one generated from the existing private key.

$ ssh-keygen -y -f ~/.ssh/my_key_rsa

But if the two parts must really be compared, it can be done in two steps using ssh-keygen(1). First, a new public key is re-generated from the known private key and used to produce a fingerprint on stdout. Next, the fingerprint of the unknown public key is generated for comparison. In this example, the private key my_key_a_rsa and the public key are compared:

$ ssh-keygen -y -f my_key_a_rsa | ssh-keygen -l -f -
$ ssh-keygen -l -f

The result is a base64-encoded SHA256 fingerprint for each key, with the one fingerprint displayed right below the other for easy visual comparison. Older versions don't support reading from stdin, so an intermediate file will be needed there. Even older versions will only show an MD5 checksum for each key. Either way, automation with a shell script is simple enough to accomplish but outside the scope of this book.
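As a hedged, self-contained sketch of that comparison, generating a throwaway pair on the spot (file names are illustrative; OpenSSH 7.2 or later is assumed for reading from stdin):

```shell
# Make a temporary key pair to compare.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/my_key"

# Fingerprint the private half via a regenerated public key on stdout,
# then fingerprint the stored public half directly.
fp_private=$(ssh-keygen -y -f "$tmp/my_key" | ssh-keygen -lf - | awk '{print $2}')
fp_public=$(ssh-keygen -lf "$tmp/my_key.pub" | awk '{print $2}')

# Matching fingerprints mean the two files belong to the same pair.
[ "$fp_private" = "$fp_public" ] && echo "same pair"
rm -r "$tmp"
```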

Verifying Server Keys

Reliable verification of a server's host key must be done when first connecting. It may be necessary to contact the system administrator, who can provide the fingerprint out of band, so as to know it in advance and have it ready to verify the first connection.

Here is an example of the server's RSA key being read and its fingerprint shown as SHA256 base64:

$ ssh-keygen -lf /etc/ssh/     
3072 SHA256:hlPei3IXhkZmo+GBLamiiIaWbeGZMqeTXg15R42yCC0 (RSA)

And here the corresponding ECDSA key is read, but shown as an MD5 hexadecimal hash:

$ ssh-keygen -E md5 -lf /etc/ssh/
256 MD5:ed:d2:34:b4:93:fd:0e:eb:08:ee:b3:c4:b3:4f:28:e4 (ECDSA)

Prior to 6.8, the fingerprint was expressed as an MD5 hexadecimal hash:

$ ssh-keygen -lf /etc/ssh/ 
2048 MD5:e4:a0:f4:19:46:d7:a4:cc:be:ea:9b:65:a7:62:db:2c (RSA)

It is also possible to use ssh-keyscan(1) to get keys from an active SSH server. However, the fingerprints still need to be verified out of band.

Warning: Remote Host Identification Has Changed!

If a server's key does not match what the client finds recorded in either the system's or the local account's known_hosts files, then the client will issue a warning along with the fingerprint of the suspicious key.

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /home/fred/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/fred/.ssh/known_hosts:19
  remove with:
  ssh-keygen -f "/home/fred/.ssh/known_hosts" -R ""
RSA host key for has changed and you have requested strict checking.
Host key verification failed.

Three reasons for the warning are common.

One reason is that the server's keys were replaced, often because the server's operating system was reinstalled without backing up the old keys. Another reason can be that the system administrator has phased out deprecated or compromised keys. That can be planned better, though: if there is time to plan the migration, new keys can simply be added to the server and clients can use the UpdateHostKeys option so that the new keys are accepted while the old keys still match. A third situation is when the connection is made to the wrong machine, such as when the remote system changes IP address because of dynamic address allocation.

In all three cases where the key has changed there is only one thing to do: contact the system administrator and verify the key. Ask whether the OpenSSH server was recently reinstalled, or whether the machine was restored from an old backup. Keep in mind that the system administrator may be you yourself in some cases.

The case which is rather rare, but serious enough that it should be ruled out for sure, is that the wrong machine is part of a man-in-the-middle attack.

In all four cases, an authentic key fingerprint can be acquired by any method where it is possible to verify the integrity and origin of the message, for example via PGP-signed e-mail. If physical access is possible, then use the console to get the right fingerprint. Once the authentic key fingerprint is available, return to the client machine where you got the error and remove the old key from ~/.ssh/known_hosts:

$ ssh-keygen -R

Then try logging in, but compare the key fingerprints first and proceed if and only if the key fingerprint matches what you received out of band. If the key fingerprint matches, then go through with the login process and the key will be automatically added. If the key fingerprint does not match, stop immediately and figure out what you are connecting to. It would be a good idea to get on the phone, a real phone not a computer phone, to the remote machine's system administrator or the network administrator.
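The removal step can be rehearsed safely against a throwaway known_hosts file; everything below (host names, temp directory) is illustrative:

```shell
# Build a known_hosts file with two entries sharing one real key.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"
pub=$(cut -d' ' -f1,2 < "$tmp/key.pub")
printf 'oldhost %s\nkeephost %s\n' "$pub" "$pub" > "$tmp/known_hosts"

# Remove only the stale host's entry; a .old backup is kept.
ssh-keygen -R oldhost -f "$tmp/known_hosts"

# keephost survives, oldhost is gone.
grep -c '^keephost' "$tmp/known_hosts"
```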

Multiple Keys for a Host, Multiple Hosts for a Key in known_hosts

Multiple host names or IP addresses can use the same key in the known_hosts file, either by using pattern matching or simply by listing multiple systems for the same key. That can be done in either the global list of keys in /etc/ssh/ssh_known_hosts or the local, account-specific lists of keys in each account's ~/.ssh/known_hosts file. Labs, computational clusters, and similar pools of machines can make use of keys in that way. Here is a key shared by three specific hosts, identified by name:

server1,server2,server3 ssh-rsa AAAAB097y0yiblo97gvl...jhvlhjgluibp7y807t08mmniKjug...==

Or a range can be specified by using globbing to a limited extent in either /etc/ssh/ssh_known_hosts or ~/.ssh/known_hosts.

172.19.40.* ssh-rsa AAAAB097y0yiblo97gvl...jhvlhjgluibp7y807t08mmniKjug...==

Conversely, for multiple keys for the same address, it is necessary to make multiple entries in either /etc/ssh/ssh_known_hosts or ~/.ssh/known_hosts for each key.

server1 ssh-rsa AAAAB097y0yiblo97gvljh...vlhjgluibp7y807t08mmniKjug...==
server1 ssh-rsa AAAAB0liuouibl kuhlhlu...qerf1dcw16twc61c6cw1ryer4t...==
server1 ssh-rsa AAAAB568ijh68uhg63wedx...aq14rdfcvbhu865rfgbvcfrt65...==

Thus in order to get a pool of servers to share a pool of keys, each server-key combination must be added manually to the known_hosts file:

server1 ssh-rsa AAAAB097y0yiblo97gvljh...07t8mmniKjug...==
server1 ssh-rsa AAAAB0liuouibl kuhlhlu...qerfw1ryer4t...==
server1 ssh-rsa AAAAB568ijh68uhg63wedx...aq14rvcfrt65...==

server2 ssh-rsa AAAAB097y0yiblo97gvljh...07t8mmniKjug...==
server2 ssh-rsa AAAAB0liuouibl kuhlhlu...qerfw1ryer4t...==
server2 ssh-rsa AAAAB568ijh68uhg63wedx...aq14rvcfrt65...==

Though upgrading to certificates might be a more appropriate approach than manually updating lots of keys.

Another way of Dealing with Dynamic (roaming) IP Addresses

It is possible to manually point to the right key using HostKeyAlias, either as part of ssh_config(5) or as a runtime parameter. Here the key for the machine foobar is used to connect to the host

$ ssh -o StrictHostKeyChecking=accept-new \
      -o HostKeyAlias=foobar   \

This is useful when DHCP is not configured to try to keep the same addresses for the same machines over time or when using certain stdio forwarding methods to pass through intermediate hosts.
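In ssh_config(5) form, the same idea might look like this hedged sketch; the alias, host name, and address are illustrative:

```
# ~/.ssh/config excerpt: verify this roaming machine against the key
# recorded for the alias "foobar" in known_hosts, whatever its address.
Host foobar-lab
        HostName 192.0.2.50
        HostKeyAlias foobar
        StrictHostKeyChecking accept-new
```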

Hostkey Update and Rotation in known_hosts

A protocol extension to rotate weak public keys out of known_hosts has been available in OpenSSH from version 6.8[54] onward. With it, the server is able to inform the client of all its host keys and update known_hosts with new ones when at least one trusted key is already known. This method still requires that the private keys be available to the server[55] so that proofs can be completed. In ssh_config(5), the directive UpdateHostKeys specifies whether the client should accept updates of additional host keys from the server after authentication is completed and add them to known_hosts. A server can offer multiple keys of the same type for a period before removing the deprecated key from those offered, thus allowing an automated option for rotating keys as well as for upgrading from weaker algorithms to stronger ones. See also RFC 4819: Secure Shell Public Key Subsystem about key management standards.
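A minimal ssh_config(5) fragment to opt in might look like the following sketch; a client new enough to support the extension is assumed:

```
# Accept server-proven host key updates after authentication
# and record them in known_hosts.
Host *
        UpdateHostKeys yes
```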

Certificate-based Authentication


Certificates are keys which have been signed by another key[56]. The key used for such signing is called the certificate authority. It is made in advance and set aside, reserved for signing only. Other parties use the signing key's public half to verify the authenticity of the signed key being used for server identification, in the case of a host certificate[57], or for login, in the case of a user certificate[58] .

In the interest of privilege separation, make separate certificate authorities for host certificates and user certificates if both are going to be used. As of the time of this writing, either of the elliptic curve algorithms is a good choice.

Overview of SSH Certificates

X.509 is an ITU Telecommunications Standardization Sector standard which defines the format of public key certificates.

When using certificates, either the client or the server is pre-configured to accept keys which have themselves been signed by another key specially designated and set aside just for signing the keys actually used for work. That other key is known as a certificate authority, or CA for short. As with normal keys, certificates are used to generate signatures used in authentication. However, rather than looking up the matching public key in a file, the public key is supplied along with a signature; the signature is used to verify the public key, and the public key is then used to ensure that the negotiations are happening with a party in possession of the matching private key. So a prerequisite for using certificates is at least a passing familiarity with normal SSH. See the chapter on Public Key Authentication; the mechanics of normal key-based authentication are described briefly in its introduction. SSH uses its own simpler certificate format, not the X.509 certificate format.

Two of the main advantages of certificates over plain keys are that they can carry an expiration date, or even a date range of validity, and that they eliminate the need for trust-on-first-use or complicated key verification methods. Mostly they facilitate large-scale deployments by easing the processes of key approval and distribution, and they provide a better option than copying the same host keys across multiple destinations.

User certificates authenticate users to their accounts on the servers. Host certificates authenticate servers to the clients, proving that the clients are connecting to the right system. The use of a principals field to designate users versus hosts is the main difference between host and user certificates. In host certificates, the principals field refers to the server names represented by the certificate. In user certificates, that field refers to the accounts which are allowed to use the certificate for logging in. Additional limitations such as specific source addresses and forced commands are available for user certificates. Dates and times of validity are possible for both. Host certificates and user certificates should use separate certificate authorities. For a more authoritative resource, see the "CERTIFICATES" section of ssh-keygen(1).

SSH User Certificates

User certificates authenticate the user to a server or other remote device; in other words, they allow people and scripts to log in. Authenticating the client to the server by means of user certificates means that the authorized_keys file on the server is no longer needed. Instead, the user certificate, really a signed key, is checked against the certificate authority to verify that the signature is valid. If it is, then the login process can proceed. Another advantage is that the signatures can be designated as valid only for a range of dates, so in practice it is possible to have an expiration date. User certificates are also tied to specific accounts, referred to as 'principals'. The principal(s) allowed to use the signed key must be designated or the server will not accept use of the authentication key even if it is properly signed.

It is even possible to restrict the source of incoming connections or force specific commands using options like force-command and source-address, similar to normal SSH public keys. These restrictions are set at the time of signing. If those options are to be changed, the key must be re-signed or else the certificate becomes invalid.

Files Associated with SSH User Certificates

There are five files associated with user certificates. The certificate authority must be kept safe and stored out of band. If it is lost, then no new user keys can be signed for the pool and a new certificate authority must be established. If it falls into the wrong hands, then an attacker could use it to pass their accounts off as legitimate users:

  • Certificate Authority - a private SSH key generated for signing other keys

The next one is kept on the server itself:

  • Certificate Public Key - the public component of the certificate authority

The last three files are kept on the client systems:

  • Private SSH Key - the private key used for authenticating to the server or remote device
  • Public SSH Key - the matching public key which has been signed by the Certificate Authority
  • User Certificate - the signature made for the User Public Key using the Certificate Authority

Optionally, the user certificate and its key can be associated permanently with a remote server or device using the ssh_config file, either globally or per-account, via the CertificateFile and IdentityFile directives.

Steps for Working with SSH User Certificates

There are four steps in working with user certificates. Step one, creation of a certificate authority, is done just once per pool of users or scripts. Step two, adjusting the servers' configurations, can be repeated for as many server systems or remote devices as are needed. Step three, signing the user keys, is done once per user account. Step four, deploying the user certificate, can be done for as many clients as allowed by the principals and any source-address limitations that might be included.

1. Creating a Certificate Authority

It is a very good idea to keep separate certificate authorities for hosts and users. So, in the interest of privilege separation, make a separate certificate authority for user certificates even if you already have one for host certificates.

$ ssh-keygen -t ed25519 -f ~/.ssh/user_ca_key \
        -C 'User Certificate Authority for *'

The private key created here should be kept somewhere other than the servers. However, the servers will have access to the public component so as to be able to verify the signature that will be put forth by the clients.

2. Storing the Public Component of the Certificate Authority on the Server

The server needs to be able to look up the certificate authority's public component in order to validate the signature on user keys. That is set in the SSH daemon's configuration. Copy the public component to the same place the host keys are stored. Then point the OpenSSH server to it using the TrustedUserCAKeys directive in sshd_config(5):

TrustedUserCAKeys /etc/ssh/

Double check to make sure the file permissions are correct.

$ ls -lhn /etc/ssh/
-rw-r--r--  1 0  0   114B May  4 16:38 /etc/ssh/

Then any number of accounts can be used with that certificate authority.

3. Processing an Existing Public Key

The users must have a key pair already made and then submit the public component of the pair for signing. Sign it and transfer the signed copy back to the right person. Delete any artifacts of this process immediately, since extra public keys lying around can only cause clutter at best. Here a public key has been accepted and a certificate is made with it.

$ ssh-keygen -s user_ca_key -I 'edcba' -z '0002' -n fred \

The resulting certificate will have the internal ID "edcba" and an internal serial number of 2. Both the ID and the serial number must be calculated externally. For successful logins, the certificate's id and serial fields will be included in the log. See the section Logging and Troubleshooting for more depth on the topic. It is a very good idea to list a principal for the certificate, even for user certificates. The principal listed in the certificate does need to match the account it will log in to.

Even though the public key itself is not strictly needed after that on the client side for logging in, it can be good for the client to keep. If the public key has been lost, a new one can be regenerated from the private key, though not the other way around: when the private key is gone, it is gone, so keep a proper backup schedule. If a file exists with the name the public key should have, it had better be the public key itself or else the login attempt will fail.

The logistics for getting the public key and delivering the certificate are outside the scope of this book. But at this point, the resulting certificate should be transferred back to the person working with the key that was submitted.
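The whole signing step can be rehearsed locally with throwaway keys; the paths, ID, serial, and principal below are illustrative:

```shell
# Create a user CA and a user key pair, then sign the user's public key.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/user_ca_key" -C 'example user CA'
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_ed25519"
ssh-keygen -q -s "$tmp/user_ca_key" -I 'edcba' -z 2 -n fred \
        "$tmp/id_ed25519.pub"

# Signing drops id_ed25519-cert.pub next to the public key.
cert_info=$(ssh-keygen -Lf "$tmp/id_ed25519-cert.pub")
printf '%s\n' "$cert_info" | grep -E 'Key ID|Serial'
rm -r "$tmp"
```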

4. Logging in Using an SSH User Certificate

On the client side, both the user certificate and the private key it corresponds to are needed to log in.

$ ssh -o -i server01.ed25519 \

Once things are working manually a shortcut can be made using ssh_config(5) on the client. Use of IdentitiesOnly might also be needed if an agent is used and there are multiple keys in the agent.

Host server01
        User fred
        IdentitiesOnly yes
        IdentityFile /home/fred/.ssh/server01.ed25519
        CertificateFile /home/fred/.ssh/

With those settings, running ssh server01 on that client will try to apply both the designated key and its corresponding user certificate and designated principal.

SSH Host Certificates

One of the main uses of keys by OpenSSH is the authentication of a remote host to a client, so that the client can verify that it is connecting directly to the right system and not to an impostor or via an intruder in between. For that, once acknowledged, the known_hosts file usually keeps a copy of the public key in a register of host keys paired with host names or IP addresses.

The difficulty with host keys in general is in populating the known_hosts file on clients for large pools of client machines or when connecting to new systems for the first time. In a data center or lab or Internet of Things deployment, machines are always coming and going. That means new SSH host keys each time. If there are a lot of hosts involved, then that adds up to a lot of keys in the register. Using a host certificate instead, an arbitrarily large pool of hosts using the same certificate authority need only one entry in the known_hosts register, even as new hosts are added to the pool. By signing the host keys which a new server or device uses to identify itself, it is still possible to roll out new systems with unique keys but have them recognized correctly and safely by clients on the first try without risking the potential for a man-in-the-middle.

By using host certificates, these identifying host keys are signed and the signature can be verified against the agreed upon certificate authority, thus greatly easing the otherwise involved process of collecting and verifying the host's public key when making the first connection.

Files Associated with SSH Host Certificates

There are five files involved in using host certificates. Like with user certificates, the certificate authority must be kept safe. The same precautions apply as for user certificates but for hosts rather than the accounts on them:

  • Certificate Authority - a private key generated for signing other keys

The next three files are kept on the server itself.

  • Certificate public key - the public component of the certificate authority
  • Host Public Key - the actual key that the SSH daemon uses to identify itself to the clients
  • Host Certificate - the signature made for the Host Public Key using the Certificate Authority

Then on the clients, either in the client's register or the system-wide register of recognized hosts:

  • known_hosts - contains a reference to the host certificate and its principals

When the clients find and use a valid host certificate, no entry for the individual host will be added to the known_hosts register.

Steps for Using SSH Host Certificates

There are four general stages in working with host certificates. Step one, creation of a certificate authority, is done just once per server or pool of servers or devices. Step two, signing the host keys, is done once per server or device, as is step three. Step four, configuring the clients, can be repeated for as many client machines or individual login accounts as needed.

With each step, mind the paths. There is no one-size-fits-all solution, so it will be necessary to decide where the files should go.

1. Creating a Certificate Authority

Again, a certificate authority, or CA, is just another SSH key. However, rather than using it for authenticating the servers or clients directly, it is used to sign and then validate the other keys which are actually used for authentication.

$ ssh-keygen -t ecdsa -f ~/.ssh/ca_key \
        -C 'Host Certificate Authority for *'

The private key created here should be kept somewhere other than the servers where it will be used.

The public key in this step must be distributed out of band to the clients which will then use it to verify host identities upon first connection.

2. Fetching and Signing the Host Key

Only the public key gets signed. Fetch the remote host's public host key via a reliable method, sign it, and then upload the resulting certificate to the server. The -h option indicates that this shall be a host certificate and the -s option points to the key used to do the signing. Here a copy of the host key has been acquired from its server and will be worked on locally:

$ ssh-keygen -h -s ~/.ssh/ca_key -V '+1d' -I abcd -z 00001 \
         -n ./
Enter passphrase: 
Signed host key /etc/ssh/ id "abcd" serial 1 for valid from 2020-05-05T09:51:00 to 2020-05-06T09:52:01

The validity interval set by the -V option is a time span relative to the date and time when the key is signed. So it would be possible to make a certificate valid for only an hour tomorrow using the formula -V '+1d2h:+1d3h'. If no start time is set, then the value is interpreted as the stopping time. If a specific stopping time or date is required, that is best done by having a script calculate it and then call ssh-keygen(1). If no time is given at all, the certificate will be considered valid indefinitely.
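Here is a hedged, runnable sketch of the relative form, signing a throwaway host key for one day and reading back the resulting Valid: field (all names are illustrative):

```shell
# Make a CA and a host key, then sign the host key for the next 24 hours.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ca"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ssh_host_ed25519_key"
ssh-keygen -q -h -s "$tmp/ca" -V '+1d' -I abcd -z 1 \
        -n server.example.org "$tmp/ssh_host_ed25519_key.pub"

# The certificate records the computed start and stop times.
valid_line=$(ssh-keygen -Lf "$tmp/ssh_host_ed25519_key-cert.pub" | grep 'Valid:')
echo "$valid_line"
rm -r "$tmp"
```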

The -I option assigns a label for the purpose of identifying the certificate.

The -z option manually assigns a serial number to the certificate. That serial number must be extracted from the old certificate and then incremented if it is to be kept in sequence. The default is to not have a serial number. The -n option assigns a set of principals; in the context of host certificates, that means which hosts may use the certificate.

The contents of the certificate can be reviewed using the -L option.

$ ssh-keygen -L -f          
        Type: host certificate
        Public key: ECDSA-CERT SHA256:kVSFLH5MP/3uJWU57JxD8xVFs7ia8Pww8/ro+pq4S50
        Signing CA: ECDSA SHA256:INewUSvbnfVbgUhtLBhh+XKL0uN99qbXjsi0jvD/IGI (using ecdsa-sha2-nistp256)
        Key ID: "abcd"
        Serial: 1
        Valid: from 2020-05-05T09:51:00 to 2020-05-06T09:52:01
        Critical Options: (none)
        Extensions: (none)

The certificate for the public host key must be transferred over to where the server can use it. That usually means the same directory where the regular public host key is found, which is /etc/ssh/ by default. Check to be sure that the certificate has the right permissions after copying it into place.

$ ls -nlh /etc/ssh/ssh_host_ecdsa_key*.pub
-rw-r--r--  1 0  0   653 May  4 16:49 /etc/ssh/
-rw-r--r--  1 0  0   172 Feb 21 16:09 /etc/ssh/

3. Publishing the Host Certificate

Once the host certificate is in place, the SSH daemon on the remote host must be pointed at the host certificate. See sshd_config(5) for more.

HostCertificate /etc/ssh/

The SSH daemon must then be instructed to reload its configuration file. The exact method varies from system to system, but ultimately the daemon will receive a HUP signal.

4. Updating Clients to Acknowledge the Designated Certificate Authority

Finally add a reference to the certificate authority in the client's known_hosts file:

@cert-authority * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJQHK0uoOpBfynyKrjF/SjsLMFewUAihosD6UL3/HkaFPI1n3XAg9D7xePyUWf8thR2e0QVl5TeGLdFiGyCgt0=

That's it. However, it is important to realize that each of the above steps can fail more or less silently, in which case the client falls back to the usual approach of verifying the host's identity without reporting any error for its part. Any useful debugging information will be in the daemon's logs, and even that will be of a limited nature. See the chapter on Logging and Troubleshooting for examples.

Orchestration of Certificate Deployment

Automatically deploying certificates, either user certificates or host certificates, is beyond the scope of this book. There are many ways to do it and many factors involved. The details depend not only on which orchestration software is used, but also which specific distro or operating system is in place at the end points. In short, figure out how to do it manually and then figure out how to automate that process using the work flow allowed by the deployment scripts or software.

However, it is important to note that the certificate authority can be kept in an agent. Investigate the -U option for signing.

Limiting User Certificates

Various limitations can be bound to a user certificate inside the certificate itself. These are mostly specified using the -O option when signing the key and include a validity interval, a forced command, source address ranges, disabling pseudo-terminal allocation, and others. See the manual page for ssh-keygen(1) for an authoritative list. These limitations can be combined or used separately. The examples below will address them separately for clarity.

Time Limitations for User Certificates

The certificate can be made valid for a planned time period, referred to here as a validity interval. The validity interval can have a specific starting date and time, an ending date and time, or both. However, each certificate may have only a single validity interval.

Validity intervals are specified with the -V option and can use an absolute date and time range or a relative one. Using the key example from the user certificate section above, the following would limit when the certificate would be valid. Specifically, it is good for a five minute period on June 24, 2020 from 1:55pm through 2pm.

$ ssh-keygen -V '202006241355:202006241400' \
        -s user_ca_key -I 'edcba' -z '0003' -n fred \

Be sure to look closely at the resulting output to ensure that the range is what it needs to be.

Relative intervals can be used, too. Here is a certificate which is good for only five minutes, starting right away:

$ ssh-keygen -V ':+5m' \
        -s user_ca_key -I 'edcba' -z '0004' -n fred \

Note that the seconds are included and are counted from when the signing is initiated not when the passphrase is eventually entered and the signing finally completed. So if the certificate signing is initiated at 35 seconds past the top of the minute, the expiration time will also be at 35 seconds past the fifth minute. And, again, look closely at the resulting output.

Forced Commands with User Certificates

A user certificate can be tied to a specific command on the server by using the -O option as it is created. In this example, the certificate will only ever show the time and date whenever it is used to connect to the SSH server:

$ ssh-keygen -O force-command='/bin/date +"%T %F"' \
        -s user_ca_key -I 'edcba' -z '0005' -n fred \

If there is a forced command in both the certificate and sshd_config(5), then the latter takes precedence. Any command that was passed as a run time argument is overridden, yet can be found in the SSH_ORIGINAL_COMMAND environment variable. Commands that come from inside the certificate won't affect the SSH_ORIGINAL_COMMAND variable and will have to be parsed from the certificate itself, which will be held in the ephemeral file pointed to by the SSH_USER_AUTH environment variable.

$ awk '/^publickey/ {print $2,$3}' ${SSH_USER_AUTH} \
        | ssh-keygen -Lf -

The file will only exist while the session is still open. The SSH_USER_AUTH variable itself will only be set if the SSH server's configuration has ExposeAuthInfo set to 'yes' and the default is 'no'.

Source Address Restrictions on User Certificates

A certificate can be limited to a specific CIDR range.

$ ssh-keygen -O source-address=',' \
        -s user_ca_key -I 'edcba' -z '0006' -n fred \

The CIDR range must be valid.
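A runnable sketch with throwaway keys and documentation-prefix address ranges; every name and range here is illustrative:

```shell
# Sign a user key that is only usable from two documentation subnets.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/user_ca_key"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id"
ssh-keygen -q -s "$tmp/user_ca_key" -I edcba -n fred \
        -O source-address='192.0.2.0/24,198.51.100.0/24' "$tmp/id.pub"

# The restriction appears under Critical Options.
opts=$(ssh-keygen -Lf "$tmp/id-cert.pub")
printf '%s\n' "$opts" | grep -A1 'Critical Options'
rm -r "$tmp"
```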

Viewing Limitations on User Certificates

If the certificate is at hand, it is possible to look at it in detail and see which limitations apply, certificate-side. Below is an example certificate for the account 'fred', two LAN ranges, and just over one hour of access.

$ ssh-keygen -Lf
        Type: user certificate
        Public key: ED25519-CERT SHA256:hSy7QrAApIU1LgDCUrtBK2F2TZxhvincnJ0djSDow7I
        Signing CA: ED25519 SHA256:dVgTW1INbhvHjHbeAe10R9Niu8BpejifO286RZ7/niU (using ssh-ed25519)
        Key ID: "edcba"
        Serial: 7
        Valid: from 2020-06-24T15:17:00 to 2020-06-24T16:23:47
        Critical Options: 
                force-command /bin/date +"%T",source-address=,

Obviously the certificate itself cannot show any additional restrictions made server-side in the SSH server's configuration.

If the user certificate is not at hand, but is used for authentication, then limitations and all other embedded characteristics can be gleaned by using the SSH_USER_AUTH variable provided by the ExposeAuthInfo option in sshd_config(5) to fetch the certificate from the server. The certificate itself will leave the SSH_ORIGINAL_COMMAND variable alone, so the temporary file will be the only way to see what was actually in the certificate. Again, the certificate file pointed to by the SSH_USER_AUTH variable will only exist while the session is open.

Host-based Authentication


Host-based authentication allows hosts to authenticate on behalf of all or some of that particular host's users. Those accounts can be all of the accounts on a system or a subset designated by the Match directive. This type of authentication can be useful for managing computing clusters and other fairly homogeneous pools of machines.

In all, three files on the server must be prepared for host-based authentication. On the client only two must be modified, but the host itself must have SSH host keys assigned. What follows sets up host-based authentication from one system to another in a single direction. For authentication both directions, follow the procedure twice but reverse the roles of the systems.

Client-side Configurations for Host-based Authentication

On the client or source host, two files must be configured and in addition at least one host key must exist:

/etc/ssh/ssh_known_hosts - global file for public keys for known hosts
/etc/ssh/ssh_config - allow clients to request host-based authentication

Then the client's host keys must be created if they do not already exist:

$ ssh-keygen -A
These three steps need to be done on each of the client systems which will be connecting to the specified host. Once set up, accounts on one system will be able to connect to the other system without further interactive authentication.

Note that in some environments host-based authentication might not be considered sufficient to prevent unauthorized access, since it mostly operates on a host-by-host basis.

1. Populate the Client with the Server's Public Keys

The remote server's public host keys must be stored on the client system in the global client configuration file /etc/ssh/ssh_known_hosts. One way to get the public keys from the server is to fetch them using ssh-keyscan(1) and save them.

$ ssh-keyscan | tee -a /etc/ssh/ssh_known_hosts

Be sure to make a backup copy of /etc/ssh/ssh_known_hosts, if it exists, before trying anything. Another way would be to add the server's public host key to the ~/.ssh/known_hosts files in the relevant accounts. But that way is more work.

However they are acquired and verified, the public keys listed there must obviously correspond to the private host keys on the server. Any or all of the three types can be used: RSA, ECDSA, or Ed25519. DSA should no longer be used. For verification of the public keys, refer to the section on how to verify a host key by fingerprint for some relevant methods.
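The fingerprints of collected keys can be listed with ssh-keygen(1) for comparison against those published by the server's administrator. A self-contained sketch using a throwaway Ed25519 key (the file names are illustrative):

```shell
# Generate a throwaway Ed25519 key pair, then list the fingerprint of
# the public half, just as entries in ssh_known_hosts can be checked.
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/demo_key"
ssh-keygen -lf "$tmpdir/demo_key.pub"
rm -rf "$tmpdir"
```

Against a real file, ssh-keygen -lf /etc/ssh/ssh_known_hosts prints one fingerprint per stored key.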

2. System-wide Client Configuration

Two changes must be made. First, the client's configuration must request host-based authentication when connecting to the designated systems. Second, the client configuration file must be set to enable ssh-keysign(8). It is a helper application to access the local host keys and generate the digital signature required for host-based authentication. Both changes can be done globally in /etc/ssh/ssh_config.

Here is an excerpt from /etc/ssh/ssh_config on the client trying host-based authentication to all machines. The Host directive in ssh_config(5) can be used to further constrain access to a particular server or group of servers.

Host *
	HostbasedAuthentication yes
	EnableSSHKeysign yes

In some distributions, such as Red Hat Enterprise Linux 8, the EnableSSHKeysign directive may need to be placed into the general section before it will work. In that case, do this:

Host *
	HostbasedAuthentication yes

Host *
	EnableSSHKeysign yes

Other configuration directives can be added as needed. For example, here are two additional settings applied to the same pool:

Host *
	HostbasedAuthentication yes
	EnableSSHKeysign yes
	ServerAliveCountMax 3
	ServerAliveInterval 60

If the home directory of the client host is one that is shared with other machines, say using NFS or AFS, then it may be useful to look at the NoHostAuthenticationForLocalhost directive, too.
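A sketch of that directive in a client configuration (whether it is appropriate depends on how the home directories are shared):

```
# Hypothetical excerpt from /etc/ssh/ssh_config or ~/.ssh/config
Host *
        NoHostAuthenticationForLocalhost yes
```

It suppresses host key checking for connections to localhost, which would otherwise complain as the shared home directory, and thus known_hosts, follows the account from machine to machine.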

As a bit of trivia, the program ssh-keysign(8) itself must be SUID root, but SUID was probably set when it was installed, so no changes there should be needed.

3. Set Up the Client System's Own Host Keys

The program ssh-keysign(8) needs to read the client system's private host keys in /etc/ssh/. Installing the OpenSSH server will create these keys, but the server itself does not need to be installed on the client system; it is enough to have the keys by themselves. If the client system's private host keys do not exist, it will be necessary to create them manually with ssh-keygen(1) before host-based authentication will work.

$ ssh-keygen -A

The default path is /etc/ssh/ when using the -A option.

Note that although the client system does not have to have an SSH server actually running in order to use host-based authentication to reach another system, it is entirely feasible to install the SSH server on the client, and then disable or uninstall it, as a way to get the host keys in place.

Server-side Configurations for Host-based Authentication

Three files on the server or target host must be modified to enable and allow host-based authentication:

/etc/shosts.equiv same syntax as the old hosts.equiv
/etc/ssh/ssh_known_hosts holds the host public keys from the clients
/etc/ssh/sshd_config turns on host-based authentication

The exact location of shosts.equiv may vary depending on operating system and distro.

1. Registering the Allowed Client Systems with the Server

The shosts.equiv file must contain a list of the client systems which are allowed to attempt host-based authentication. The file can contain host names, IP addresses, or net groups. It is best to keep this file simple and oriented to just the list of hosts, either by name or IP number. It provides only the first cut, anyway. For fine tuning, use sshd_config(5) instead to set or revoke access for specific users and groups.

It is important to note that when using the shosts.equiv file, the user name on the client and server systems must match. For instance, for user 'bob' on the client to connect, the account on the server must also be 'bob'. If user 'alice' on the client needs to connect to the account 'bob', that must be specified in the .shosts file for 'bob', not in the global shosts.equiv file.

This file resides in the directory /etc/ in OpenBSD, which this book uses as the reference system when possible. On other systems the file might be found in the directory /etc/ssh/. Either way, the shosts.equiv file identifies which addresses are allowed to try authenticating. Check the manual page for sshd(8) on the actual system being used to be sure of the correct location.

Hunt for a manual page hosts.equiv(5) for more details on shosts.equiv (and .shosts), but consider that its use is long since deprecated and most systems won't even have that manual page.
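As an illustrative sketch (the host names, address, and net group are hypothetical), shosts.equiv simply lists which client systems may attempt host-based authentication, one per line:

```
# Hypothetical /etc/shosts.equiv
client1.example.org
192.0.2.10
+@trustedhosts
```

The last line refers to a net group; plain host names or IP addresses are usually enough.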

Leftovers from the Old Days

Two legacy files, /etc/netgroup and /etc/netgroup.db, may be present on really old systems and are optionally consulted if they are referred to from within shosts.equiv. Each line in the netgroup file consists of a net group name followed by a list of the members of the net group, specifically host, user, and domain.

/etc/netgroup - default netgroup list
/etc/netgroup.db - netgroup database, built from netgroup

However, these are mostly legacy from the old rhosts and should be avoided.

Another leftover is using .shosts in each account's home directory. That would be a per-account equivalent to shosts.equiv. In that way, individual users can have a local .shosts containing a list of trusted remote machines, or user-machine pairs, which are allowed to try host-based authentication.

.shosts must not be writable by any group or other users. Permissions set to 0644 should do it. The usage and format of .shosts are exactly the same as for .rhosts, but it allows host-based authentication without permitting login by the insecure legacy tools rlogin and rsh. The list is one line per host. The first column is obligatory and contains the name or address of the host permitted to attempt host-based authentication.
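A sketch of a per-account ~/.shosts (the names are hypothetical); the optional second column names the remote user allowed to authenticate as this account:

```
# Hypothetical ~/.shosts for a local account
client1.example.org
client2.example.org alice
```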

However, a global shosts.equiv is preferable to having .shosts in each and every home directory.

2. Populate the Server with the Client's Public Keys

The client systems listed in the server's shosts.equiv must also have their public keys in /etc/ssh/ssh_known_hosts on the server in order to be acknowledged. There are three required data fields per line. First is the host name or IP address or comma separated list of them, corresponding to those from shosts.equiv. Next is the key type, either ssh-rsa for RSA keys or ssh-ed25519 for Ed25519 keys or ecdsa-sha2-nistp256 for ECDSA keys. The third field is the public key itself. Last, and optionally, can be a comment about the key.

desktop, ssh-rsa AAAAB3NzaC1yc2EAAAABIw ... qqU24CcgzmM=

Just like step one for the client, there are many ways of collecting the public key information from the client and getting it to the server. It can be copied using sftp(1), copied from ~/.ssh/known_hosts, or grabbed using ssh-keyscan(1), though the latter two methods work only if the client system also has an SSH server running.

$ ssh-keyscan -t rsa | tee -a /etc/ssh/ssh_known_hosts

3. System-wide Server Configuration

The third file that must be changed on the server is sshd_config(5). It must be told to allow host-based authentication by setting the HostbasedAuthentication directive, either for all users or just some users or just certain groups.

HostbasedAuthentication yes

Host-based authentication can be limited to specific users or groups. Here is an example excerpt from sshd_config(5) allowing any user in the group 'cluster2' to let the hosts authenticate on their behalf:

Match Group cluster2
        HostbasedAuthentication yes

Certain host key types can be allowed using HostbasedAcceptedAlgorithms, formerly HostbasedAcceptedKeyTypes, with a comma-delimited list of acceptable key algorithms. All the key algorithms which are to be allowed must be listed, because those not listed are not allowed. Patterns can be used in the whitelist. Below, Ed25519 and ECDSA keys are allowed, but others, such as RSA and DSA, are not.

HostbasedAuthentication yes
HostbasedAcceptedAlgorithms ssh-ed25519*,ecdsa-sha2*

Host Names and Other DNS Matters

Make complete DNS entries for the clients if possible, including allowing reverse lookups. If the client machine is not listed in DNS, then the server might have trouble recognizing it. In that case you might have to tell sshd(8) not to do reverse lookups in DNS for connecting hosts. This can be a problem on informal LANs where hosts have addresses but no registered host names. Here is an example from /etc/ssh/sshd_config to work around lack of DNS records for the client using the HostbasedUsesNameFromPacketOnly directive.

HostbasedAuthentication yes
HostbasedUsesNameFromPacketOnly yes

Don't add this directive unless the normal way fails. Otherwise it can interfere and prevent authentication.

Sometimes the host trying to connect gets identified to the target host as something other than what was expected. That too will block authentication. So make sure that the configuration files match what the host actually calls itself.

Debugging

Configuration should be quite straightforward, with small changes in only three files on the server and two on the client to manage. If there are difficulties, be prepared to run sshd(8) standalone at debug level 1 (-d) to 3 (-ddd) and ssh(1) at debug level 3 (-vvv) a few times to see what you missed. The mistakes have to be cleared up in the right order, so take it one step at a time.

If the message "debug3: auth_rhosts2_raw: no hosts access file exists" turns up in the server's output, the shosts.equiv file is probably in the wrong place or missing, and no fallback ~/.shosts lies in reserve in that account.

If the server cannot find the key for the client despite it being in known_hosts and if the client's host name is not in regular DNS, then it might be necessary to add the directive HostbasedUsesNameFromPacketOnly. This uses the name supplied by the client itself rather than doing a DNS lookup.

Here is a sample excerpt from a successful host-based authentication for user 'fred' from the host at, also known as desktop1, using an Ed25519 key. The server first tries looking for an ECDSA key and does not find it.

# /usr/sbin/sshd -ddd
debug2: load_server_config: filename /etc/ssh/sshd_config
debug3: /etc/ssh/sshd_config:111 setting HostbasedAuthentication yes
debug3: /etc/ssh/sshd_config:112 setting HostbasedUsesNameFromPacketOnly yes
debug1: sshd version OpenSSH_6.8, LibreSSL 2.1
debug1: userauth-request for user fred service ssh-connection method hostbased [preauth]
debug1: attempt 1 failures 0 [preauth]
debug2: input_userauth_request: try method hostbased [preauth]
debug1: userauth_hostbased: cuser fred chost desktop1. pkalg ecdsa-sha2-nistp256 slen 100 [preauth]
debug3: mm_answer_keyallowed: key_from_blob: 0x76eede00
debug2: hostbased_key_allowed: chost desktop1. resolvedname ipaddr
debug2: stripping trailing dot from chost desktop1.
debug2: auth_rhosts2: clientuser fred hostname desktop1 ipaddr desktop1
debug1: temporarily_use_uid: 1000/1000 (e=0/0)
debug1: restore_uid: 0/0
debug1: fd 4 clearing O_NONBLOCK
debug2: hostbased_key_allowed: access allowed by auth_rhosts2
debug3: hostkeys_foreach: reading file "/etc/ssh/ssh_known_hosts"
debug3: record_hostkey: found key type ED25519 in file /etc/ssh/ssh_known_hosts:1
debug3: load_hostkeys: loaded 1 keys from desktop1
debug1: temporarily_use_uid: 1000/1000 (e=0/0)
debug3: hostkeys_foreach: reading file "/home/fred/.ssh/known_hosts"
debug1: restore_uid: 0/0
debug1: check_key_in_hostfiles: key for host desktop1 not found
Failed hostbased for fred from port 10827 ssh2: ECDSA SHA256:CEXGTmrVgeY1qEiwFe2Yy3XqrWdjm98jKmX0LK5mlQg, client user "fred", client host "desktop1"
debug3: mm_answer_keyallowed: key 0x76eede00 is not allowed
debug3: mm_request_send entering: type 23
debug2: userauth_hostbased: authenticated 0 [preauth]
debug3: userauth_finish: failure partial=0 next methods="publickey,password,keyboard-interactive,hostbased" [preauth]
debug1: userauth-request for user fred service ssh-connection method hostbased [preauth]
debug1: attempt 2 failures 1 [preauth]
debug2: input_userauth_request: try method hostbased [preauth]
debug1: userauth_hostbased: cuser fred chost desktop1. pkalg ssh-ed25519 slen 83 [preauth]
debug3: mm_key_allowed entering [preauth]
debug3: mm_request_send entering: type 22 [preauth]
debug3: mm_key_allowed: waiting for MONITOR_ANS_KEYALLOWED [preauth]
debug3: mm_request_receive_expect entering: type 23 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 22
debug3: mm_answer_keyallowed entering
debug3: mm_answer_keyallowed: key_from_blob: 0x7e499180
debug2: hostbased_key_allowed: chost desktop1. resolvedname ipaddr
debug2: stripping trailing dot from chost desktop1.
debug2: auth_rhosts2: clientuser fred hostname desktop1 ipaddr desktop1
debug1: temporarily_use_uid: 1000/1000 (e=0/0)
debug1: restore_uid: 0/0
debug1: fd 4 clearing O_NONBLOCK
debug2: hostbased_key_allowed: access allowed by auth_rhosts2
debug3: hostkeys_foreach: reading file "/etc/ssh/ssh_known_hosts"
debug3: record_hostkey: found key type ED25519 in file /etc/ssh/ssh_known_hosts:1
debug3: load_hostkeys: loaded 1 keys from desktop1
debug1: temporarily_use_uid: 1000/1000 (e=0/0)
debug3: hostkeys_foreach: reading file "/home/fred/.ssh/known_hosts"
debug1: restore_uid: 0/0
debug1: check_key_in_hostfiles: key for desktop1 found at /etc/ssh/ssh_known_hosts:1
Accepted ED25519 public key SHA256:BDBRg/JZ36+PKYSQTJDsWNW9rAfmUQCgWcY7desk/+Q from fred@desktop1
debug3: mm_answer_keyallowed: key 0x7e499180 is allowed
debug3: mm_request_send entering: type 23
debug3: mm_key_verify entering [preauth]
debug3: mm_request_send entering: type 24 [preauth]
debug3: mm_key_verify: waiting for MONITOR_ANS_KEYVERIFY [preauth]
debug3: mm_request_receive_expect entering: type 25 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 24
debug3: mm_answer_keyverify: key 0x7e49a700 signature verified
debug3: mm_request_send entering: type 25
Accepted hostbased for fred from port 10827 ssh2: ED25519 SHA256:BDBRg/JZ36+PKYSQTJDsWNW9rAfmUQCgWcY7desk/+Q, client user "fred", client host "desktop1"
debug1: monitor_child_preauth: fred has been authenticated by privileged process

Note any warnings or error messages and read them carefully. If there is too much output, remember the server's -e option, which sends the log to stderr where it can be redirected to a separate file and read afterwards.

Since this method relies on the client as well as the server, running the client with increased verbosity can sometimes help too.

$ ssh -v
$ ssh -vv
$ ssh -vvv

If there is too much output from the client to handle, remember the -E option and redirect the debug logs to a file and then read that file at leisure.

Load Balancing


MaxStartups

Random early drop can be enabled by specifying the three colon-separated values start:rate:full. After the number of unauthenticated connections reaches the value specified by start, sshd(8) will begin to refuse new connections at a percentage specified by rate. The proportional rate of refused connections then increases linearly as the limit specified by full is approached until 100% is reached. At that point all new attempts at connection are refused until the backlog goes down.

MaxStartups   10:30:100

For example, if MaxStartups 5:30:90 is given in sshd_config(5), then starting with 5 new connections pending authentication the server will start to drop 30% of the new connections. By the time the backlog increases to 90 pending unauthenticated connections, 100% will be dropped.

In the default settings, the value for full has been increased to 100 pending connections to make it harder to succumb to a denial of service due to attack or heavy load. So the new default is 10:30:100.
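The resulting drop probability can be worked out with a little arithmetic. The following shell function is purely illustrative (it is not part of OpenSSH) and mirrors the linear interpolation described above:

```shell
# Hypothetical helper: probability, in percent, that sshd drops a new
# unauthenticated connection, given MaxStartups start:rate:full and the
# current number of pending unauthenticated connections.
drop_percent() {
    start=$1 rate=$2 full=$3 current=$4
    if [ "$current" -lt "$start" ]; then
        echo 0
    elif [ "$current" -ge "$full" ]; then
        echo 100
    else
        # linear interpolation from rate% at start up to 100% at full
        echo $(( rate + (100 - rate) * (current - start) / (full - start) ))
    fi
}

drop_percent 10 30 100 5     # below start: prints 0
drop_percent 10 30 100 10    # at start: prints 30
drop_percent 10 30 100 100   # at full: prints 100
```

With 10:30:100, a backlog of 55 pending connections gives 30 + 70 × 45 / 90 = 65, that is, a 65% chance that a new connection is dropped.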

Alternatively, if the number of incoming connections is not managed at the network level by a packet filter or other tricks like round-robin DNS, it is possible to limit it at the SSH server itself. Setting MaxStartups to an integer sets a hard limit on the maximum number of concurrent unauthenticated connections to the SSH daemon.

MaxStartups   10

Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for another connection. The old default was 10.

Preventing Timeouts Of A Not So Active Session

There are two connections that can be tracked during an SSH session: the network's TCP connection and the encrypted SSH session traveling on top of it. Some tunnels and VPNs might not be active all the time, so there is the risk that the SSH session times out or even that the TCP session times out at a router or firewall. The network connection can be tracked with TCPKeepAlive, but that is not an accurate indication of the state of the actual SSH connection. It is, however, a useful indicator of the status of the actual network. Either the client or the server can counter such timeouts by keeping the encrypted connection active using a heartbeat.

On the client side, if the global client configuration is not already set, individuals can use ServerAliveInterval to choose an interval in seconds for server alive heartbeats, and ServerAliveCountMax to set the maximum number of those messages that may go unanswered by the server before the encrypted SSH session is considered closed.

ServerAliveInterval  15
ServerAliveCountMax  4

On the server side, ClientAliveInterval sets the interval in seconds between client alive heartbeats. ClientAliveCountMax sets the maximum number of missed client messages allowed before the encrypted SSH session is considered closed. If no other data has been sent or received during that time, sshd(8) will send a message through the encrypted channel to request a response from the client. If sshd_config(5) has ClientAliveInterval set to 15, and ClientAliveCountMax set to 4, unresponsive SSH clients will be disconnected after approximately 60 (= 15 x 4) seconds.

ClientAliveInterval  15
ClientAliveCountMax  4

If a time-based RekeyLimit is also used but the time limit is shorter than the ClientAliveInterval heartbeat, then the shorter re-key limit will be used for the heartbeat interval instead.

This is more or less the same principle as on the server side. On the client, that is set in ~/.ssh/config and can be applied globally to all connections from that account or selectively to specific connections using a Host or Match block.

Ensuring Timeouts Of An Inactive Interactive Session

If the server is no longer disconnecting idle SSH sessions when they reach the timeout configured by the ClientAliveInterval option, then a work-around is to set the shell's TMOUT variable to the desired timeout value. When TMOUT is set, it specifies the number of seconds the shell will wait for a line of input to be entered before closing the shell and thus the SSH session. Note that this means pressing Enter, too, because other typing is not enough by itself to prevent the timeout; the line must actually be entered for the timer to be reset.

On the server, check for the presence of the SSH_CONNECTION variable, which is usually empty unless currently in an SSH session, to differentiate an SSH session from a local shell. If the account is allowed to make changes to the timeout, then the following can be in the account's own profile, such as ~/.profile,

if [ "$SSH_CONNECTION" != "" ]; then
        # 10 minutes
        TMOUT=600
        export TMOUT
fi

If the account must not be able to change this setting, then it must be in the global profile and made read-only, such as somewhere under /etc/profile.d/,

if [ "$SSH_CONNECTION" != "" ]; then
        # 10 minutes
        TMOUT=600
        readonly TMOUT
        export TMOUT
fi

Both examples above are for Bourne shells, their derivatives, and maybe some other shells. A few shells might have other options available, such as an actual autologout variable found in tcsh(1).

TCP Wrappers, Also Called tcpd(8)

As of 6.7, OpenSSH's sshd(8) no longer supports TCP Wrappers, also referred to as tcpd(8). So this subsection only applies to 6.6 and earlier. Other options that can be used instead of tcpd(8) include packet filters like PF [59], ipf, NFTables, or even old IPTables. In particular, the Match directive for the OpenSSH server supports filtering by CIDR address. Use these instead, and keep in mind the phrase "equivalent security control", which can smooth out hassles caused by security auditors who may still have a "tcpwrappers" checkbox left over from the 1990s on their worksheets.

The tcpd(8) program was an access control utility for incoming requests to Internet services. It was used for services that have a one-to-one mapping to executables, such as sshd(8), and which had been compiled to interact with it. It checked first a whitelist (/etc/hosts.allow) and then a blacklist (/etc/hosts.deny) to approve or deny access. The first pattern to match a given connection attempt was used. The default in /etc/hosts.deny was to block access if no rule matched in /etc/hosts.allow:

sshd: ALL

In addition to access control, tcpd(8) could be set to run scripts using twist or spawn when a rule was triggered. spawn launches another program as a child process of tcpd(8). From /etc/hosts.allow:

sshd: : allow
sshd: : spawn \
	/bin/date -u +"%%F %%T UTC from %h" >> /var/log/sshd.log : allow

The variable %h expands to the connecting client's host name or IP number. The manual page for hosts_access(5) includes a full description of the variables available. Because the program in the example, date, uses the same symbol for its own format specifiers, the character (%) must be escaped (%%) so that it is ignored by tcpd(8) and passed on to date correctly.
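To see the effect of the doubling: after tcpd(8) collapses each %% to % and expands %h, the child process runs an ordinary date(1) command. The address below is a hypothetical stand-in for what %h would have produced:

```shell
# What the spawn rule ultimately executes, with %h already expanded
# by tcpd ("" is a hypothetical client address).
date -u +"%F %T UTC from"
```

The %F and %T specifiers reach date(1) intact and print the date and time.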

twist replaces the service requested with another program. It is sometimes used for honeypots, but can really be used for anything. From /etc/hosts.deny:

sshd: : deny
sshd: ALL : twist /bin/echo "Sorry, fresh out." : deny

With TCP Wrappers, the whitelist /etc/hosts.allow is searched first, then the blacklist /etc/hosts.deny. The first match is applied. If no applicable rules are found, then access is granted by default. TCP Wrappers should not be used any more; use the better alternatives instead. See also the Match directive in sshd_config(5) regarding CIDR addresses, or else the AllowUsers and DenyUsers directives.

Using TCP Wrappers To Allow Connections From Only A Specific Subnet

One way was simply to set sshd(8) to listen only on the local address and not accept any external connections. To use TCP Wrappers for that instead, put a line in /etc/hosts.deny blocking everything:

sshd: ALL

And add an exception for the allowed range in /etc/hosts.allow, designating the IP range using CIDR notation or by domain name:


The same method can be used to limit access to just the localhost ( by adding a line to /etc/hosts.allow:


Again, the best practice is to block from everywhere and then open up exceptions. Keep in mind that if domains are used instead of IP ranges, DNS entries must be in order and DNS itself accessible. However, the above is of historical interest only. The same kinds of limitation are better achieved by setting sshd_config(5) accordingly, using the Match directive in the OpenSSH server or the packet filter in the operating system itself instead of TCP Wrappers.

The Extended Internet Services Daemon (xinetd)

The Extended Internet Services Daemon, xinetd(8), can provide many kinds of access control. That includes, but is not limited to, the name, address, or network of the remote host, and the time of day. It can place limits on the number of instances of each service as well as discontinue services if load exceeds a certain limit.

The super-server listens for incoming requests and launches sshd(8) on demand, so it is necessary to first stop sshd(8) from running as a standalone daemon. This may mean modifying System V init scripts or Upstart configuration files. Then make an xinetd(8) configuration file for the SSH service. It will probably go in /etc/xinetd.d/ssh. The argument -i is important, as it tells sshd(8) that it is being run from xinetd(8).

service ssh
{
	socket_type     = stream
	protocol        = tcp
	wait            = no
	user            = root
	server          = /usr/sbin/sshd
	server_args     = -i
	per_source      = UNLIMITED
	log_on_failure  = USERID HOST
	access_times    = 08:00-15:25
	banner          = /etc/banner.inetd.connection.txt
	banner_success  = /etc/banner.inetd.welcome.txt
	banner_fail     = /etc/banner.inetd.takeahike.txt

	# instances       = 10
	# nice            = 10
	# bind            =
	# only_from       =
	# no_access       =
	# no_access       +=
}

Finally, reload the configuration by sending SIGHUP to xinetd(8).

Multiplexing


Multiplexing is the ability to send more than one signal over a single line or connection. In OpenSSH, multiplexing can re-use an existing outgoing TCP connection for multiple concurrent SSH sessions to a remote SSH server, avoiding the overhead of creating a new TCP connection and reauthenticating each time.

Advantages of Multiplexing

An advantage of SSH multiplexing is that the overhead of creating new TCP connections and negotiating the secure connection is eliminated. The overall number of connections that a machine may accept is a finite resource and the limit is more noticeable on some machines than on others and varies greatly depending on both load and usage. There is also a significant delay when opening a new connection. Activities that repeatedly open new connections can be significantly sped up using multiplexing.

The difference between multiplexing and standalone sessions can be seen by comparing the tables below. Both are selected output from netstat -nt slightly edited for clarity. We see in Table 1, "SSH Connections, Separate", that without multiplexing each new login creates a new TCP connection, one per login session. Following that we see in the other table, Table 2, "SSH Connections, Multiplexed", that when multiplexing is used, new logins are channelled over the already established TCP connection.

#              Local Address       Foreign Address         State

# one connection
tcp    0   0   192.168.x.y:45050   192.168.x.z:22       ESTABLISHED

# two separate connections
tcp    0   0   192.168.x.y:45050   192.168.x.z:22       ESTABLISHED
tcp    0   0   192.168.x.y:45051   192.168.x.z:22       ESTABLISHED

# three separate connections
tcp    0   0   192.168.x.y:45050   192.168.x.z:22       ESTABLISHED
tcp    0   0   192.168.x.y:45051   192.168.x.z:22       ESTABLISHED
tcp    0   0   192.168.x.y:45052   192.168.x.z:22       ESTABLISHED

Table 1: SSH Connections, Separate

Both tables show TCP/IP connections associated with SSH sessions. The table above shows a new TCP/IP connection for each new SSH connection. The table below shows a single TCP/IP connection despite multiple active SSH sessions.

#              Local Address       Foreign Address         State

# one connection
tcp    0   0   192.168.x.y:58913   192.168.x.z:22       ESTABLISHED

# two multiplexed connections
tcp    0   0   192.168.x.y:58913   192.168.x.z:22       ESTABLISHED

# three multiplexed connections
tcp    0   0   192.168.x.y:58913   192.168.x.z:22       ESTABLISHED

Table 2: SSH Connections, Multiplexed

As we can see with multiplexing, only a single TCP connection is set up and used regardless of whether or not there are multiple SSH sessions carried over it.

Or we can compare the time it takes to run true(1) on a slow remote server, using time(1). The two commands would be something like time ssh true versus time ssh -S ./path/to/somesocket true, using keys with an agent for both. First, without multiplexing, we see the normal connection time:

real    0m0.658s
user    0m0.016s
sys     0m0.008s

Then we do the same thing again, but with a multiplexed connection to see a faster result:

real    0m0.029s
user    0m0.004s
sys     0m0.004s

The difference is quite large and will definitely add up for any activity where connections are made repeatedly in rapid succession. The speed gain for multiplexed connections doesn't come with the master connection, which proceeds at normal speed, but with the second and subsequent connections. Those reuse the established TCP connection over and over, so the overhead of creating a new TCP connection is avoided for each new SSH session.

Setting Up Multiplexing

The OpenSSH client supports multiplexing its outgoing connections, since version 3.9 (August 18, 2004)[60], using the ControlMaster, ControlPath and ControlPersist configuration directives which get defined in ssh_config(5). The client configuration file usually defaults to the location ~/.ssh/config. All three directives are described in the manual page for ssh_config(5). See also the "TOKENS" section there for the list of tokens available for use in the ControlPath. Any valid tokens used are expanded at run time.

ControlMaster determines whether ssh(1) will listen for control connections and what to do about them. ControlPath sets the location for the control socket used by the multiplexed sessions. These can be set either globally or locally in ssh_config(5), or else specified at run time. Control sockets are removed automatically when the master connection ends. ControlPersist can be used in conjunction with ControlMaster. If ControlPersist is set to 'yes', it will leave the master connection open in the background to accept new connections until it is either killed explicitly, closed with -O, or reaches a pre-defined timeout. If ControlPersist is set to a time, then it will leave the master connection open for the designated time or until the last multiplexed session is closed, whichever is longer.

Here is a sample excerpt from ssh_config(5) applicable for starting a multiplexed session to a remote host via the shortcut machine1.

Host machine1
        ControlPath ~/.ssh/controlmasters/%r@%h:%p
        ControlMaster auto
        ControlPersist 10m

With that configuration, the first connection to machine1 will create a control socket in the directory ~/.ssh/controlmasters/. Then any subsequent connections, up to 10 by default as set by MaxSessions on the SSH server, will re-use that control path automatically as multiplexed sessions. Confirmation of each new connection can be required if ControlMaster is set to 'autoask' instead of 'auto'.

Please note that with the settings above the control socket will be placed into the directory ~/.ssh/controlmasters/. If that directory does not already exist, the SSH client will exit with an error from unix_listener complaining that the file or path does not exist:

unix_listener: cannot bind to path /home/fred/.ssh/controlmasters/ No such file or directory
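The socket directory can simply be created ahead of time, with permissions that keep it private to the account:

```shell
# Create the socket directory from the example above before first use.
# Mode 700 keeps other local users away from the control sockets.
mkdir -p ~/.ssh/controlmasters
chmod 700 ~/.ssh/controlmasters
```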

The -O option of ssh(1) can be used to manage the connection using the same shortcut configuration. To cancel all existing connections, including the master connection, use 'exit' instead of 'stop'.

$ ssh -O check machine1
Master running (pid=14379)
$ ssh -O stop machine1
Stop listening request sent
$ ssh -O check machine1
Control socket connect(/Users/Username/.ssh/sockets/machine1): No such file or directory

In that example, the status of the connection is checked first. Then the master connection is told not to accept further multiplexing requests and finally we check again that no control socket is available.

Manually Establishing Multiplexed Connections

Multiplexed sessions need a control master to connect to. The run-time parameters -M and -S correspond to ControlMaster and ControlPath, respectively. So first an initial master connection is established using -M, accompanied by the path to the control socket using -S.

$ ssh -M -S /home/fred/.ssh/controlmasters/fred@server.example.org:22 fred@server.example.org

Then subsequent multiplexed connections are made in other terminals. They use ControlPath or -S to point to the control socket.

$ ssh -S /home/fred/.ssh/controlmasters/fred@server.example.org:22 fred@server.example.org

Note that the control socket is named "fred@server.example.org:22", following the pattern %r@%h:%p, to try to make the name unique. The tokens %r, %h and %p stand for the remote user name, the remote host, and the remote host's port. Control sockets should be given unique names.

Multiple -M options place ssh(1) into master mode with confirmation required before slave connections are accepted. This is the same as ControlMaster=ask. Both require X11 with ssh-askpass(1) in order to ask for the confirmation.

Here is the master connection for the host as set to ask for confirmation of new multiplexed sessions:

$ ssh -MM -S ~/.ssh/controlmasters/%r@%h:%p server.example.org

And here is a subsequent multiplexed connection:

$ ssh -S ~/.ssh/controlmasters/%r@%h:%p server.example.org

The status of the control master connection can be queried using -O check, which will tell if it is running or not.

$ ssh -O check -S ~/.ssh/controlmasters/%r@%h:%p server.example.org

If the control session has been stopped, the query will return an error about "No such file or directory" even if there are still multiplexed sessions running because the socket is gone.

Alternately, instead of specifying -M and -S as run-time options, the configuration options can be spelled out in full using -o, for easier transfer to the client configuration file once worked out.

The example below sets the configuration options as run-time parameters, first starting a control master.

$ ssh -o "ControlMaster=yes" -o "ControlPath=/home/fred/.ssh/controlmasters/%r@%h:%p" server.example.org

Then subsequent sessions are connected to the control master via socket at the end of the control path.

$ ssh -o "ControlPath=/home/fred/.ssh/controlmasters/%r@%h:%p" server.example.org

And of course all that can be put into ssh_config(5) as shown in the previous section. Starting with 6.7, the combination of %r@%h:%p, or variations on it, can be replaced with %C, which by itself generates a SHA1 hash from the concatenation of %l%h%p%r.

$ ssh -S ~/.ssh/controlmasters/%C server.example.org

There are two advantages with that. One is that the hash can be shorter than its combined elements while still uniquely identifying the connection. The other is that it obfuscates the connection information which would have been otherwise displayed in the socket's name.
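Socket names produced by %C can be reproduced by hand. This sketch assumes the documented behaviour of %C, a SHA1 hash of the concatenation %l%h%p%r; the connection values below are made-up placeholders:

```shell
# Compute what %C would expand to for an example connection.
# All four values are placeholders, not taken from a real session.
l=localhost            # %l: local host name
h=server.example.org   # %h: remote host name
p=22                   # %p: remote port
r=fred                 # %r: remote user name
printf '%s%s%s%s' "$l" "$h" "$p" "$r" | sha1sum | awk '{print $1}'
```

The output is a 40-character hexadecimal string, so the socket name by itself reveals nothing about the connection.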

Ending Multiplexed Connections

One way to end multiplexed sessions is to exit all related SSH sessions, including the control master. If the control master has been placed in the background using ControlPersist, then it will be necessary to stop it with -O and either 'stop' or 'exit'. That also requires knowing the full path to and filename of the control socket as used in the creation of the master session, if it has not been defined in a shortcut in ssh_config(5).

$ ssh -O stop server1
$ ssh -O stop -S ~/.ssh/controlmasters/%C

The multiplex command -O stop will gracefully shut down multiplexing. After the command is issued, the control socket is removed and no new multiplexed sessions are accepted for that master. Existing connections are allowed to continue and the master connection will persist until the last multiplexed connection closes.

In contrast, the multiplex command -O exit removes the control socket and immediately terminates all existing connections.

Again, the directive ControlPersist can also be set to time out after a set period of disuse. The interval is written using the time format described in sshd_config(5), defaulting to seconds if no units are given. It causes the master connection to close automatically if it has had no client connections for the specified time.

Host server1
        ControlPath ~/.ssh/controlmasters/%C
        ControlMaster yes
        ControlPersist 2h

The above example lets the control master timeout after 2 hours of inactivity. Care should be used with persistent control sockets. A user that can read and write to a control socket can establish new connections without further authentication.
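The interval arithmetic can be sketched in shell. Here to_seconds is a hypothetical helper, not part of OpenSSH, and it only handles single-unit intervals, whereas the real time format also allows combinations such as 1h30m:

```shell
# Hypothetical helper: convert a single-unit time interval, as accepted
# by ControlPersist, into seconds. Bare numbers are already seconds.
to_seconds() {
    case "$1" in
        *s) echo $(( ${1%s} )) ;;
        *m) echo $(( ${1%m} * 60 )) ;;
        *h) echo $(( ${1%h} * 3600 )) ;;
        *d) echo $(( ${1%d} * 86400 )) ;;
        *w) echo $(( ${1%w} * 604800 )) ;;
        *)  echo "$1" ;;
    esac
}
to_seconds 10m   # the value from the earlier example: 600 seconds
to_seconds 2h    # the value from this example: 7200 seconds
```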

Multiplexing Options

The values for configuring session multiplexing can be set in the user-specific ssh_config(5), in the global /etc/ssh/ssh_config, or as parameters when running from the shell or a script. When ControlMaster is set to 'yes' in the configuration, it can be overridden at run time to reuse an existing master by setting it explicitly to 'no':

$ ssh -o "ControlMaster=no" server.example.org

ControlMaster accepts five different values: 'no', 'yes', 'ask', 'auto', and 'autoask'.

  • 'no' is the default. New sessions will not try to connect to an established master session, but additional sessions can still multiplex by connecting explicitly to an existing socket.
  • 'yes' creates a new master session each time, unless explicitly overridden. The new master session will listen for connections.
  • 'ask' creates a new master each time, unless overridden, which listens for connections. If overridden, ssh-askpass(1) will pop up a message in X to ask the master session owner to approve or deny the request. If the request is denied, then the session being created falls back to being a regular, standalone session.
  • 'auto' creates a master session automatically if none exists, but if a master session is already available, subsequent sessions are automatically multiplexed.
  • 'autoask' automatically assumes that if a master session exists, that subsequent sessions should be multiplexed, but ask first before adding a session.

Refused connections are logged to the master session.

Host * 
        ControlMaster ask

ControlPath can be a fixed string or include any of several pre-defined variables described in the TOKENS section of ssh_config(5). %L is the first component of the local host name and %l the full local host name. %h is the target host name, %n the original target host name, and %p the destination port on the remote server. %r is the remote user name and %u the user running ssh(1). They can also be combined as %C, a SHA1 hash produced from %l%h%p%r.

Host * 
        ControlMaster ask 
        ControlPath ~/.ssh/controlmasters/%C

ControlPersist accepts 'yes', 'no' or a time interval. If a time interval is given, the default is in seconds. Units can extend the time to minutes, hours, days, weeks or a combination. If 'yes' the master connection stays in the background indefinitely.

Host * 
        ControlMaster ask 
        ControlPath ~/.ssh/controlmasters/%C
        ControlPersist 10m

Port Forwarding After the Fact

It is possible to request port forwarding without having to establish new connections. Here we forward port 8080 on the local host to port 80 on the remote host using -L:

$ ssh -O forward -L 8080:localhost:80 -S ~/.ssh/controlmasters/%C server.example.org

The same can be done for remote forwarding, using -R. The escape sequence ~C is not available for multiplexed sessions, so -O forward is the only way to add port forwarding on the fly.

Port forwarding can be canceled without having to close any sessions using -O cancel.

$ ssh -O cancel -L 8080:localhost:80 -S ~/.ssh/controlmasters/%C server.example.org

The exact same syntax used for forwarding is used for cancelling. However, there is no way currently to look up which ports are being forwarded and how they are forwarded.

Additional Notes About Multiplexing

Never use any publicly accessible directory for the control path sockets. Place those sockets in a directory somewhere else, one to which only your account has access. For example ~/.ssh/socket/ would be a much safer choice and /tmp/ would be a bad choice.

In cases where it is useful to have a standing connection, it is always possible to combine multiplexing with other options, such as -f or -N to have the control master drop to the background upon connection and not load a shell.

In sshd_config(5), the directive MaxSessions specifies the maximum number of open sessions permitted per network connection. This is used when multiplexing ssh sessions over a single TCP connection. Setting MaxSessions to 1 disables multiplexing and setting it to 0 disables login/shell/subsystem sessions completely. The default is 10. The MaxSessions directive can also be set under a Match conditional block so that different settings are available for different conditions.
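Since MaxSessions can be set under a Match conditional, multiplexing can be restricted selectively. A minimal sshd_config(5) sketch, with a made-up example subnet:

```text
# Example only: 192.0.2.0/24 is a placeholder subnet.
# Clients from it may log in, but may not multiplex sessions.
Match Address 192.0.2.0/24
        MaxSessions 1
```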

Errors Preventing Multiplexing

The directory used for holding the control sockets must be on a file system that actually allows the creation of sockets. AFS is one example that doesn't, as are some implementations of HFS+. If an attempt is made at creating a socket on a file system that does not allow it, an error something like the following will occur:

$ ssh -M -S /home/fred/.ssh/mux server.example.org
muxserver_listen: link mux listener /home/fred/.ssh/mux.vjfeIFFzHnhgHoOV => /home/fred/.ssh/mux: Operation not permitted

A similar issue was seen when trying to create Unix domain sockets on OverlayFS file systems, which are often used with Docker, prior to the Linux 4.7 kernel[61].

If the file system cannot be reconfigured to allow sockets, then the only other option is to place the control path socket somewhere else on a file system that does support creation of sockets.

Observing Multiplexing

It is possible to make some rough measurements to show the differences between multiplexed connections and one-off connections as shown in the tables and figures above.

Measuring the Number of TCP Connections

Tables 1 and 2 above use output from netstat(8) and awk(1) to show the number of TCP connections corresponding to the number of SSH connections.

netstat -nt | awk 'NR == 2'

ssh -f server.example.org sleep 60
echo # one connection
netstat -nt | awk '$5 ~ /:22$/'

ssh -f server.example.org sleep 60
echo # two connections
netstat -nt | awk '$5 ~ /:22$/'

ssh -f server.example.org sleep 60
echo # three connections
netstat -nt | awk '$5 ~ /:22$/'

echo Table 1


netstat -nt | awk 'NR == 2'

ssh -f -M -S ~/.ssh/demo server.example.org sleep 60
echo # one connection
netstat -nt | awk '$5 ~ /:22$/'

ssh -f -S ~/.ssh/demo server.example.org sleep 60 &
echo # two connections
netstat -nt | awk '$5 ~ /:22$/'

ssh -f -S ~/.ssh/demo server.example.org sleep 60 &
echo # three connections
netstat -nt | awk '$5 ~ /:22$/'

echo Table 2

The sleep(1) period can be increased if more delay is needed. While the connections are active, it is also possible to look on the server using ps(1), such as with ps uwx, to see the processes as a means of verifying that multiple connections are indeed being made.
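The same awk(1) filter can also be reduced to a single count. The netstat(8) lines below are canned samples using documentation addresses, so the filter can be demonstrated without live connections; on a real system, pipe netstat -nt into count_ssh instead:

```shell
# count_ssh counts lines whose foreign address (column 5) ends in :22.
count_ssh() { awk '$5 ~ /:22$/ { n++ } END { print n+0 }'; }

# Canned sample of netstat -nt output: two SSH connections, one HTTPS.
printf '%s\n' \
  'tcp        0      0 192.0.2.10:51234    198.51.100.5:22     ESTABLISHED' \
  'tcp        0      0 192.0.2.10:51236    198.51.100.5:22     ESTABLISHED' \
  'tcp        0      0 192.0.2.10:44321    198.51.100.7:443    ESTABLISHED' |
count_ssh    # prints 2
```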

Measuring Response Time

Measuring the response time requires setting up keys with an agent first so that the agent handles the authentication and eliminates authentication as a source of delay. See the section on keys, if needed. All of the response time tests below will depend on using keys for authentication.

For a one-off connection, just add time(1) to check how long access takes.

$ time ssh -i ~/.ssh/rsakey server.example.org true

For a multiplexed connection, a master control session must be established. Then subsequent connections will show an increase in speed.

$ ssh -f -M -S ~/.ssh/demo -i ~/.ssh/rsakey server.example.org sleep 60
$ time ssh -S ~/.ssh/demo -i ~/.ssh/rsakey server.example.org true
$ time ssh -S ~/.ssh/demo -i ~/.ssh/rsakey server.example.org true

These response times are approximate but the difference is nonetheless large enough to be seen between one-off connections and multiplexed connections.

Keeping sessions open

You may also want to keep your sessions open when they are inactive. This can be configured with the ServerAliveInterval and ServerAliveCountMax options; see ssh_config(5) for the details.

Multiplexing HTTPS and SSH Using sslh

A different kind of multiplexing is when more than one protocol is carried over the same port. sslh does just that with SSL and SSH. It figures out what kind of connection is coming in and forwards it to the appropriate service, thus allowing a server to receive both HTTPS and SSH over the same port and making it possible to connect to the server from behind restrictive firewalls in some cases. It does not hide SSH: even a quick scan for SSH on the listening port, say with scanssh(1), will show that it is there. Please note that this method only works against simpler packet filters, such as PF or nftables(8) and its front ends firewalld and UFW, which filter based on the destination port. It will not fool protocol analysis or application layer filters, such as Zorp, which filter based on the actual protocol used. Here's how sslh(8) can be installed in four steps:

  • First install your web server and configure it to accept HTTPS requests. Be sure that it listens for HTTPS on the localhost only. It can help in that regard if it is on a non-standard port, say 2443, instead of 443.
  • Next, set your SSH server to accept connections on the localhost for port 22. It really could be any port, but port 22 is the standard for SSH.
  • Next, create an unprivileged user to run sslh(8). The example below has the user 'sslh' for the unprivileged account.
  • Lastly, install and launch sslh(8) so that it listens on port 443 and forwards HTTPS and SSH to the appropriate ports on the local host. Substitute the external IP number for your machine. The actual name of the executable and path may vary from system to system:
$ /usr/local/sbin/sslh-fork -u sslh -p xx.yy.zz.aa:443 --tls --ssh

Another option is to use a configuration file with sslh(8) and not pass any parameters at runtime. There should be at least one sample configuration file, basic.cfg or example.cfg, included in the package when it is installed. The finished configuration file should look something like this:

user: "sslh";
listen: ( { host: "xx.yy.zz.aa"; port: "443" } );
on-timeout: "ssl";
protocols: (
   { name: "ssh"; host: "localhost"; port: "22"; probe: "builtin"; },
   { name: "ssl"; host: "localhost"; port: "2443"; probe: "builtin"; }
);

Mind the quotes, commas, and semicolons.

If an old version of SSH is used in conjunction with the now-deprecated TCP Wrappers, as described in hosts_access(5), then the service: option provides the name of the service that they need.

    { name: "ssh"; service: "ssh"; host: "localhost"; port: "22"; probe: "builtin"; },

If TCP Wrappers are not used, which is most likely the case, then service: is not needed.

Runtime parameters override any configuration file settings that may be in place. sslh(8) supports the protocols HTTP, SSL, SSH, OpenVPN, tinc, and XMPP out of the box, but actually any protocol that can be identified by regular expression pattern matching can be used. There are two variants of sslh: a forked version (sslh or sslh-fork) and a single-threaded version (sslh-select). See the project web site for more details.

Proxies and Jump Hosts

A proxy is an intermediary that forwards requests from clients to other servers. Performance improvement, load balancing, security or access control are some reasons proxies are used.

Jump Hosts -- Passing Through a Gateway or Two

It is possible to connect to another host via one or more intermediaries so that the client can act as if the connection were direct.

The main method is to use an SSH connection to forward the SSH protocol through one or more jump hosts, using the ProxyJump directive, to an SSH server running on the target destination host. This is the most secure method because encryption is end-to-end. In addition to whatever other encryption goes on, the end points of the chain encrypt and decrypt each other's traffic. So the traffic passing through the intermediate hosts is always encrypted. But this method cannot be used if the intermediate hosts deny port forwarding.

Using the ProxyCommand option to invoke netcat as the last hop in the chain is a variation of this for very old clients. The SSH protocol is forwarded by nc(1) instead of ssh(1). Attention must also be paid to whether or not the user name changes from host to host in the chain of SSH connections. The outdated netcat method does not allow a change of user name; other methods do.

When port forwarding is available the easiest way is to use ProxyJump in the configuration file or -J as a run-time parameter. An example of -J usage is:

$ ssh -J jumphost.example.org server.example.org

The ProxyJump directive (-J) is so useful it has an entire sub-section below.

In older versions -J is not available. In this case the safest and most straightforward way is to use ssh(1)'s stdio forwarding (-W) mode to "bounce" the connection through an intermediate host.

$ ssh -o ProxyCommand="ssh -W %h:%p jumphost.example.org" server.example.org

This approach supports port-forwarding without further tricks.

Even older clients don't support the -W option. In this case ssh -tt may be used. This forces TTY allocation and passes the SSH traffic as though typed, though this is less secure. To connect to server2 via firewall as the jump host:

$ ssh -tt firewall.example.org ssh -tt server2.example.org

That example opens an SSH session to the remote machine. You can also pass commands. For example, to reattach to a remote screen session using screen you can do the following:

$ ssh -tt firewall.example.org ssh -tt server2.example.org screen -dR

The chain can be arbitrarily long and is not limited to just two hosts. The disadvantage of this approach over stdio-forwarding at the network layer with -W or -J is that your session, any forwarded agent, X11 server, and sockets are exposed to the intermediate hosts.

Passing Through One or More Gateways Using ProxyJump

Starting from OpenSSH 7.3, released August 2016[62], the easiest way to pass through one or more jump hosts is with the ProxyJump directive in ssh_config(5).

Host server2
        ProxyJump jumphost.example.org
        User fred

Multiple jump hosts can be specified as a comma-separated list. The hosts will be visited in the order listed.

Host server3
        ProxyJump jumphost1.example.org,jumphost2.example.org
        User fred

It also has the shortcut of -J when using it as a run-time parameter.

$ ssh -J jumphost.example.org fred@server2.example.org

Multiple jump hosts can be chained in the same way.

$ ssh -J jumphost1.example.org,jumphost2.example.org fred@server3.example.org

It is not possible to use both the ProxyJump and ProxyCommand directives in the same host configuration. The first one encountered is used and the other is ignored.
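Each hop given to ProxyJump may also carry its own user name and port in [user@]host[:port] form. A minimal sketch for the configuration file, with made-up host names:

```text
Host server4
        User fred
        ProxyJump user1@jumphost.example.org:2222
```

With that in place, ssh server4 logs into the jump host as user1 on port 2222 before continuing on to the destination as fred.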

Transiting a Jump Host Which Has Multiple RDomains / Routing Tables

When passing through a jump host which has its relevant interfaces each on a different rdomain(4), it will be necessary to manipulate the routing tables manually. Specifically that means going back to an older method of transit which relies on netcat and uses ProxyCommand instead of ProxyJump so that route(8) can be added in the middle. Here is an example doing so using only run time parameters to pass through to rdomain 1:

$ ssh -o ProxyCommand="ssh user1@jump.example.org route -T 1 exec nc %h %p" fred@server.example.org

That configuration can be made persistent by adding it to the client configuration file, ssh_config(5):

Host jump
        HostName jump.example.org
        User user1
        IdentitiesOnly yes
        IdentityFile ~/.ssh/jump_key

Host server
        HostName server.example.org
        User fred
        IdentitiesOnly yes
        IdentityFile ~/.ssh/inside_key
        ProxyCommand ssh jump route -T 1 exec nc %h %p

With those settings, it is then possible to connect to the host on the LAN via the jump host simply with the line ssh server and all the settings will be taken into use.

Otherwise, the recommended way would have been to use ProxyJump. For more on using the older ProxyCommand directive see the section below, ProxyCommand with Netcat.

Conditional Use of Jump Hosts

It is possible to use Match exec to select for difficult patterns or otherwise make complex decisions, such as which network the client is connecting from or the network connectivity of the available network(s). Below are two cases for how to automatically choose when to use ProxyJump to connect via an intermediate host. The first example is for occasions when there is no local IPv6 connectivity when connecting to a remote machine which has only IPv6[63] [64]. The second case[65] is for occasions when the target machines are on another network.

Match host ipv6only.example.org
        User fred

Match host ipv6only.example.org !exec "route -n get -inet6 %h"
        ProxyJump jumphost.example.org

With those settings, connections to the machine will go via a jump host only if it is not directly accessible via IPv6.

On GNU/Linux systems that capability would be more complicated.

Match host ipv6only.example.org
        User fred

Match host ipv6only.example.org !exec "/sbin/ip route get $(host -t AAAA %h | sed 's/^.* //')"
        ProxyJump jumphost.example.org

The scripts are invoked using the shell named in the $SHELL environment variable. Remember that configuration directives are applied according to the first one to match. So specific rules go near the top and more general ones go at the end.

Another way is to use the nc(1) utility to check for connectivity to the target host directly at the port in question.

Match !host jumphost.example.org !exec "nc -z -w 1 %h %p"
        ProxyJump jumphost.example.org

Since the above assumes that only one jump host is ever used, it might be combined with other Match criteria for more precision.

Match host 192.168.1.* !host jumphost1.example.org !exec "nc -z -w 1 %h %p"
        ProxyJump jumphost1.example.org

Match host 192.168.2.* !host jumphost2.example.org !exec "nc -z -w 1 %h %p"
        ProxyJump jumphost2.example.org

That would catch all connections going to the network 192.168.1.*, if there is no direct connection, and send them through the first jump host. Likewise it would also send all connections to the network 192.168.2.* through the second jump host if there is no direct connection. But if there is a direct connection the client will proceed without the jump host.

Of course the tests will have to be more complicated if the client machine moves between several different networks with the same numbering.

Using Canonical Host Names Which Are Behind Jump Hosts

Some local area networks (LANs) have their own internal domain name service to assign their own host names. These names are not accessible to systems outside the LAN. Therefore it is not possible to use the -J or ProxyJump option from the outside, because the client would not be able to look up the LAN names on the other side of the jump host. However, the jump host itself can look these names up, so the ProxyCommand option can be used instead to call an SSH client on the jump host and use its ability to resolve names on the LAN.

In the following sequence, the absence of outside DNS records for the inner host is shown. Then a connection is made to the inner host via the jump host using the jump host's SSH client and the -W option. Keys or certificates are the recommended way of connecting, but passwords are used in this example so that the stages are more visible:

$ host fuloong04.localnet.lan
Host fuloong04.localnet.lan not found: 3(NXDOMAIN)

$ host fuloong04
Host fuloong04 not found: 2(SERVFAIL)

$ ssh -o ProxyCommand="ssh -W fuloong04.localnet.lan:22 jumphost.example.org" fred@fuloong04
fred@jumphost.example.org's password: 
fred@fuloong04's password: 
Last login: Sun May 24 04:06:21 2020 from
OpenBSD 6.7-current (GENERIC) #44: Sun May 24 04:06:31 MDT 2020

$ host fuloong04.localnet.lan
fuloong04.localnet.lan has address

The -W option ensures that the connection is forwarded over the secure channel and just passes through the jump host without being decrypted. The jump host must both be able to do the DNS look up for LAN names as well as have an SSH client available.

Here is what the initial connection looks like as the client collects the host key and asks about it:

$ ssh -o ProxyCommand="ssh -W fuloong04.localnet.lan:22 jumphost.example.org" fred@fuloong04
fred@jumphost.example.org's password: 
The authenticity of host 'fuloong04 (<no hostip for proxy command>)' can't be established.
ECDSA key fingerprint is SHA256:TGH6fbXRswaP7iR1OBrJuhJ+c1JiEvvc2GJ2krqaJy4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'fuloong04' (ECDSA) to the list of known hosts.
fred@fuloong04's password: 
Last login: Thu May  7 21:06:18 2020 from
OpenBSD 6.7-current (GENERIC) #44: Sun May 24 04:06:21 MDT 2020

Notice that while a name must be given in the usual position, it serves only to identify which host keys are to be associated with the connection. After that initial connection, a host key is saved in known_hosts for the name given as the destination, fuloong04, not the name given in the ProxyCommand option:

$ ssh-keygen -F fuloong04
# Host fuloong04 found: line 55 
fuloong04 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwZOXY7wP1lCvlv5YC64QvWPxSrhCa0Dn+pK4b97jjeUqGy/wII5zCnuv6WZKCEjSQnKFP+8K9cmSGnIvUisPg= 

$ ssh-keygen -F fuloong04.localnet.lan

The target host name just serves to identify which host keys to expect. It has no association with the actual host name or address passed in the ProxyCommand option and could even be a random string. Above fuloong04 was used by itself but, again, it could be any random string or even overridden by using the HostKeyAlias option.
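The lookup behaviour can be demonstrated offline. This sketch builds a throwaway known_hosts file in a temporary directory with a freshly generated key, so no real files are touched; the host names are the ones from the session above:

```shell
# Build a scratch known_hosts entry stored under the short name only.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id"
printf 'fuloong04 %s\n' "$(cut -d' ' -f1-2 "$tmp/id.pub")" > "$tmp/known_hosts"

ssh-keygen -F fuloong04 -f "$tmp/known_hosts"                 # found
ssh-keygen -F fuloong04.localnet.lan -f "$tmp/known_hosts" \
    || echo 'not found'       # only the short destination name is recorded
```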

As usual, these SSH options can be recorded in a permanent shortcut within ssh_config(5) to reduce effort in typing and to avoid errors.

Old Methods of Passing Through Jump Hosts

A note on old methods: These old methods are deprecated because newer versions of OpenSSH provide easier ways to pass through a jump host. See instead the section Passing Through One or More Gateways Using ProxyJump above. However, some long term support versions of various GNU/Linux distributions may keep very old versions of OpenSSH around on purpose. So while these are unlikely to be encountered, there is still a possibility to need these methods for some years to come.

In old versions of OpenSSH, specifically version 7.2 and earlier, passing through one or more gateways is more complex and requires use of stdio forwarding or, prior to 5.4, use of the netcat(1) utility.

Old: Passing Through a Gateway Using stdio Forwarding (Netcat Mode)

Between OpenSSH 5.4[66] and 7.2 inclusive, a 'netcat mode' can connect stdio on the client to a single port forwarded on the server. This can also be used to connect using ssh(1), but it needs the ProxyCommand option either as a run time parameter or as part of ~/.ssh/config. However, it no longer needs netcat(1) to be installed on the intermediary machine(s). Here is an example of using it in a run time parameter.

$ ssh -o ProxyCommand="ssh -W %h:%p jumphost.example.org" server.example.org

In that example, authentication will happen twice, first on the jump host and then on the final host where it will bring up a shell.

The syntax is the same if the gateway is identified in the configuration file. ssh(1) expands the full name of the gateway and the destination from the configuration file. The following allows the destination host to be reached by entering ssh server in the terminal.

Host server
        ProxyCommand ssh -W %h:%p jumphost.example.org

The same can be done for SFTP. Here the destination SFTP server can be reached by entering sftp sftpserver and the configuration file takes care of the rest. If there is a mix up with the final host key, then it is necessary to add in HostKeyAlias to explicitly name which key will be used to identify the destination system.

Host sftpserver
        ProxyCommand ssh -W %h:%p jumphost.example.org

It is possible to add the key for the gateway to the ssh-agent which you have running or else specify it in the configuration file. The option User refers to the user name on the destination. If the user is the same on both the destination and the originating machine, then it does not need to be used. If the user name is different on the gateway, then the -l option can be used in the ProxyCommand option. Here, the user 'fred' on the local machine, logs into the gateway as 'fred2' and into the destination server as 'fred3'.

Host server
        User fred3
        ProxyCommand ssh -l fred2 -i /home/fred/.ssh/rsa_key -W %h:%p jumphost.example.org

If both the gateway and destination are using keys, then the -i option inside the ProxyCommand points at the gateway's private key, while the IdentityFile directive for the host entry points at the destination's private key.

Host jump
        IdentityFile /home/fred/.ssh/rsa_key_2
        ProxyCommand ssh -i /home/fred/.ssh/rsa_key -W %h:%p jumphost.example.org

The old way prior to OpenSSH 5.4 used netcat, nc(1).

Host server
        ProxyCommand ssh jumphost.example.org nc %h %p

But that should not be used anymore and the netcat mode, provided by -W, should be used instead. The new way does not require netcat at all on any of the machines.

Old: Recursively Chaining Gateways Using stdio Forwarding

The easy way to do this is with ProxyJump, which is available starting with OpenSSH 7.3 as mentioned above. For older versions, if the route always has the same hosts in the same order, then a straightforward chain can be put in the configuration file. Here three hosts are chained with the destination being given the shortcut machine3.

Host machine1
        User fred
        IdentityFile /home/fred/.ssh/machine1_ed25519
        Port 2222

Host machine2
        User fred
        IdentityFile /home/fred/.ssh/machine2_ed25519
        Port 2222
        ProxyCommand ssh -W %h:%p machine1

Host machine3
        User fred
        IdentityFile /home/fred/.ssh/machine3_ed25519
        Port 2222
        ProxyCommand ssh -W %h:%p machine2

Thus any machine in the chain can be reached with a single line. For example, the final machine can be reached with ssh machine3 and worked with as normal. This includes port forwarding and any other capabilities.

Only the host name and, for the second and subsequent hosts, ProxyCommand are needed for each Host directive. If keys are not used, then IdentityFile is not needed. If the user is the same for all hosts, then that can be skipped. And if the port is the default, then Port can be skipped. If many keys are held in the agent at the same time and the error "Too many authentication failures" pops up on the client end, it might be necessary to add IdentitiesOnly yes to each host's configuration.

Old: Recursively Chaining an Arbitrary Number of Hosts

Again, the easy way to do this is with ProxyJump, which should be used when available, that is, starting with OpenSSH 7.3. For older versions, it is possible to make the configuration more abstract and allow passing through an arbitrary number of gateways. The user name can be set with -l thanks to the %r token, but that user name will be used for all hosts that you connect to or through.

Host */* 
        ProxyCommand ssh %r@$(dirname %h) -W $(basename %h):%p

In this way hosts are separated with a slash (/) and can be arbitrary in number[67].

$ ssh host1/host2/host3/host4

There are limitations resulting from using the slash as a separator, as there would be with other symbols. However, it allows use of dirname(1) and basename(1) to process the host names.
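What the two command substitutions in the ProxyCommand produce can be checked directly, since dirname(1) and basename(1) treat the slash-separated pseudo host name like a path:

```shell
# Split the pseudo host name the same way the ProxyCommand above does.
h=host1/host2/host3/host4
dirname "$h"    # prints host1/host2/host3 : the chain ssh recurses through
basename "$h"   # prints host4 : the final -W target
```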

The following configuration uses sed(1) to allow different port numbers and user names, using the plus sign (+) as the delimiter for hosts, a colon (:) for ports, and a percentage sign (%) for user names. The basic structure is ssh -W $(...) $(...), where %h is substituted for the target host name.

Host *+*
        ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The port can be left off for the default of 22 or delimited with a colon (:) for non-standard values[68].

$ ssh host1+host2:2022+host3:2224

As-is, the colons confound sftp(1), so the above configuration will only work with it using standard ports. If sftp(1) is needed on non-standard ports then another delimiter, such as an underscore (_), can be configured.

Any user name except the final one can be specified for a given host using the designated delimiter, in the above it is a percentage sign (%). The destination host's user name is specified with -l and all others can be joined to their corresponding host name with the delimiter.

$ ssh -l user3 user1%host1+user2%host2+host3

If user names are specified then, depending on the delimiter, ssh(1) may be unable to match the final host to an IP address and the key fingerprint in known_hosts. In such cases, it will ask for verification each time the connection is established. This should not be a problem if either the equal sign (=) or percentage sign (%) is used.
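What the two sed(1) expressions produce can be previewed directly in the shell. Note that ssh_config(5) doubles the percent signs as %%; the expressions below use the single % that sed(1) actually receives. The chained host string is hypothetical:

```shell
h='user1%host1+host2:2022+host3'

# First expression: extract the final hop and default its port to 22.
echo "$h" | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/'
# prints: host3:22

# Second expression: strip the final hop, then turn any trailing user name
# into -l and any trailing port into -p for the next recursion.
echo "$h" | sed 's/+[^+]*$//;s/\([^+%]*\)%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/'
# prints: user1%host1+host2 -p 2022
```

The first result becomes the -W target and the second becomes the next hop's destination, which again matches Host *+* and recurses.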

If keys are to be used, load them into an agent; the client will then figure them out automatically if agent forwarding with the -A option is used. However, agent forwarding is not needed if the ProxyJump option (-J) is available, and it is considered by many to actually be a security flaw and a general misfeature, so use the ProxyJump option instead if it is available. Also, because the default for MaxAuthTries on the server is 6, using keys normally in an agent will limit the number of keys or hops to 6, with server-side logging getting triggered after half that.

Old: ProxyCommand with Netcat edit

Another way, which was needed for OpenSSH 5.3 and older, is to use the ProxyCommand configuration directive and netcat. The utility nc(1) is for reading and writing network connections directly. It can be used to pass connections onward to a second machine. In this case, login is the final destination reached via the intermediary jumphost.

$ ssh -o 'ProxyCommand ssh %h nc login 22' \
      -o 'HostKeyAlias login' \
      jumphost

Keys and different login names can also be used. Using ProxyCommand, ssh(1) will first connect to jumphost and then from there to login. The HostKeyAlias directive is needed to look up the right key for login; without it the key for jumphost will be tried and that will, of course, fail unless both have the same keys. The account user2 exists on jumphost.

$ ssh -o 'ProxyCommand ssh -i key-rsa -l user2 %h nc login 22' \
      -o 'HostKeyAlias login' \
      jumphost

It's also possible to make this arrangement more permanent and reduce typing by editing ssh_config. Here a connection is made to host2 via host1:

Host host2
        ProxyCommand ssh host1 nc %h %p

Here a connection is made to server2 via server1 using the shortcut name 'jump'.

Host jump
        HostName server1
        HostKeyAlias server2
        ProxyCommand ssh %h nc server2 22
        User fred

It can be made more general:

Host server1
        ProxyCommand none

Host * my-private-host
        ProxyCommand ssh server1 nc %h %p

The same can be done with sftp(1) by passing parameters on to ssh(1). Here is a simple example with sftp(1) where machine1 is the jump host to connect to machine2. The user name is the same for both hosts.

$ sftp -o 'ProxyCommand=ssh %h nc machine2 22' \
       -o 'HostKeyAlias machine2' \
       machine1

Here is a more complex example using a key for server1 but regular password-based login for the SFTP server.

$ sftp -o 'ProxyCommand ssh -i /Volumes/Home/fred/.ssh/server1_rsa \
       -l user2 server1 nc sftpserver 22'   \
       -o 'HostKeyAlias sftpserver' \
       sftpserver

If the user account names are different on the two machines, that works, too. Here, 'user2' is the account on the second machine, which is the final target. The user 'fred' is the account on the intermediary or jump host.

$ ssh -l user2 \
      -o 'ProxyCommand ssh -l fred %h nc machine2 22' \
      -o 'HostKeyAlias machine2' \
      machine1

Old: ProxyCommand without Netcat, Using Bash's /dev/tcp/ Pseudo-device edit

On GNU/Linux jump hosts missing even nc(1), but which have the Bash shell interpreter, the pseudo device /dev/tcp[69] can be another option. This makes use of some built-in Bash functionality, along with the cat(1) utility:

$ ssh -o HostKeyAlias=server.example.org \
	-o ProxyCommand="ssh -l fred %h 'exec 3<>/dev/tcp/server.example.org/22; cat <&3 & cat >&3; kill $!'" \
	jumphost.example.org

It is necessary to set the HostKeyAlias to the host name or IP address of the destination because the SSH connection is just passing through the jump host and needs to expect the right key.

The main prerequisite for this method is having a login shell on a GNU/Linux jump host with Bash as the default shell. If another shell is used instead, such as the POSIX shell, then the pseudo device will not exist and an error will occur instead:

sh: 1: cannot create /dev/tcp/server.example.org/22: Directory nonexistent

With ksh(1), a similar error will occur.

ksh: cannot create /dev/tcp/server.example.org/22: No such file or directory

The zsh(1) shell will also be missing the /dev/tcp pseudo device but produce a different error.

zsh:1: no such file or directory: /dev/tcp/server.example.org/22
zsh:1: 3: bad file descriptor
zsh:1: 3: bad file descriptor
zsh:kill:1: not enough arguments

So, again, this is a process specific to the bash(1) shell. Furthermore, this method is specific to Bash on GNU/Linux distros and will not be available with any of the BSDs or Mac OS as a jump host even if bash(1) is present.

The full details of the connection process from the client perspective can be captured to a log file using the -E option while increasing the verbosity of the debugging information.

$ ssh -vvv -E /tmp/ssh.log \
	-o HostKeyAlias=server.example.org \
	-o ProxyCommand="ssh -l fred %h 'exec 3<>/dev/tcp/server.example.org/22; cat <&3 & cat >&3; kill $!'" \
	jumphost.example.org

This pseudo device method is included here mostly as a curiosity but can serve as a last resort in some edge cases. As mentioned before, all recent deployments of OpenSSH will have the -J option for ProxyJump instead.

Port Forwarding Through One or More Intermediate Hosts edit

Tunneling, also called port forwarding, is when a port on one machine is mapped to a connection to a port on another machine. In that way remote services can be accessed as if they were local, or, in the case of reverse port forwarding, vice versa. Forwarding can be done directly from one machine to another or via a machine in the middle.

When available, the ProxyJump option is preferable. It works just as easily with port forwarding. Below a tunnel is set up from the localhost to machine2 which is behind a jump host.

$ ssh -L 8900:localhost:80 -J jumphost machine2

Thus port 8900 on the localhost will actually be a tunnel to port 80 on machine2. It can be as simple as that. As with the usual ProxyJump option, jump hosts can be chained by joining them with commas.

$ ssh -L 8900:localhost:80 -J jumphost1.localnet.lan,jumphost2.localnet.lan machine2

Alternative accounts and ports can be specified that way, too, if needed. This method works best with key- or certificate-based authentication. It is possible to use all the options in this way, such as -X for X11 forwarding. The same with -D for making a SOCKS proxy.
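The same forwarding can be made persistent in ssh_config(5). This sketch assumes hypothetical host names and mirrors the command-line example above:

```
Host machine2
        HostName machine2.example.org
        ProxyJump jumphost.example.org
        LocalForward 8900 localhost:80
```

With that in place, a plain ssh machine2 opens the tunnel; a DynamicForward directive would do the same for a SOCKS proxy.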

Old: Port Forwarding Via a Single Intermediate Host Without ProxyJump edit

Below we are setting up a tunnel from the localhost to machine2, which is behind a firewall, machine1. The tunnel will be via machine1 which is publicly accessible and also has access to machine2.

$ ssh -L 2222:machine2:22 machine1

Next connecting to the tunnel will actually connect to the second host, machine2.

$ ssh -p 2222 remoteuser@localhost

Here is an example of running rsync(1) between the two hosts using machine1 as an intermediary with the above setup.

$ rsync -av -e "ssh -p 2222"  /path/to/some/dir/   localhost:/path/to/some/dir/

SOCKS Proxy edit

It is possible to connect via an intermediate machine using a SOCKS proxy. SOCKS4 and SOCKS5 proxies are currently supported by OpenSSH. SOCKS5[70] allows transparent traversal of a firewall or other barrier by an application and can use strong authentication with help of GSS-API. Dynamic application-level port forwarding allows the outgoing port to be allocated on the fly thus creating a proxy at the TCP session level.

Here the web browser can connect to the SOCKS proxy on port 3555 on the local host:

$ ssh -D 3555 server.example.org
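Whether the proxy is actually up can be checked with any SOCKS-capable client. For example, with curl(1), if available, and an arbitrary test URL:

```
$ curl --socks5-hostname localhost:3555 http://www.example.org/
```

The --socks5-hostname variant also sends the DNS lookup through the proxy rather than resolving the name locally.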

Using ssh(1) as a SOCKS5 proxy, or in any other capacity where forwarding is used, you can specify multiple ports in one action:

$ ssh -D 80 -D 8080 -f -C -q -N server.example.org

You'll also want the DNS requests to go via your proxy. So, for example, in recent versions of Firefox, there may be an option "Proxy DNS when using SOCKS v5" to check. Or in older versions, about:config needs network.proxy.socks_remote_dns set to true instead. However, in Chromium[71], you'll need to launch the browser while adding two run-time options --proxy-server and --host-resolver-rules to point to the proxy and tell the browser not to send any DNS requests over the open network.
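Following the Chromium documentation, those two options might be combined as below, assuming a SOCKS proxy on local port 3555 as in the earlier example:

```
$ chromium --proxy-server="socks5://localhost:3555" \
           --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE localhost"
```

The MAP rule causes every name except the proxy host itself to fail local resolution, forcing lookups through the proxy.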

It will be similar for other programs that support SOCKS proxies. So, you can tunnel Samba over ssh(1), too.

Going via a jump host is just a matter of using the ProxyJump option:

$ ssh -D 8899 -J jumphost machine2

Chaining multiple jump hosts can be done this way too. See the earlier section on that for discussion and examples.

Old: SOCKS Proxy Via a Single Intermediate Host edit

If you want to open a SOCKS proxy via an intermediate host, it is possible:

$ ssh -D 3555 -J machine1 machine2

On older clients, an extra step is needed.

$ ssh -L 8001:localhost:8002 machine1 -t ssh -D 8002 machine2

In the second example, the client will see a SOCKS proxy on port 8001 on the local host, which is actually a connection to machine1 and the traffic will ultimately enter and leave the net through machine2. Port 8001 on the local host connects to port 8002 on machine1 which is a SOCKS proxy to machine2. The port numbers can be chosen to be whatever is needed, but forwarding privileged ports still requires root privileges.

SSH Over Tor edit

There are two ways to use SSH over Tor. One is using the client over Tor, the other is to host the server as an Onion service. Running the client over Tor allows its origin to be hidden. Hosting the server behind Tor allows its location to be hidden. The two methods can be combined.

Tunneling the SSH Client Over Tor with Netcat edit

Besides using ssh(1) as a SOCKS proxy, it is possible to tunnel the SSH protocol itself over another SOCKS proxy such as Tor. Tor is anonymity software and a corresponding network that uses relay hosts to conceal a user's location and network activity. Its architecture is intended to prevent surveillance and traffic analysis. Tor can be used in cases where it is important to conceal the point of origin of the SSH client. It can also be used to connect to onion services. Unfortunately, connecting through Tor often comes at the expense of noticeable latency.

On the end point which the client sees, Tor is a regular SOCKS5 proxy and can be used like any other SOCKS5 proxy. So this is tunneling SSH over a SOCKS proxy. The easiest way to do this is with the torsocks utility if available, simply by preceding a SSH command with it:[72]

$ torsocks ssh server.example.org

However, this can also be accomplished by using netcat:

$ ssh -o ProxyCommand="nc -X 5 -x localhost:9050 %h %p" server.example.org

When attempting a connection like this, it is very important that it does not leak information. In particular, the DNS lookup should occur over Tor and not be done by the client itself. If VerifyHostKeyDNS is used, make sure that it is set to 'no'. The default is 'no', but check to be sure. It can be passed as a run-time argument to remove any doubt or uncertainty.

$ ssh -o VerifyHostKeyDNS=no -o ProxyCommand="nc -X 5 -x localhost:9050 %h %p" server.example.org

Using the netcat-openbsd nc(1) package, this seems not to leak any DNS information. Other netcat packages might or might not be the same. It's also not clear if there are other ways in which this method might leak information. YMMV.

Since these methods proxy through Tor, all of them will also enable connecting to onion services. You can configure SSH to automatically connect to these through Tor, without affecting other types of connections. An entry similar to the following can be added to ssh_config or ~/.ssh/config:

Host *.onion
        VerifyHostKeyDNS no
        ProxyCommand nc -x localhost:9050 -X 5 %h %p

You can further add CanonicalizeHostname yes before any Host declarations so that if you give onion services a nickname, SSH will apply this configuration after determining the hostname is a .onion address.
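For example, with a hypothetical nickname and a made-up onion address, that might look like:

```
CanonicalizeHostname yes

Host backup
        HostName yourlongversion3onionaddressgoeshereyourlongversion3o.onion

Host *.onion
        VerifyHostKeyDNS no
        ProxyCommand nc -x localhost:9050 -X 5 %h %p
```

After the nickname expands to the .onion name, the Host *.onion block applies and the connection goes out through Tor.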

Providing SSH as an Onion Service edit

SSH can be served from an .onion address to provide anonymity and privacy for the client and, to a certain extent, the server. Neither will know where the other is located, though a lot of additional precautions must be taken to come close to anonymizing the server itself and at least some level of trusting the users will still be necessary.

The first step in making SSH available over Tor is to set up sshd_config(5) so that the SSH service listens on the localhost address, and only on the localhost address.

ListenAddress

Multiple ListenAddress directives can be used if multiple ports are to be made available. However, any ListenAddress directives provided should bind only to addresses in the network or the IPv6 equivalent, ::1. Listening on any WAN or LAN address would defeat Tor's anonymity by allowing the SSH server to be identified from its public keys.

The next step in serving SSH over Tor is to set up a Tor client with a hidden service forwarded to the local SSH server. Follow the instructions for installing a Tor client given at the Tor Project web site[73], but skip the part about the web server if it is not needed. Add the appropriate HiddenServicePort directive to match the address and port that sshd(8) is using.

HiddenServicePort 22

If necessary, add additional directives for additional ports. A 56-character version 3 onion address[74] can be made by adding HiddenServiceVersion 3 to the Tor configuration in versions of Tor that support it.

Be sure that HiddenServiceDir points to a location appropriate for your system. The onion address of the new SSH service will then be in the file hostname inside the directory indicated by the HiddenServiceDir directive and will be accessible regardless of where it is on the net or how many layers of NAT have it buried. To use the SSH service and verify that it is actually available over Tor, see the preceding section on using the SSH client over Tor with Netcat.
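Put together, a minimal torrc fragment for this might look like the following; the directory path is only an example and varies by system:

```
HiddenServiceDir /var/lib/tor/ssh/
HiddenServiceVersion 3
HiddenServicePort 22
```

After restarting Tor, the onion address can then be read from the hostname file, here /var/lib/tor/ssh/hostname.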

Just making it available over Tor is not enough alone to anonymize the server. However, one of the other advantages of this method is NAT punching. If the SSH service is behind several layers of NAT, then providing it as an Onion service allows passing through those layers seamlessly without configuring each router at each layer. This eliminates the need for a reverse tunnel to an external server and works through an arbitrary number of layers of NAT such as are now found with mobile phone modems.

Passing Through a Gateway with an Ad Hoc VPN edit

Two subnets can be connected over SSH by configuring the network routing on the end points to use the tunnel. The result is a VPN. A drawback is that root access is needed on both hosts, or at least sudo(8) access to ifconfig(8) and route(8). A related more limited and more archaic approach, not presented here but which does not require root at the remote end, would be to use ssh to establish connectivity and then establish a network using PPP and slirp.[75]

Note that there are very few instances where use of a VPN is legitimately called for, not because VPNs are illegal (quite the opposite, indeed data protection laws in many countries make them absolutely compulsory to protect content in transit) but simply because OpenSSH is usually flexible enough to complete most routine sysadmin and operational tasks using normal SSH methods as and when required. This SSH ad-hoc VPN method is therefore needed only very rarely.

Take this example with two networks. One network has the address range through The other has the address range through Each has a machine, and respectively, that will function as a gateway. Local machine numbering starts with 3 because 2 will be used for the tunnel interfaces on each LAN.

    + ========= SSH tunnel =========    +
               |                                             |
 10.0.50.etc---+                                             +---172.16.99.etc

First a tun device is created on each machine, a virtual network device for point-to-point IP tunneling. The tun interfaces on these two gateways are then connected by an SSH tunnel. Each tun interface is assigned an IP address.

The tunnel connects machines and to each other, and each is already connected to its own local area network (LAN). Here is a VPN with the client as and the remote as First, set up on the client:

$ ssh -f -w 0:1 root@ true
$ ifconfig tun0 netmask
$ route add

On the server:

$ ifconfig tun1 netmask
$ route add
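Assuming the gateways took the .1 addresses and the tunnel interfaces the .2 addresses as described above, the new link can be checked from the client side; ping(8) here is only illustrative:

```
$ ping -c 3
$ ping -c 3
```

The first checks the remote end of the tunnel itself, the second that the new route to the remote LAN actually carries traffic.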

Troubleshooting an Ad Hoc VPN edit

There are several possible causes to the following kind of error message:

channel 0: open failed: connect failed: open failed
Tunnel forwarding failed

One possibility is that tunneling has not yet been enabled on the server. The SSH server's default configuration has tunneling turned off, so it must be enabled explicitly using the PermitTunnel configuration directive prior to attempting an ad hoc VPN. Failure to enable tunneling will result in an error like the above when connecting using the -w option in the client. The solution in that case is to set PermitTunnel to 'yes' on the server.
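On the server that means a line like the following in sshd_config(5), followed by restarting or reloading sshd(8):

```
PermitTunnel yes
```

Running sshd -T as root and searching the output for permittunnel shows the setting currently in effect.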

Another possibility is that either the remote system or the local system or both lack the necessary network interface pseudo-device. That can happen because either or both accounts lack the privileges necessary to create the tun(4) devices on the fly. Several work-arounds exist, including using a privileged account to make the tun(4) devices on each system in advance. In other words, the solution is to manually create the necessary network interface pseudo-devices for the tunneling.
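On GNU/Linux with iproute2, for example, a privileged account might pre-create a persistent device and assign it to an unprivileged user; the account name here is hypothetical, and the BSDs use different tools:

```
$ sudo ip tuntap add dev tun0 mode tun user fred
```

Repeat on the other end, adjusting the device name to match the numbers passed to -w.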

A third possibility is that one or both of the accounts do not have proper permissions to access the network interface pseudo-devices. Check and, if necessary, correct group memberships.



  1. "Statistics from the current scan results". 2008.
  2. "OpenSSH History". OpenSSH. Retrieved 2012-11-17.
  3. "UNIX History Timeline". Éric Lévénez. Retrieved 2011-02-17.
  4. "Hobbes' Internet Timeline". Robert H'obbes' Zakon. Retrieved 2011-02-17.
  5. Howard Dahdah (2009). "The A-Z of Programming Languages: Bourne shell, or sh". Computerworld. Retrieved 2011-02-18.
  6. a b Phil Zimmermann (1991). "Why I Wrote PGP". Massachusetts Institute of Technology. Retrieved 2011-02-18.
  7. Bill Bryant; Theodor Ts'o (1988). "Designing an Authentication System: a Dialogue in Four Scenes". Retrieved 2011-02-17.
  8. "Help:SSH 1.0.0 license". FUNET. Retrieved 2013-04-13.
  9. Tatu Ylönen (1995-07-12). "ANNOUNCEMENT: Ssh (Secure Shell) Remote Login Program". news:// Retrieved 2011-11-26.
  10. "Help:SSH 1.2.12 license". friedl. Retrieved 2011-02-17.
  11. "Help:SSH license". friedl. Retrieved 2011-02-17.
  13. "OpenSSH Project History and Credits". OpenSSH. Retrieved 2011-03-10.
  14. a b Robert H'obbes' Zakon. "Hobbes' Internet Timeline". Zakon Group LLC. Retrieved 2011-02-17.
  15. Damien Miller (2013-11-29). "ChaCha20 and Poly1305 in OpenSSH". Retrieved 2014-04-26.
  18. "SSH 1.0.0 README". FUNET. 1995.
  19. "OpenSSH: Secure Shell integrated into OpenBSD operating system". LWN. 1999. Retrieved 2011-02-18.
  20. "European Parliament resolution on the existence of a global system for the interception of private and commercial communications (ECHELON interception system) (2001/2098(INI))". European Parliament. 2001. ECHELON A5-0264 2001. Retrieved 2011-02-18.
  21. "OpenSSH Manual Pages". OpenSSH. Retrieved 2011-02-17.
  22. "RFC 4251: The Secure Shell (SSH) Protocol Architecture". 2006. Retrieved 2013-10-31.
  23. "Why You Need To Stop Using FTP". 2011-07-10. Retrieved 2012-01-09.
  24. Manolis Tzanidakis (2011-09-09). "Stop Using FTP! How to Transfer Files Securely". Wazi. Retrieved 2012-01-09.
  25. Jay Ribak (2002). "Active FTP vs. Passive FTP, a Definitive Explanation". Retrieved 2020-03-20.
  26. Nils Provos (2003). "Privilege Separated OpenSSH". University of Michigan. Retrieved 2011-02-17.
  27. "OpenSSH 5.9 Release Notes". OpenSSH. 2011-09-06. Retrieved 2012-11-17.
  28. "OpenSSH 9.0 Release Notes". 2022-04-08. Retrieved 2022-04-10.
  29. "Service Name and Transport Protocol Port Number Registry". IETF. 2012.
  30. "OpenSSH 2.3.0p1 release notes".
  31. Brian Hatch (2004). "SSH Host Key Protection". SecurityFocus. Retrieved 2013-04-14.
  32. Jake Edge (2008). "Debian vulnerability has widespread effects". LWN. Retrieved 2013-04-14.
  33. Stefan Tatschner (2020-10-15). "SSH (Reverse) Tunnel Through Websocket". Retrieved 2020-10-20.
  34. Niels Provos; Peter Honeyman (2001). "ScanSSH - Scanning the Internet for SSH Servers" (PDF). Center for Information Technology Integration (CITI), University of Michigan. Retrieved 2016-03-05.
  35. "RFC 4252: The Secure Shell (SSH) Authentication Protocol". IETF. 2006. Retrieved 2021-08-10.
  36. Tucker, Darren (2021-08-10). "Re: ssh authlog: Failed none for invalid user". OpenBSD. Retrieved 2021-08-10. 
  37. Tucker, Darren (2019-04-01). "IdentityFile vs IdentitiesOnly". openssh-unix-dev mailing list. Retrieved 2019-04-04. 
  38. "SSH Troubleshooting Guide". IT Tavern. 2023-01-17. Retrieved 2023-01-26.
  39. "The OpenBSD Foundation 2016 Fundraising Campaign". The OpenBSD Foundation. 2016. Retrieved 2016-03-07.
  40. "Sharing Terminal Sessions With Tmux And Screen". HowToForge. 2012.
  41. "How Rsync Works". Samba.
  42. "NEWS for rsync 2.6.0 (1 Jan 2004)". Samba. 2004-01-01. Retrieved 2020-05-02.
  43. "openrsync imported into the tree". Undeadly. 2019-02-11. Retrieved 2020-05-10.
  44. Dan Langille (2014-05-03). "zfs send on FreeBSD over ssh using mbuffer". Retrieved 2020-05-22.
  45. "Openssh SFTP Chroot Code Execution". 2018-01-07. Retrieved 2018-01-09.
  46. "The Secure Shell (SSH) Authentication Protocol". IETF. 2006. Retrieved 2015-05-06.
  47. Steve Friedl (2006-02-22). "An Illustrated Guide to SSH Agent Forwarding". Retrieved 2013-04-27.
  48. Daniel Robbins (2002-02-01). "Common threads: OpenSSH key management, Part 3". IBM. Retrieved 2013-04-27.
  49. Vincent Bernat (2020-04-05). "Safer SSH agent forwarding". Retrieved 2020-10-04.
  50. "Managing multiple SSH agents". Wikimedia. Retrieved 2020-04-07.
  51. Damien Miller (2021-12-16). "SSH agent restriction". OpenSSH. Retrieved 2022-03-06.
  52. "OpenSSH U2F/FIDO support in base". OpenBSD-Tech Mailing List. 2019-11-14. Retrieved 2021-03-24.
  53. Miller, Damien (2023-03-01). "Why does ssh-keyscan not use .ssh/config?". Retrieved 2023-03-01. 
  54. Damien Miller (2015-02-01). "Key rotation in OpenSSH 6.8+". DJM's Personal Weblog. Retrieved 2016-03-05.
  55. Damien Miller (2015-02-17). "Hostkey rotation, redux". DJM's Personal Weblog. Retrieved 2016-03-05.
  56. Stephen Harris (2016-10-30). "Using SSH certificates". Retrieved 2020-05-07.
  57. Maggie Danger (2014-08-07). "How to Harden SSH with Identities and Certificates". Magnus Achim Deininger. Retrieved 2020-05-07.
  58. Mike Malone (2019-09-11). "If you're not using SSH certificates you're doing SSH wrong". Smallstep Labs, Inc. Retrieved 2020-05-07.
  59. Peter N M Hansteen (2011). "Firewalling with PF".
  60. "OpenSSH 3.9 Release Notes". OpenSSH. 2004-08-18. Retrieved 2018-12-16.
  61. Szeredi, Miklos (2016-06-16). "[GIT PULL] overlayfs fixes for 4.7-rc3". Retrieved 2018-10-05.
  62. "OpenSSH 7.3 Release Notes". OpenSSH. 2016-08-01. Retrieved 2016-08-01.
  63. Peter Hessler (2018-12-04). "phessler". Mastodon. Retrieved 2018-12-05.
  64. Mike Lee Williams (2017-07-13). "Tunneling an SSH connection only when necessary using Match". Retrieved 2019-09-05.
  65. Andrew Hewus Fresh (2019-08-25). "afresh1". Mastodon. Retrieved 2019-09-05.
  66. "OpenSSH 5.4 Release Notes". OpenSSH. 2010-04-07. Retrieved 2013-04-19.
  67. Josh Hoblitt (2011-09-03). "Recursively chaining SSH ProxyCommand". [Read This Fine Material] from Joshua Hoblitt. Retrieved 2014-02-14.
  68. Mike Hommey (2016-02-08). "SSH through jump hosts, revisited". glandium. Retrieved 2016-02-09.
  69. "All about ssh ProxyCommand". Stack Exchange. 2011.
  70. "SOCKS Protocol version 5". IETF. Retrieved 2011-02-17.
  71. "Configuring a SOCKS proxy server in Chrome". The Chromium Projects. Retrieved 2016-12-10.
  72. "SSH Over Tor". The Tor Project. 2012-08-28. Retrieved 2013-05-04.
  73. "Configuring Hidden Services for Tor". The Tor Project, Inc. Retrieved 2016-12-09.
  74. "Tor Rendezvous Specification - Version 3". The Tor Project, Inc. 2015-05-26. Retrieved 2018-12-14.
  75. Jeremy Impson (2008-09-16). "pppsshslirp: create a PPP session through SSH to a remote machine to which you don't have root". Retrieved 2016-12-10.

References edit