Why is my SSH login slow?
up vote
86
down vote
favorite
I'm seeing delays in SSH logins. Specifically, there are two places where I see anywhere from instantaneous response to multi-second delays:
- between issuing the ssh command and getting a login prompt, and
- between entering the passphrase and having the shell load.
I'm looking specifically at SSH details here. Obviously network latency, the speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I SSH to a wide variety of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the SSH server configuration is unchanged from the OS's defaults.
What SSH server configuration settings should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.?
linux ssh login performance solaris
Are you using local accounts? Sometimes I find PAM authentication can add a delay when logging in with SSH.
– Sirex
Jul 22 '10 at 7:08
Usually local accounts. Sometimes NIS.
– Peter Lyons
Jul 24 '10 at 2:39
edited Jun 8 '17 at 10:30 – Bob
asked Jul 22 '10 at 7:02 – Peter Lyons
22 Answers
up vote
110
down vote
accepted
Try setting UseDNS to no in /etc/sshd_config or /etc/ssh/sshd_config.
edited Dec 2 '13 at 10:07
answered Jul 22 '10 at 8:38
Paul R
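As a sketch of how that edit can be applied idempotently from a shell: this runs against a scratch copy of the file, so CFG and the stand-in contents are placeholders; point CFG at the real /etc/ssh/sshd_config and restart sshd to apply it for real.

```shell
# Set "UseDNS no", replacing an existing setting or appending one.
CFG=$(mktemp)                                # scratch stand-in for sshd_config
printf '%s\n' 'Port 22' '#UseDNS yes' > "$CFG"
if grep -q '^UseDNS' "$CFG"; then
    sed -i 's/^UseDNS.*/UseDNS no/' "$CFG"   # replace an active setting
else
    echo 'UseDNS no' >> "$CFG"               # or append one
fi
grep 'UseDNS' "$CFG"
# then: sudo systemctl restart sshd   (svcadm restart ssh on Solaris)
```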
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
I was skeptical as I used to log in using the IP address (home LAN), but this solution fixed my issue. For Google's sake: though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when running ssh -vvv).
– Skippy le Grand Gourou
May 19 '14 at 10:42
2
+1 for making it explicit, the file /etc/ssh/sshd_config! I was adding in /etc/sshd_config and seeing no difference at all!!
– vyom
Nov 20 '14 at 9:22
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
up vote
34
down vote
When I ran ssh -vvv against a server with similarly slow performance, I saw it hang here:
debug1: Next authentication method: gssapi-with-mic
By editing /etc/ssh/ssh_config and commenting out that authentication method, I got login performance back to normal. Here's what I have in /etc/ssh/ssh_config on the server:
GSSAPIAuthentication no
You can also set this globally on the server so that it doesn't accept GSSAPI authentication at all: just add GSSAPIAuthentication no to /etc/ssh/sshd_config on the server and restart the service.
edited Sep 9 '13 at 8:39
answered Sep 22 '10 at 17:42
Joshua
I found this to be the case with my RHEL5 servers once winbind/AD logins were configured.
– Chad
Feb 22 '13 at 20:57
Worked for me thanks +1.
– racic
Jan 12 '15 at 15:02
This works for me on an Ubuntu 14.04 server.
– Penghe Geng
Aug 3 '16 at 14:11
For CentOS 7 you need to set both GSSAPIAuthentication no and UseDNS no in the /etc/ssh/sshd_config file.
– Sunry
Feb 22 at 2:34
up vote
14
down vote
For me, the culprit was IPv6 resolution: it was timing out. (A bad DNS setting at my hosting provider, I guess.) I discovered this by running ssh -v, which showed which step was hanging.
The solution is to run ssh with the -4 option:
ssh -4 me@myserver.com
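If -4 fixes it, the preference can be made permanent in the client configuration instead of typing the flag each time; a minimal sketch (the Host pattern is just an example):

```
# ~/.ssh/config
Host *
    AddressFamily inet    # force IPv4; "inet6" would force IPv6 instead
```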
2
I suspect more and more of us are going to see this as time passes and things (badly and) slowly accommodate IPV6. Thanks!
– sage
Feb 3 '16 at 23:13
... and this answer is particularly unhelpful without the debug message that confirms this is the problem.
– E.P.
Apr 29 '16 at 12:21
In my experience this is a very common problem when SSH is listening on dual-stack interfaces, and it's the first thing I check when I can log in but it takes more time than expected.
– Mogget
Dec 17 '16 at 21:44
Is there any chance we can fix IPv6 rather than defaulting to IPv4?
– msrd0
Sep 26 at 16:15
up vote
11
down vote
With systemd, login may hang on D-Bus communication with logind after some upgrades; in that case you need to restart logind:
systemctl restart systemd-logind
Saw that on Debian 8, Arch Linux, and on a SUSE list.
1
Oh wow, now that was the culprit! Thanks a bunch!
– mahatmanich
Mar 4 '17 at 22:29
Same for me. Took a while to rule out all possible DNS and SSH issues first. Note: If the issue applies to slow sudo as well, try this first.
– Michael
Aug 30 '17 at 10:36
up vote
9
down vote
You can always start ssh with the -v option, which displays what is being done at each moment.
$ ssh -v you@host
With the information you gave I can only suggest some client side configurations:
Since you write that you are entering passwords manually, I would suggest using public-key authentication if possible; this removes you as a speed bottleneck.
You could also disable X forwarding with -x and authentication-agent forwarding with -a (these might already be disabled by default). Disabling X forwarding in particular can give you a big speed improvement if your client needs to start an X server for the ssh command (e.g. under OS X).
Everything else really depends on what kinds of delays you experience where and when.
Good hint about verbosity; you can also increase it with more v's, up to 3 IIRC.
– vtest
Sep 22 '10 at 18:51
up vote
7
down vote
Regarding the second point, here is an answer that requires neither modifying the server nor having root/administrative privileges.
You need to edit your "user ssh_config" file which is:
vi $HOME/.ssh/config
(Note: you would have to create the directory $HOME/.ssh if it does not exist)
And add:
Host *
GSSAPIAuthentication no
GSSAPIDelegateCredentials yes
You can do so on a per-host basis if required :) For example:
Host linux-srv
HostName 192.158.1.1
GSSAPIAuthentication no
GSSAPIDelegateCredentials yes
Make sure the IP address matches your server's IP. One nice side effect is that your shell can now tab-complete this host alias (with bash completion for ssh installed), so you can type ssh lin + Tab and it should complete to ssh linux-srv.
up vote
4
down vote
Check /etc/resolv.conf on the server to be sure that the DNS servers listed in this file actually work, and delete any non-working entries.
Sometimes this is very helpful.
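For illustration, a cleaned-up resolv.conf might look like this; the nameserver address is a placeholder from the documentation range, and the options line caps how long a dead resolver can stall each lookup:

```
# /etc/resolv.conf
nameserver 192.0.2.53          # replace with a DNS server that actually answers
options timeout:2 attempts:2   # fail over quickly instead of hanging the login
```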
up vote
2
down vote
Besides the DNS issues already mentioned, if you're SSHing into a server with many NFS mounts, there can be a delay between password and prompt while the quota command checks your usage/quota on every filesystem not mounted with the noquota option. On Solaris systems you can see this in the default /etc/profile and skip it by running touch $HOME/.hushlogin.
up vote
1
down vote
This works fine:
# uname -a
SunOS oi-san-01 5.11 oi_151a3 i86pc i386 i86pc Solaris
# ssh -V
Sun_SSH_1.5, SSH protocols 1.5/2.0, OpenSSL 0x009080ff
# echo "GSSAPIAuthentication no" >> /etc/ssh/sshd_config
# echo "LookupClientHostnames no" >> /etc/ssh/sshd_config
# svcadm restart ssh
UseDNS no does not work with OpenIndiana!
Read "man sshd_config" for all the options.
Use "LookupClientHostnames no" if your server cannot resolve client hostnames.
up vote
1
down vote
If none of the above answers work and you're facing DNS reverse-lookup problems, check whether nscd (the name service cache daemon) is installed and running.
If this is the problem, it's because you have no DNS cache, so every query for a hostname that is not in your hosts file goes to your name server instead of being answered from the cache.
I tried all the options above, and the only change that worked was starting nscd.
You should also verify the resolution order in /etc/nsswitch.conf so that the hosts file is consulted first.
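A hosts line that consults the local file before DNS looks like this (a sketch; keep whatever other sources your system genuinely needs):

```
# /etc/nsswitch.conf
hosts: files dns
```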
up vote
1
down vote
This is probably only specific to the Debian/Ubuntu OpenSSH, which includes the user-group-modes.patch written by one of the Debian package maintainers. This patch allows the ~/.ssh files to have the group writable bit set (g+w) if there is only one user with the same gid as that of the file. The patch's secure_permissions() function does this check. One of the phases of the check is to go through each passwd entry using getpwent() and compare the gid of the entry with the gid of the file.
On a system with many entries and/or slow NIS/LDAP authentication, this check will be slow. nscd does not cache getpwent() calls, so every passwd entry will be read over the network if the server is not local. On the system I found this, it added about 4 seconds for each invocation of ssh or login into the system.
The fix is to remove the group-writable bit on all of the files in ~/.ssh by running chmod g-w ~/.ssh/*.
up vote
1
down vote
I found that restarting systemd-logind.service only cured the problem for a few hours. Changing UsePAM from yes to no in sshd_config has resulted in fast logins, although motd is no longer displayed.
Comments about security issues?
I had gone through EVERY other suggestion here, and this is the only thing that fixed the issue on my Samba4-enabled server... THANKS!
– Deven Phillips
Oct 21 '16 at 12:35
WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several problems.
– bbaassssiiee
Jan 3 '17 at 11:55
up vote
1
down vote
To complete the answers showing that DNS resolution can slow your SSH login: sometimes a firewall rule is missing.
For example, if you DROP all INPUT packets by default
iptables -t filter -P INPUT DROP
then you'll have to accept inbound SSH traffic and the replies to the server's own DNS queries. Note that DNS replies arrive with source port 53 and an ephemeral destination port, so a stateful rule is the cleanest way to let them in:
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -t filter -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
up vote
1
down vote
With ssh -vvv, the connection went fine until it hung for at least 20 seconds while the system was trying to get the terminal:
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
... waiting ... waiting ... waiting
After doing a systemctl restart systemd-logind on the server, I had an instant connection again!
This was on Debian 8, so systemd was the issue here.
Note: Bastien Durel already gave an answer for this issue, but it lacks the debug information. I hope this is helpful to someone.
up vote
1
down vote
I have recently found another cause of slow ssh logins.
Even if you have UseDNS no in /etc/sshd_config, sshd may still perform reverse DNS lookups if /etc/hosts.deny has an entry like:
nnn-nnn-nnn-nnn.rev.some.domain.com
That might happen if you have DenyHosts installed in your system.
It would be great if someone knew how to make DenyHosts avoid putting this kind of entry in /etc/hosts.deny.
Here is a link to the DenyHosts FAQ on how to remove entries from /etc/hosts.deny - see "How can I remove an IP address that DenyHosts blocked?"
up vote
1
down vote
We may find that the preferred name-resolution order isn't the hosts file first and then DNS.
For example, this would be the usual configuration:
[root@LINUX1 ~]# cat /etc/nsswitch.conf|grep hosts
#hosts: db files nisplus nis dns
hosts: files dns myhostname
First the hosts file is consulted (option: files) and then DNS (option: dns). However, we may find that another name-resolution mechanism has been added that is not operational and is causing the slowness when attempting reverse resolution.
If the name-resolution order isn't correct, you can change it in /etc/nsswitch.conf.
Extracted from: http://www.sysadmit.com/2017/07/linux-ssh-login-lento.html
up vote
1
down vote
This thread already provides a bunch of solutions, but mine is not among them =).
So here it is.
My problem (it took about 1 minute to SSH into my Raspberry Pi) was due to a corrupted .bash_history file.
Since the file is read at login, it was causing the delay. Once I removed the file, login time went back to normal, essentially instantaneous.
Hope this helps some other people.
Cheers
up vote
0
down vote
For me, I needed GSSAPI and I didn't want to turn off reverse DNS lookups; that just didn't seem like a good idea, so I checked the man page for resolv.conf. It turned out that a firewall between me and the servers I was SSHing to was interfering with DNS requests, because they weren't in a form the firewall expected. In the end, all I needed to do was add this line to resolv.conf on the servers I was SSHing to:
options single-request-reopen
up vote
0
down vote
Remarkably, a package update of bind on CentOS 7 broke named, which now stated in the log that /etc/named.conf had a permissions problem. It had worked well for months with mode 0640; now it wants 0644. This makes sense, as the named daemon runs as the 'named' user.
With named down, everything was slow, from SSH logins to page serving from the local web server and sluggish LAMP apps, most probably because every request would time out against the dead local server before falling back to the configured secondary DNS.
up vote
0
down vote
I tried all the answers but none of them worked. Finally I found my problem:
First I ran sudo tail -f /var/log/auth.log
so I could watch the SSH log.
Then in another session I ran ssh 172.16.111.166
and noticed it waiting on
/usr/bin/sss_ssh_knownhostsproxy -p 22 172.16.111.166
After searching, I located this line in /etc/ssh/ssh_config:
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
I commented it out and the delay was gone.
up vote
0
down vote
For me there was an issue in my local /etc/hosts file: ssh was trying two different IPs (one wrong), which took forever to time out.
Using ssh -v did the trick here:
$ ssh -vvv remotesrv
OpenSSH_6.7p1 Debian-5, OpenSSL 1.0.1k 8 Jan 2015
debug1: Reading configuration data /home/mathieu/.ssh/config
debug1: /home/mathieu/.ssh/config line 60: Applying options for remotesrv
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to remotesrv [192.168.0.10] port 22.
debug1: connect to address 192.168.0.10 port 22: Connection timed out
debug1: Connecting to remotesrv [192.168.0.26] port 22.
debug1: Connection established.
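For illustration, the kind of stale duplicate that produces a log like the one above would look like this in /etc/hosts (addresses taken from the debug output):

```
# /etc/hosts -- two entries for the same name; ssh tries the dead one first
192.168.0.10  remotesrv    # stale, unreachable: remove this line
192.168.0.26  remotesrv    # current address
```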
The /etc/hosts of the server?
– Daniel F
Nov 20 at 19:17
up vote
0
down vote
Note: This started as a "how to debug" tutorial, but ended up containing the solution that helped me on an Ubuntu 16.04 LTS server.
TL;DR: Run landscape-sysinfo and check whether that command takes a long time to finish; it's the system-information printout shown on a new SSH login. Note that this command isn't available on all systems; the landscape-common package installs it. ("But wait, there's more...")
Start a second SSH server on another port on the machine that has the problem. Run it in debug mode, which keeps it from forking and makes it print debug messages:
sudo /usr/sbin/sshd -ddd -p 44321
connect to that server from another machine in verbose mode:
ssh -vvv -p 44321 username@server
My client outputs the following lines right before starting to sleep:
debug1: Entering interactive session.
debug1: pledge: network
Googling that isn't really helpful, but the server logs are better:
debug3: mm_send_keystate: Finished sending state [preauth]
debug1: monitor_read_log: child log fd closed
debug1: PAM: establishing credentials
debug3: PAM: opening session
---- Pauses here ----
debug3: PAM: sshpam_store_conv called with 1 messages
User child is on pid 28051
I noticed that when I change UsePAM yes to UsePAM no, this issue is resolved.
It is not related to UseDNS or any other setting; only UsePAM affects this problem on my system.
I have no clue why, and I'm also not leaving UsePAM at no, because I don't know what the side effects are, but this lets me continue investigating.
So please don't consider this to be an answer, but a first step to start finding out what's wrong.
So I continued investigating, and ran sshd under strace (sudo strace /usr/sbin/sshd -ddd -p 44321). This yielded the following:
sendto(4, "<87>Nov 20 20:35:21 sshd[2234]: "..., 110, MSG_NOSIGNAL, NULL, 0) = 110
close(5) = 0
stat("/etc/update-motd.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
umask(022) = 02
rt_sigaction(SIGINT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], , 8) = 0
clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffde6152d2c) = 2385
wait4(2385, # BLOCKS RIGHT HERE, BEFORE THE REST IS PRINTED OUT # [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 2385
The line /etc/update-motd.d made me suspicious; apparently the process waits for the result of the scripts in /etc/update-motd.d.
So I cd'd into /etc/update-motd.d and ran sudo chmod -x * to prevent PAM from running all the files that generate this dynamic Message Of The Day (which includes the system load and whether packages need to be upgraded), and this solved the issue.
This is a server based on an "energy-efficient" N3150 CPU which has a lot of work to do 24/7, so I think that collecting all this motd-data was just too much for it.
I may start to re-enable scripts in that folder selectively to see which are less harmful, but calling landscape-sysinfo in particular is very slow, and 50-landscape-sysinfo does call that command. I think that is the one causing the biggest delay.
After re-enabling most of the files I came to the conclusion that 50-landscape-sysinfo and 99-esm were the cause of my troubles. 50-landscape-sysinfo took about 5 seconds to execute and 99-esm about 3 seconds; all the remaining files took about 2 seconds altogether.
Neither 50-landscape-sysinfo nor 99-esm is crucial. 50-landscape-sysinfo prints out interesting system stats (including whether you're low on space!), and 99-esm prints messages related to Ubuntu Extended Security Maintenance.
Finally you can create a script with echo '/usr/bin/landscape-sysinfo' > info.sh && chmod +x info.sh
and get that printout upon request.
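To find the slow script without disabling everything at once, each file in the directory can be timed individually. A sketch, run here against a scratch directory standing in for /etc/update-motd.d so the timings are reproducible:

```shell
# Time each executable in a motd-style directory to locate the slow one.
DIR=$(mktemp -d)                            # stand-in for /etc/update-motd.d
printf '#!/bin/sh\nsleep 0.2\n' > "$DIR/50-slow"
printf '#!/bin/sh\ntrue\n'      > "$DIR/10-fast"
chmod +x "$DIR"/*
for f in "$DIR"/*; do
    start=$(date +%s%N)
    "$f" > /dev/null
    end=$(date +%s%N)
    printf '%s: %d ms\n' "${f##*/}" $(( (end - start) / 1000000 ))
done
```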
22 Answers
22
active
oldest
votes
22 Answers
22
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
110
down vote
accepted
Try setting UseDNS
to no
in /etc/sshd_config
or /etc/ssh/sshd_config
.
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when runningssh -vvv
).
– Skippy le Grand Gourou
May 19 '14 at 10:42
2
+1 for making it explicit, the file/etc/ssh/sshd_config
! I was adding in/etc/sshd_config
and seeing no difference at all!!
– vyom
Nov 20 '14 at 9:22
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
|
show 3 more comments
up vote
110
down vote
accepted
Try setting UseDNS
to no
in /etc/sshd_config
or /etc/ssh/sshd_config
.
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when runningssh -vvv
).
– Skippy le Grand Gourou
May 19 '14 at 10:42
2
+1 for making it explicit, the file/etc/ssh/sshd_config
! I was adding in/etc/sshd_config
and seeing no difference at all!!
– vyom
Nov 20 '14 at 9:22
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
|
show 3 more comments
up vote
110
down vote
accepted
up vote
110
down vote
accepted
Try setting UseDNS
to no
in /etc/sshd_config
or /etc/ssh/sshd_config
.
Try setting UseDNS
to no
in /etc/sshd_config
or /etc/ssh/sshd_config
.
edited Dec 2 '13 at 10:07
answered Jul 22 '10 at 8:38
Paul R
4,18611627
4,18611627
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when runningssh -vvv
).
– Skippy le Grand Gourou
May 19 '14 at 10:42
2
+1 for making it explicit, the file/etc/ssh/sshd_config
! I was adding in/etc/sshd_config
and seeing no difference at all!!
– vyom
Nov 20 '14 at 9:22
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
|
show 3 more comments
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when runningssh -vvv
).
– Skippy le Grand Gourou
May 19 '14 at 10:42
2
+1 for making it explicit, the file/etc/ssh/sshd_config
! I was adding in/etc/sshd_config
and seeing no difference at all!!
– vyom
Nov 20 '14 at 9:22
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
5
5
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
+1 that is the most common cause of delay when logging in to ssh
– matthias krull
Jul 22 '10 at 9:22
2
2
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
"Solaris 11 note: I tried the UseDNS no setting on Solaris 11 and it corrupted the service start. Not exactly a friendly response by the service. YMMV with other *Nix variants but it seems UseDNS no may not be a valid option in Solaris 11." - comment by Keith Hoffman
– Sathya♦
Jan 11 '12 at 7:21
2
2
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when running
ssh -vvv
).– Skippy le Grand Gourou
May 19 '14 at 10:42
I was skeptical as I use to login using the IP address (home LAN), but this solution fixed my issue. For Google's sake, though it was occurring just after, the delay had nothing to do with the "key: /home/mylogin/.ssh/id_ecdsa ((nil))" message (when running
ssh -vvv
).– Skippy le Grand Gourou
May 19 '14 at 10:42
2
2
+1 for making it explicit, the file
/etc/ssh/sshd_config
! I was adding in /etc/sshd_config
and seeing no difference at all!!– vyom
Nov 20 '14 at 9:22
+1 for making it explicit, the file
/etc/ssh/sshd_config
! I was adding in /etc/sshd_config
and seeing no difference at all!!– vyom
Nov 20 '14 at 9:22
1
1
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
@SkippyleGrandGourou: Some Solaris versions were using a modified OpenSSH, called SunSSH, which had some annoying incompatibilities. Solaris 11.3 adds OpenSSH back and SunSSH will eventually be removed...
– Gert van den Berg
Jan 4 '16 at 13:01
|
show 3 more comments
up vote
34
down vote
When I ran ssh -vvv
on a server with a similar slow performance I saw a hang here:
debug1: Next authentication method: gssapi-with-mic
By editing /etc/ssh/ssh_config
and commenting out that authentication method I got the login performance back to normal. Here's what I have in my /etc/ssh/ssh_config
on the server:
GSSAPIAuthentication no
You can set this globally on the server, so it doesn't accept GSSAPI to authenticate. Just add GSSAPIAuthentication no
to /etc/ssh/sshd_config
on the server and restart the service.
I found this to be the case on with my RHEL5 servers once winbind/ad logins were configured.
– Chad
Feb 22 '13 at 20:57
Worked for me thanks +1.
– racic
Jan 12 '15 at 15:02
This works for me on a Ubuntu 14.04 server.
– Penghe Geng
Aug 3 '16 at 14:11
For CentOS 7, need to set bothGSSAPIAuthentication no
andUseDNS no
in/etc/ssh/sshd_config
file.
– Sunry
Feb 22 at 2:34
add a comment |
edited Sep 9 '13 at 8:39 by oKtosiTe
answered Sep 22 '10 at 17:42 by Joshua
up vote
14
down vote
For me, the culprit was IPv6 resolution: it was timing out (a bad DNS setting at my host provider, I guess). I discovered this by running ssh -v, which showed which step was hanging.
The solution is to run ssh with the -4 option:
ssh -4 me@myserver.com
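To make this permanent rather than typing -4 every time, the config-file equivalent of -4 is the AddressFamily option. A sketch for ~/.ssh/config (the Host pattern is an assumption matching the example above):

```
Host myserver.com
    AddressFamily inet
```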
2
I suspect more and more of us are going to see this as time passes and things (badly and) slowly accommodate IPv6. Thanks!
– sage
Feb 3 '16 at 23:13
... and this answer is particularly unhelpful without the debug message that confirms this is the problem.
– E.P.
Apr 29 '16 at 12:21
In my experience this is a very common problem when SSH is listening on dual-stack interfaces, and it's the first thing I check when I can log in but it takes more time than expected.
– Mogget
Dec 17 '16 at 21:44
Is there any chance we can fix IPv6 rather than defaulting to IPv4?
– msrd0
Sep 26 at 16:15
answered Aug 14 '15 at 0:50 by Anthony
up vote
11
down vote
With systemd, login may hang on D-Bus communication with logind after some upgrades; you then need to restart logind:
systemctl restart systemd-logind
Seen on Debian 8, Arch Linux, and reported on a SUSE list.
1
Oh wow, now that was the culprit! Thanks a bunch!
– mahatmanich
Mar 4 '17 at 22:29
Same for me. Took a while to rule out all possible DNS and SSH issues first. Note: If the issue applies to slow sudo as well, try this first.
– Michael
Aug 30 '17 at 10:36
answered May 21 '15 at 9:41 by Bastien Durel
up vote
9
down vote
You can always start ssh with the -v option, which displays what is being done at each step:
$ ssh -v you@host
With the information you gave I can only suggest some client-side configurations:
- Since you write that you are entering passwords manually, I would suggest using public-key authentication if possible. This removes you as a speed bottleneck.
- You could also disable X forwarding with -x and authentication-agent forwarding with -a (these might already be disabled by default). Disabling X forwarding in particular can give you a big speed improvement if your client needs to start an X server for the ssh command (e.g. under OS X).
Everything else really depends on what kinds of delays you experience, where, and when.
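As a sketch of the public-key setup (run locally against a scratch directory; the host name in the comment is a placeholder for your real server):

```shell
# Generate a passphrase-less ed25519 key pair in a scratch directory.
d=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$d/id_ed25519" -q
ls "$d"
# Then install the public key on the server, e.g.:
#   ssh-copy-id -i "$d/id_ed25519.pub" you@host
```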
Good hint about verbosity, you can also increase it by having more v's. Up to 3 IIRC.
– vtest
Sep 22 '10 at 18:51
answered Jul 22 '10 at 8:28 by Benjamin Bannier
up vote
7
down vote
Regarding the second point, here is an approach that requires neither modifying the server nor root/administrative privileges.
You need to edit your "user ssh_config" file:
vi $HOME/.ssh/config
(Note: you may have to create the $HOME/.ssh directory if it does not exist.)
And add:
Host *
GSSAPIAuthentication no
GSSAPIDelegateCredentials yes
You can also do this on a per-host basis if required, for example:
Host linux-srv
HostName 192.158.1.1
GSSAPIAuthentication no
GSSAPIDelegateCredentials yes
Make sure the IP address matches your server's IP. One nice advantage is that ssh will now provide tab completion for this server: you can type ssh lin + Tab and it should complete to ssh linux-srv.
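A side-effect-free sketch of creating that file with the usual strict permissions (uses a scratch directory as a stand-in for your real $HOME):

```shell
home=$(mktemp -d)                      # stand-in for $HOME
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
cat > "$home/.ssh/config" <<'EOF'
Host *
GSSAPIAuthentication no
GSSAPIDelegateCredentials yes
EOF
chmod 600 "$home/.ssh/config"
grep -c GSSAPI "$home/.ssh/config"     # both options present
```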
answered Jun 29 '12 at 7:41 by Huygens
up vote
4
down vote
Check /etc/resolv.conf on the server to make sure that the DNS servers listed in that file are working, and delete any non-working entries.
Sometimes this is very helpful.
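A small sketch for listing the configured resolvers so each can be probed individually. It runs against an inline sample so it works anywhere; on a real host, point awk at /etc/resolv.conf instead (the probing tool, e.g. dig, is an assumption):

```shell
# Extract resolver IPs from a resolv.conf-style file.
sample=$(mktemp)
printf 'nameserver 192.0.2.1\nnameserver 192.0.2.2\n' > "$sample"
awk '/^nameserver/ {print $2}' "$sample"
# Probe each one, e.g.: dig @192.0.2.1 example.com
```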
edited Jun 8 '17 at 9:34 by Greenonline
answered Jun 8 '17 at 7:57 by Elena Timoshkina
up vote
2
down vote
Besides the DNS issues already mentioned, if you're ssh'ing into a server with many NFS mounts, there can be a delay between the password and the prompt while the quota command checks your usage/quota on every filesystem not mounted with the noquota option. On Solaris systems, you can see this in the default /etc/profile and skip it by running touch $HOME/.hushlogin.
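The /etc/profile check boils down to something like the following (a sketch using a scratch HOME so it is safe to run anywhere):

```shell
home=$(mktemp -d)            # stand-in for $HOME
touch "$home/.hushlogin"     # the fix from the answer
if [ -f "$home/.hushlogin" ]; then
    echo "hushlogin set: skipping quota and motd"
fi
```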
answered Jul 22 '10 at 14:24 by alanc
up vote
1
down vote
Works fine:
# uname -a
SunOS oi-san-01 5.11 oi_151a3 i86pc i386 i86pc Solaris
# ssh -V
Sun_SSH_1.5, SSH protocols 1.5/2.0, OpenSSL 0x009080ff
# echo "GSSAPIAuthentication no" >> /etc/ssh/sshd_config
# echo "LookupClientHostnames no" >> /etc/ssh/sshd_config
# svcadm restart ssh
Note that UseDNS no does not work with OpenIndiana; use "LookupClientHostnames no" instead if your server cannot resolve client hostnames. Read "man sshd_config" for all the options.
edited Jun 12 '12 at 13:48 by Hugues Lepesant
answered Apr 25 '12 at 13:37 by Hugues
up vote
1
down vote
If none of the above answers works and you're facing DNS reverse-lookup problems, you can also check whether nscd (the name service cache daemon) is installed and running.
If this is the problem, it's because you have no DNS cache, so each time you query a hostname that is not in your hosts file the question goes to your name server instead of being answered from the cache.
I tried all the options above, and the only change that worked was starting nscd.
You should also verify the resolution order in /etc/nsswitch.conf so that the hosts file is consulted first.
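A quick sketch of the nsswitch check, shown against an inline sample so it runs anywhere; on a real host, read /etc/nsswitch.conf and verify that "files" precedes "dns" on the hosts line:

```shell
sample=$(mktemp)
printf 'hosts: files dns\n' > "$sample"
# "files" before "dns" means the hosts file is consulted first.
grep '^hosts:' "$sample"
```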
edited May 3 '13 at 14:08
answered May 2 '13 at 23:01 by altmas5
up vote
1
down vote
This is probably specific to Debian/Ubuntu OpenSSH, which includes the user-group-modes.patch written by one of the Debian package maintainers. This patch allows the ~/.ssh files to have the group-writable bit set (g+w) if there is only one user with the same gid as that of the file. The patch's secure_permissions() function performs this check; one phase of it walks through every passwd entry using getpwent() and compares the gid of the entry with the gid of the file.
On a system with many entries and/or slow NIS/LDAP authentication, this check will be slow. nscd does not cache getpwent() calls, so every passwd entry is read over the network if the server is not local. On the system where I found this, it added about 4 seconds to each invocation of ssh or login.
The fix is to remove the group-writable bit from all of the files in ~/.ssh by running chmod g-w ~/.ssh/*.
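The fix can be sketched safely against a scratch directory (GNU stat is assumed for the verification step):

```shell
d=$(mktemp -d)                      # stand-in for ~/.ssh
touch "$d/authorized_keys"
chmod g+w "$d/authorized_keys"      # simulate the problematic group-write bit
chmod g-w "$d"/*                    # the actual fix from the answer
stat -c '%A' "$d/authorized_keys"   # group triad should have no "w"
```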
answered Oct 13 '15 at 1:19 by jamesy
up vote
1
down vote
I found that restarting systemd-logind.service only cured the problem for a few hours. Changing UsePAM from yes to no in sshd_config resulted in fast logins, although the motd is no longer displayed.
Any comments about the security implications?
I had gone through EVERY other suggestion here, and this is the only thing which fixed the issue on my Samba4-enabled server... THANKS!
– Deven Phillips
Oct 21 '16 at 12:35
WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several problems.
– bbaassssiiee
Jan 3 '17 at 11:55
answered Jul 13 '16 at 22:35 by Chris Blake
up vote
1
down vote
To complement all the answers showing that DNS resolution can slow your ssh login: sometimes a firewall rule is missing.
For example, if you DROP all INPUT packets by default:
iptables -t filter -P INPUT DROP
then you'll have to accept INPUT traffic for the ssh port and for DNS requests:
iptables -t filter -A INPUT -p tcp --dport 53 -j ACCEPT
iptables -t filter -A INPUT -p udp --dport 53 -j ACCEPT
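Note that with a default-DROP INPUT policy you typically also need to let reply packets back in; a common stateful alternative (a sketch; availability of the conntrack match module and your distro's rule ordering are assumptions) accepts replies to connections the host itself initiated, which covers DNS responses to the server's own lookups without opening port 53 inbound:

```
iptables -t filter -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```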
answered Dec 6 '16 at 9:24
RGuillome
112
add a comment |
up vote
1
down vote
ssh -vvv
showed the connection going fine until it hung for at least 20 seconds while trying to get the terminal:
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
... waiting ... waiting ... waiting
After doing a systemctl restart systemd-logind
on the server I had an instant connection again!
This was on Debian 8, so systemd was the issue here!
Note: Bastien Durel already gave an answer for this issue, but it lacks the debug information. I hope this is helpful to someone.
answered Mar 4 '17 at 22:38
mahatmanich
22527
add a comment |
up vote
1
down vote
I have recently found another cause of slow ssh logins.
Even if you have UseDNS no
in /etc/sshd_config
, sshd may still perform reverse DNS lookups if /etc/hosts.deny
has an entry like:
nnn-nnn-nnn-nnn.rev.some.domain.com
That might happen if you have DenyHosts installed on your system.
It would be great if someone knew how to make DenyHosts avoid putting this kind of entry in /etc/hosts.deny
.
Here is a link to the DenyHosts FAQ on how to remove entries from /etc/hosts.deny
- see How can I remove an IP address that DenyHosts blocked?
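The FAQ's procedure boils down to stopping DenyHosts and deleting the offending lines from its data files and /etc/hosts.deny. The deletion step can be sketched with sed. It is shown here on a scratch file with a made-up hostname pattern, since editing the real /etc/hosts.deny requires root and a stopped DenyHosts:

```shell
# Build a scratch hosts.deny-style file: one reverse-lookup hostname entry, one IP entry.
printf 'sshd: 10-0-0-5.rev.some.domain.com\nsshd: 192.0.2.7\n' > hosts.deny.sample

# Delete every line matching the reverse-lookup hostname pattern.
sed -i '/\.rev\.some\.domain\.com/d' hosts.deny.sample

# Only the plain IP entry remains.
cat hosts.deny.sample
```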
edited Jun 8 '17 at 8:54
karel
9,13793138
answered Oct 24 '16 at 21:49
Marcelo Roberto Jimenez
445
add a comment |
up vote
1
down vote
You may find that the preferred name resolution order isn't the hosts file first and then DNS.
For example, this would be the usual configuration:
[root@LINUX1 ~]# cat /etc/nsswitch.conf | grep hosts
#hosts: db files nisplus nis dns
hosts: files dns myhostname
First the hosts file is consulted (option: files) and then DNS (option: dns); however, another name resolution mechanism may have been added that is not operational and is causing the slowness when attempting the reverse resolution.
If the name resolution order isn't correct, you can change it in /etc/nsswitch.conf
Extracted from: http://www.sysadmit.com/2017/07/linux-ssh-login-lento.html
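To see which path lookups actually take, query through NSS itself rather than DNS directly: getent honours the hosts: line in /etc/nsswitch.conf, just like sshd's reverse lookup does. 127.0.0.1 is used below only so the command works anywhere; substitute a real client IP to reproduce the delay:

```shell
# Reverse-resolve via NSS (files, dns, ... in the order nsswitch.conf lists them)
# and time it; a multi-second wall time points at a broken resolver in the chain.
time getent hosts 127.0.0.1
```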
answered Jul 9 '17 at 9:12
Hans Gruber
111
add a comment |
up vote
1
down vote
This thread already provides a bunch of solutions, but mine is not given here =).
So here it is.
My problem (it took about 1 minute to ssh into my Raspberry Pi) was due to a corrupted .bash_history file.
Since the file is read at login, this was causing the login delay. Once I removed the file, login time went back to normal, practically instantaneous.
Hope this will help some other people.
Cheers
answered Jan 3 at 15:02
user3320224
111
add a comment |
up vote
0
down vote
In my case I needed GSSAPI, and I didn't want to turn off reverse DNS lookups; that just didn't seem like a good idea. So I checked the man page for resolv.conf. It turned out that a firewall between me and the servers I was SSHing to was interfering with DNS requests, because they weren't in a form that the firewall expected. In the end, all I needed to do was add this line to resolv.conf on the servers that I was SSHing to:
options single-request-reopen
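For reference, a minimal resolv.conf showing where the line goes (the nameserver address is a placeholder). Per resolv.conf(5), single-request-reopen makes the glibc resolver reopen the socket before retrying the second of its paired A/AAAA queries, which works around middleboxes that mishandle two queries on one socket:

```
nameserver 192.0.2.53
options single-request-reopen
```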
edited Aug 28 '14 at 12:34
jAce
1,14541427
answered Aug 28 '14 at 11:41
Sapan Ganguly
1
add a comment |
up vote
0
down vote
Remarkably, a package update of bind on CentOS 7 broke named, which then stated in the log that /etc/named.conf had a permissions problem. It had worked well for months with 0640; now it wants 0644. This makes sense, as the named daemon runs as the 'named' user.
With named down, everything was slow, from ssh logins to page serving from the local web server, sluggish LAMP apps, etc., most probably because every request would time out against the dead local server before falling back to the configured external secondary DNS.
answered Dec 13 '16 at 17:07
David Ramirez
112
add a comment |
up vote
0
down vote
I tried all the answers but none of them worked. Finally I found my problem:
First I ran sudo tail -f /var/log/auth.log
so I could see the ssh log.
Then in another session I ran ssh 172.16.111.166
and noticed it waiting on
/usr/bin/sss_ssh_knownhostsproxy -p 22 172.16.111.166
After searching, I located this line in /etc/ssh/ssh_config
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
I commented it out and the delay was gone.
answered Jul 22 '17 at 8:21
HamedH
1012
add a comment |
up vote
0
down vote
For me there was an issue in my local /etc/hosts
file, so ssh
was trying two different IPs (one wrong), which took forever to time out.
Using ssh -v
did the trick here:
$ ssh -vvv remotesrv
OpenSSH_6.7p1 Debian-5, OpenSSL 1.0.1k 8 Jan 2015
debug1: Reading configuration data /home/mathieu/.ssh/config
debug1: /home/mathieu/.ssh/config line 60: Applying options for remotesrv
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to remotesrv [192.168.0.10] port 22.
debug1: connect to address 192.168.0.10 port 22: Connection timed out
debug1: Connecting to remotesrv [192.168.0.26] port 22.
debug1: Connection established.
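For illustration, a hypothetical /etc/hosts that would reproduce the debug output above (names and addresses are made up): the resolver returns both addresses, so ssh tries the stale one first, waits out the connection timeout, then falls through to the working one.

```
192.168.0.10   remotesrv    # stale entry; the host has moved
192.168.0.26   remotesrv    # current address
```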
The /etc/hosts
of the server?
– Daniel F
Nov 20 at 19:17
edited Nov 20 at 19:28
answered May 21 '15 at 13:01
malat
4851821
add a comment |
up vote
0
down vote
Note: This started as a "how to debug" tutorial, but ended up being the solution that helped me on an Ubuntu 16.04 LTS server.
TLDR: Run landscape-sysinfo
and check if that command takes a long time to finish; it's the system information printout on a new SSH login. Note that this command isn't available on all systems; the landscape-common
package installs it. ("But wait, there's more...")
Start a second ssh server on another port on the machine that has the problem, do so in debug mode, which won't make it fork and will print out debug messages:
sudo /usr/sbin/sshd -ddd -p 44321
connect to that server from another machine in verbose mode:
ssh -vvv -p 44321 username@server
My client outputs the following lines right before starting to sleep:
debug1: Entering interactive session.
debug1: pledge: network
Googling that isn't really helpful, but the server logs are better:
debug3: mm_send_keystate: Finished sending state [preauth]
debug1: monitor_read_log: child log fd closed
debug1: PAM: establishing credentials
debug3: PAM: opening session
---- Pauses here ----
debug3: PAM: sshpam_store_conv called with 1 messages
User child is on pid 28051
I noticed that when I changed UsePAM yes
to UsePAM no
, the issue was resolved.
It was not related to UseDNS
or any other setting; only UsePAM
affected this problem on my system.
I have no clue why, and I'm also not leaving UsePAM
at no
, because I do not know what the side effects are, but this let me continue investigating.
So please don't consider this to be an answer, but a first step towards finding out what's wrong.
So I continued investigating, and ran sshd
with strace
(sudo strace /usr/sbin/sshd -ddd -p 44321
). This yielded the following:
sendto(4, "<87>Nov 20 20:35:21 sshd[2234]: "..., 110, MSG_NOSIGNAL, NULL, 0) = 110
close(5) = 0
stat("/etc/update-motd.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
umask(022) = 02
rt_sigaction(SIGINT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], , 8) = 0
clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffde6152d2c) = 2385
wait4(2385, # BLOCKS RIGHT HERE, BEFORE THE REST IS PRINTED OUT # [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 2385
The line /etc/update-motd.d
made me suspicious; apparently the process waits for the result of whatever is in /etc/update-motd.d
So I cd
'd into /etc/update-motd.d
and ran sudo chmod -x *
to prevent PAM from running all the files which generate this dynamic Message Of The Day
, which includes system load and whether packages need to be upgraded, and this solved the issue.
This is a server based on an "energy-efficient" N3150 CPU which has a lot of work to do 24/7, so I think that collecting all this motd-data was just too much for it.
I may start to enable scripts in that folder selectively, to see which are less harmful, but especially calling landscape-sysinfo
is very slow, and 50-landscape-sysinfo
does call that command. I think that is the one which causes the biggest delay.
After reenabling most of the files I came to the conclusion that
50-landscape-sysinfo
and 99-esm
were the cause of my troubles. 50-landscape-sysinfo
took about 5 seconds to execute and 99-esm
about 3 seconds. All the remaining files took about 2 seconds altogether.
Neither 50-landscape-sysinfo
nor 99-esm
is crucial. 50-landscape-sysinfo
prints out interesting system stats (and also whether you're low on space!), and 99-esm
prints out messages related to Ubuntu Extended Security Maintenance.
Finally you can create a script with echo '/usr/bin/landscape-sysinfo' > info.sh && chmod +x info.sh
and get that printout upon request.
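To find the slow script without toggling execute bits one at a time, each file in the directory can be timed individually. A sketch, demonstrated on a scratch directory so it runs anywhere; point dir at /etc/update-motd.d on a real server:

```shell
# Create a stand-in for /etc/update-motd.d with one fast and one slow script.
dir=./motd-demo
mkdir -p "$dir"
printf '#!/bin/sh\necho fast\n' > "$dir/10-fast"
printf '#!/bin/sh\nsleep 1\necho slow\n' > "$dir/50-slow"
chmod +x "$dir"/*

# Run each script and report its wall time in milliseconds;
# the slow ones are the candidates to disable.
for f in "$dir"/*; do
  start=$(date +%s%N)
  "$f" > /dev/null
  end=$(date +%s%N)
  echo "$f: $(( (end - start) / 1000000 )) ms"
done
```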
add a comment |
up vote
0
down vote
Note: This started as a "How to debug", tutorial, but ended up being the solution that helped me on an Ubuntu 16.04 LTS server.
TLDR: Run landscape-sysinfo
and check if that command takes a long time to finish; it's the system information printout on a new SSH login. Note that this command isn't available on all systems, the landscape-common
package installs it. ("But wait, there's more...")
Start a second ssh server on another port on the machine that has the problem, do so in debug mode, which won't make it fork and will print out debug messages:
sudo /usr/sbin/sshd -ddd -p 44321
connect to that server from another machine in verbose mode:
ssh -vvv -p 44321 username@server
My client outputs the following lines right before starting to sleep:
debug1: Entering interactive session.
debug1: pledge: network
Googling that isn't really helpful, but the server logs are better:
debug3: mm_send_keystate: Finished sending state [preauth]
debug1: monitor_read_log: child log fd closed
debug1: PAM: establishing credentials
debug3: PAM: opening session
---- Pauses here ----
debug3: PAM: sshpam_store_conv called with 1 messages
User child is on pid 28051
I noticed that when I change UsePAM yes
to UsePAM no
then this issue is resolved.
Not related to UseDNS
or any other setting, only UsePAM
affects this problem on my system.
I have no clue why, and I'm also not leaving UsePAM
at no
, because I do not know which the side-effects are, but this lets me continue investigating.
So please don't consider this to be an answer, but a first step to start finding out what's wrong.
So I continued investigating, and ran sshd
with strace
(sudo strace /usr/sbin/sshd -ddd -p 44321
). This yielded the following:
sendto(4, "<87>Nov 20 20:35:21 sshd[2234]: "..., 110, MSG_NOSIGNAL, NULL, 0) = 110
close(5) = 0
stat("/etc/update-motd.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
umask(022) = 02
rt_sigaction(SIGINT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], , 8) = 0
clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffde6152d2c) = 2385
wait4(2385, # BLOCKS RIGHT HERE, BEFORE THE REST IS PRINTED OUT # [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 2385
The line /etc/update-motd.d
made me suspicious, apparently the process waits for the result of the stuff that is in /etc/update-motd.d
So I cd
'd into /etc/update-motd.d
and ran a sudo chmod -x *
in order to inhibit PAM to run all the files which generate this dynamic Message Of The Day
, which includes system load and if packages need to be upgraded, and this solved the issue.
This is a server based on an "energy-efficient" N3150 CPU which has a lot of work to do 24/7, so I think that collecting all this motd-data was just too much for it.
I may start to enable scripts in that folder selectively, to see which are less harmful, but specially calling landscape-sysinfo
is very slow, and 50-landscape-sysinfo
does call that command. I think that is the one which causes the biggest delay.
After reenabling most of the files I came to the conclusion that
50-landscape-sysinfo
and 99-esm
were the cause for my troubles. 50-landscape-sysinfo
took about 5 seconds to execute and 99-esm
about 3 seconds. All the remaining files about 2 seconds altogether.
Neither 50-landscape-sysinfo
and 99-esm
are crucial. 50-landscape-sysinfo
prints out interesting system stats (and also if you're low on space!), and 99-esm
prints out messages related to Ubuntu Extended Security Maintenance
Finally you can create a script with echo '/usr/bin/landscape-sysinfo' > info.sh && chmod +x info.sh
and get that printout upon request.
add a comment |
up vote
0
down vote
up vote
0
down vote
Note: This started as a "How to debug", tutorial, but ended up being the solution that helped me on an Ubuntu 16.04 LTS server.
TLDR: Run landscape-sysinfo
and check if that command takes a long time to finish; it's the system information printout on a new SSH login. Note that this command isn't available on all systems, the landscape-common
package installs it. ("But wait, there's more...")
Start a second ssh server on another port on the machine that has the problem, do so in debug mode, which won't make it fork and will print out debug messages:
sudo /usr/sbin/sshd -ddd -p 44321
connect to that server from another machine in verbose mode:
ssh -vvv -p 44321 username@server
My client outputs the following lines right before starting to sleep:
debug1: Entering interactive session.
debug1: pledge: network
Googling that isn't really helpful, but the server logs are better:
debug3: mm_send_keystate: Finished sending state [preauth]
debug1: monitor_read_log: child log fd closed
debug1: PAM: establishing credentials
debug3: PAM: opening session
---- Pauses here ----
debug3: PAM: sshpam_store_conv called with 1 messages
User child is on pid 28051
I noticed that when I change UsePAM yes
to UsePAM no
then this issue is resolved.
Not related to UseDNS
or any other setting, only UsePAM
affects this problem on my system.
I have no clue why, and I'm also not leaving UsePAM
at no
, because I do not know which the side-effects are, but this lets me continue investigating.
So please don't consider this to be an answer, but a first step to start finding out what's wrong.
So I continued investigating, and ran sshd
with strace
(sudo strace /usr/sbin/sshd -ddd -p 44321
). This yielded the following:
sendto(4, "<87>Nov 20 20:35:21 sshd[2234]: "..., 110, MSG_NOSIGNAL, NULL, 0) = 110
close(5) = 0
stat("/etc/update-motd.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
umask(022) = 02
rt_sigaction(SIGINT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], , 8) = 0
clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffde6152d2c) = 2385
wait4(2385, # BLOCKS RIGHT HERE, BEFORE THE REST IS PRINTED OUT # [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 2385
The line /etc/update-motd.d
made me suspicious, apparently the process waits for the result of the stuff that is in /etc/update-motd.d
So I cd
'd into /etc/update-motd.d
and ran a sudo chmod -x *
in order to inhibit PAM to run all the files which generate this dynamic Message Of The Day
, which includes system load and if packages need to be upgraded, and this solved the issue.
This is a server based on an "energy-efficient" N3150 CPU which has a lot of work to do 24/7, so I think that collecting all this motd-data was just too much for it.
I may start to enable scripts in that folder selectively, to see which are less harmful, but specially calling landscape-sysinfo
is very slow, and 50-landscape-sysinfo
does call that command. I think that is the one which causes the biggest delay.
After reenabling most of the files I came to the conclusion that
50-landscape-sysinfo
and 99-esm
were the cause for my troubles. 50-landscape-sysinfo
took about 5 seconds to execute and 99-esm
about 3 seconds. All the remaining files about 2 seconds altogether.
Neither 50-landscape-sysinfo
and 99-esm
are crucial. 50-landscape-sysinfo
prints out interesting system stats (and also if you're low on space!), and 99-esm
prints out messages related to Ubuntu Extended Security Maintenance
Finally you can create a script with echo '/usr/bin/landscape-sysinfo' > info.sh && chmod +x info.sh
and get that printout upon request.
Note: This started as a "How to debug", tutorial, but ended up being the solution that helped me on an Ubuntu 16.04 LTS server.
TLDR: Run landscape-sysinfo
and check if that command takes a long time to finish; it's the system information printout on a new SSH login. Note that this command isn't available on all systems, the landscape-common
package installs it. ("But wait, there's more...")
Start a second ssh server on another port on the machine that has the problem, do so in debug mode, which won't make it fork and will print out debug messages:
sudo /usr/sbin/sshd -ddd -p 44321
connect to that server from another machine in verbose mode:
ssh -vvv -p 44321 username@server
My client outputs the following lines right before starting to sleep:
debug1: Entering interactive session.
debug1: pledge: network
Googling that isn't really helpful, but the server logs are better:
debug3: mm_send_keystate: Finished sending state [preauth]
debug1: monitor_read_log: child log fd closed
debug1: PAM: establishing credentials
debug3: PAM: opening session
---- Pauses here ----
debug3: PAM: sshpam_store_conv called with 1 messages
User child is on pid 28051
I noticed that when I change UsePAM yes to UsePAM no, this issue is resolved. It is not related to UseDNS or any other setting; only UsePAM affects this problem on my system. I have no clue why, and I'm not leaving UsePAM at no either, because I don't know what the side effects are, but this lets me continue investigating. So please don't consider this an answer, but a first step towards finding out what's wrong.
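To experiment without editing /etc/ssh/sshd_config at all, sshd accepts -o overrides on the command line, so a throwaway debug instance can be compared with PAM on and off (a sketch, reusing the same spare port as above):

```shell
# Foreground debug sshd with PAM disabled, for comparison with the default
sudo /usr/sbin/sshd -ddd -p 44321 -o UsePAM=no
```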
So I continued investigating and ran sshd under strace (sudo strace /usr/sbin/sshd -ddd -p 44321). This yielded the following:
sendto(4, "<87>Nov 20 20:35:21 sshd[2234]: "..., 110, MSG_NOSIGNAL, NULL, 0) = 110
close(5) = 0
stat("/etc/update-motd.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
umask(022) = 02
rt_sigaction(SIGINT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_IGN, , SA_RESTORER, 0x7f15dce784b0}, {SIG_DFL, , 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], , 8) = 0
clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffde6152d2c) = 2385
wait4(2385, # BLOCKS RIGHT HERE, BEFORE THE REST IS PRINTED OUT # [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 2385
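If the plain trace leaves doubt about which child is blocking, strace can also follow forked children and timestamp each call (a sketch; same spare port as above):

```shell
# -f follows forked children, -tt timestamps each syscall,
# -e trace=execve shows only which programs get executed
sudo strace -f -tt -e trace=execve /usr/sbin/sshd -ddd -p 44321
```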
The line mentioning /etc/update-motd.d made me suspicious: apparently the process waits for the result of whatever is in /etc/update-motd.d.
So I cd'd into /etc/update-motd.d and ran sudo chmod -x * to prevent PAM from running all the files that generate this dynamic Message of the Day (which includes the system load and whether packages need to be upgraded), and this solved the issue.
This is a server based on an "energy-efficient" N3150 CPU which has a lot of work to do 24/7, so I think that collecting all this motd data was just too much for it.
I may start to re-enable the scripts in that folder selectively, to see which are less harmful, but calling landscape-sysinfo in particular is very slow, and 50-landscape-sysinfo calls that command. I think that is the one causing the biggest delay.
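To measure rather than guess, each script in that folder can be timed individually (a sketch; running as root makes the scripts behave as they do under PAM):

```shell
#!/bin/bash
# Time each update-motd.d script on its own to find the slow ones
for f in /etc/update-motd.d/*; do
    [ -x "$f" ] || continue
    echo "== $f =="
    time "$f" > /dev/null   # the 'time' report goes to stderr; script output is discarded
done
```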
After re-enabling most of the files I came to the conclusion that 50-landscape-sysinfo and 99-esm were the cause of my troubles. 50-landscape-sysinfo took about 5 seconds to execute and 99-esm about 3 seconds; all the remaining files took about 2 seconds altogether.
Neither 50-landscape-sysinfo nor 99-esm is crucial. 50-landscape-sysinfo prints out interesting system stats (including whether you're low on disk space!), and 99-esm prints messages related to Ubuntu Extended Security Maintenance.
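With the culprits identified, only those two files need to be disabled instead of the whole folder (paths as found on Ubuntu 16.04):

```shell
# Disable just the two slow motd scripts; the rest keep running
sudo chmod -x /etc/update-motd.d/50-landscape-sysinfo \
              /etc/update-motd.d/99-esm
```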
Finally, you can create a script with echo '/usr/bin/landscape-sysinfo' > info.sh && chmod +x info.sh and get that printout on demand.
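A slightly sturdier version of that one-liner, with a shebang so the script runs under any shell (the filename info.sh is just an example):

```shell
# Create an on-demand replacement for the login-time printout
cat > info.sh <<'EOF'
#!/bin/sh
# Print the landscape system summary on request instead of at every login
exec /usr/bin/landscape-sysinfo "$@"
EOF
chmod +x info.sh
```

Then run ./info.sh whenever you want the stats.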
edited Nov 20 at 20:46
answered Nov 20 at 19:15
Daniel F