You may be running several independent but similar servers at the same time, wasting time by executing the same commands on each of them one by one.
Wouldn't it be nice to send a command to all of them at once? Or to monitor all of them at once?
The following script can be used as a building block for more complex automation tasks on a small set of servers. (If you're managing over 50 servers, I'd probably look into a different way to arrange them, such as a map/reduce cluster, but below that number this might suffice.)
[code lang="python"]
#!/usr/bin/python
#########################################################
# Author: Angel Leon (gubatron@gmail.com) - October 2009
#
# Invokes a command locally and invokes the same command
# in all machines under the specified username, servers
#
# Requirement: Have a public ssh key for that user on all
# the other machines so you don't have to authenticate
# on all the other machines.
#########################################################
import os
import pipes
import sys

# set the username that has access to all the machines here
user = 'safeuser'

# add all your server names here
servers = ['server1.mydomain.com', 'server2.mydomain.com', 'server3.mydomain.com']

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print "Usage: ssh_map_command <cmd>"
        sys.exit(1)

    cmd = ' '.join(sys.argv[1:])

    # Execute locally first
    print cmd
    os.system(cmd)

    # Execute on all the servers in the list; quoting the command
    # keeps pipes and redirections from being interpreted by the
    # local shell instead of the remote one
    for server in servers:
        remote_cmd = "ssh %s@%s %s" % (user, server, pipes.quote(cmd))
        print remote_cmd
        os.system(remote_cmd)
        print
[/code]
Save as ssh_map_command and chmod +x it.
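Since os.system only prints each command's output, a variant built on the standard subprocess module can collect the output per server instead, which is handy when you want to parse the results. A minimal sketch (build_remote_commands and map_collect are hypothetical helpers, and user/servers are the same placeholders used in the script):

```python
import subprocess

# same placeholder settings as in ssh_map_command
user = 'safeuser'
servers = ['server1.mydomain.com', 'server2.mydomain.com']

def build_remote_commands(cmd, servers=servers, user=user):
    # one "ssh user@host cmd" string per server, mirroring the script
    return ['ssh %s@%s %s' % (user, s, cmd) for s in servers]

def map_collect(cmd):
    # run cmd on every server and return {server: output or error note}
    results = {}
    for server, remote in zip(servers, build_remote_commands(cmd)):
        try:
            results[server] = subprocess.check_output(remote, shell=True)
        except subprocess.CalledProcessError as e:
            results[server] = 'FAILED with exit code %d' % e.returncode
    return results
```

With that in place, something like map_collect('uptime') gives you a dictionary of load averages you can parse instead of eyeballing.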
Sample uses
Check the average load of all machines at once (then use the output to mitigate high-load issues):
[code lang="shell"]$ ssh_map_command uptime[/code]
Send a HUP signal to all your web servers (put it in an alias or another script… and that's how you start building more complex scripts). Note the quotes around the pipeline: they keep your local shell from splitting it up before it reaches the script.
[code lang="shell"]$ ssh_map_command "ps aux | grep [l]ighttpd | awk '{print \$2}' | xargs kill -HUP"[/code]
Check whether processes are alive, check memory usage across different machines, grep all the remote logs at once, svn up on all machines, rsync from one machine to many... you can even tail -f and grep all the logs at once. You can go nuts with this thing; it all depends on what you need to do.
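For the monitoring use cases (tail -f on every machine at once, for example), a sequential loop waits on each server in turn. A minimal sketch of a concurrent variant, using only the standard threading and subprocess modules (run_parallel is a hypothetical helper):

```python
import subprocess
import threading

def run_parallel(commands):
    # run every shell command in its own thread and return
    # {command: exit code}; useful for long-running commands
    # like "ssh safeuser@server1.mydomain.com tail -f access.log"
    codes = {}

    def worker(c):
        codes[c] = subprocess.call(c, shell=True)

    threads = [threading.Thread(target=worker, args=(c,)) for c in commands]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return codes
```

For example, run_parallel(['ssh %s@%s uptime' % (user, s) for s in servers]) fires all the ssh sessions at once instead of one after another.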
Requirements
- For convenience, set up ssh public/private keys between the main machine and the other machines (or have them all trust each other, so you can execute from any of them); that way you can run commands on all the machines without having to enter passwords.
- Put the script on your $PATH.
- python
Security Advisory
Make sure only the desired user has read/write/execute access to the script (preferably read and execute for the owner alone, and no permissions whatsoever for anybody else: chmod 500 ssh_map_command), and keep your private ssh keys safe; change them as often as possible. This script can become a big security hole if an attacker manages to write code into it, especially if you have cronjobs invoking it. Your attacker would only need to change code here to mess up all of your machines.
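If cronjobs invoke the script, they can first verify that its permissions are still locked down before running it. A small sketch along the lines of the chmod 500 advice above (owner_only is a hypothetical helper; the 0o077 mask simply means "no permission bits for group or others"):

```python
import os

def owner_only(path):
    # True when group and others have no permission bits at all,
    # matching the recommended chmod 500 setup
    return (os.stat(path).st_mode & 0o077) == 0
```

A cronjob wrapper could refuse to run when owner_only('/path/to/ssh_map_command') is False, instead of blindly executing a possibly tampered script.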
Disclaimer and Call for Knowledge
Please, if someone knows of a standard way to map commands to multiple servers, let me know in the comments section. In my case I needed a solution, so I wrote a quick and dirty Python script and tried to secure it as best I could. By no means am I saying this is the best way to map commands; in fact it might be the least efficient, but it works well enough for my personal needs.
Also try checking out this list of command line apps built for the same purpose:
http://www.ubuntugeek.com/execute-commands-simultaneously-on-multiple-servers-using-psshcluster-sshmultixterm.html