So we took on some vendors to help us out. I wanted to give these individuals authenticated, read-only access to our git repos so they could stay current with the project, but not commit code directly (they’ll have their own repo). Googling turned up some excellent pages, and taken all together, they left me with a few options.
One option was the git:// (git daemon) protocol. It provides read-only access, but it provides it to the whole world. Not a solution. Another was git-shell: each user gets a regular SSH account on the server, but with git-shell as the login shell instead of a regular shell like bash. git-shell restricts the user to git operations, and write access is controlled with standard Linux permissions on the repo itself.
The git-shell route was what I needed. I created an SSH login for each user that needed access to the repo, set each login shell to /usr/bin/git-shell, and put each user in a group that had read-only file system permissions on the repo. Testing it out worked well: users could git clone and pull, but pushing failed, and attempting to SSH in directly failed too.
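For the record, the whole setup boils down to a few commands. Here is a sketch with a hypothetical user vendor1, group gitro, and a repo at /srv/git/project.git; adjust names and paths for your server:

# create the account with git-shell as its login shell
# (you may need to add /usr/bin/git-shell to /etc/shells)
useradd -m -s /usr/bin/git-shell vendor1
passwd vendor1

# grant read-only group access to the repo
groupadd gitro
usermod -aG gitro vendor1
chgrp -R gitro /srv/git/project.git
chmod -R g+rX,g-w,o-rwx /srv/git/project.git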
One last note: as the man page mentions, you can create a directory called git-shell-commands in the home directories of git-shell users. git-shell users will be able to run any command in this directory, and if there is a help program in the directory, it is run when a git-shell user logs in. More details on git-shell-commands here, including the location of sample git-shell commands on your server.
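For example, to greet these users with a friendly message at login, something like this should do it (vendor1 is again a stand-in):

mkdir ~vendor1/git-shell-commands
cat > ~vendor1/git-shell-commands/help <<'EOF'
#!/bin/sh
echo "This account provides read-only git access: clone and pull only."
EOF
chmod +x ~vendor1/git-shell-commands/help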
Sometimes, I want to pop onto a database server, check the status of something, and then log out. For example, if I want to check on the number of query cache free blocks, I run this long command:
% mysqladmin -u admin -p extended | grep -i qcache
Then I type in the password. Well, I grew tired of typing in the extra options, plus the password. Turns out, MySQL will look for the configuration file .my.cnf in your home directory after it looks in /etc/my.cnf (it looks in a few other places as well). So I put this in my ~/.my.cnf:
[client]
user=admin
password=secret
And now I can simply run:
% mysqladmin extended | grep -i qcache
and it works right away. Note that the password is stored in the clear.
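Given that, it’s worth at least making the file readable by no one but you:

% chmod 600 ~/.my.cnf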
Like most people, I did not know much about HTTP Keep-Alive headers other than that they could be very bad if used incorrectly. So I’ve kept them off, which is the default. But I ran across this blog post, which explains HTTP Keep-Alive, including its benefits and potential pitfalls, pretty clearly.
It’s all pretty simple, really. There is overhead in opening and closing TCP connections. To alleviate this, Apache can agree to provide persistent connections by sending HTTP Keep-Alive headers; the browser can then use a single connection to download multiple resources. But Apache won’t know when the browser is done downloading, so it simply keeps the connection open until a Keep-Alive timeout expires, 15 seconds by default. The problem is that the machine can only keep so many simultaneous connections open due to physical limitations (RAM, CPU, etc.), and 15 seconds is a long time.
To let browsers reuse connections across multiple downloads without keeping persistent connections open too long, the Keep-Alive timeout should be set to something very low, e.g. 2 seconds.
I’ve done this for static content only. Why only static content? It doesn’t really make much sense for the main page source itself, since that’s a single document; the payoff comes from the burst of images, stylesheets, and scripts the browser fetches right after it.
I’ve mentioned before that by serving all static content from dedicated subdomains, we indirectly get the benefit of being able to optimize just those subdomains. So far, that meant things like turning off .htaccess files. Now we can add enabling HTTP Keep-Alive headers to the list. The VirtualHost block might look like this now:
<VirtualHost *:80>
    ServerName static0.yourdomain.com
    ServerAlias static1.yourdomain.com
    ServerAlias static2.yourdomain.com
    ServerAlias static3.yourdomain.com
    DocumentRoot /var/www/vhosts/yourdomain.com

    KeepAlive On
    KeepAliveTimeout 2

    <Directory /var/www/vhosts/yourdomain.com>
        AllowOverride None
    </Directory>

    ExpiresActive On
    ExpiresByType text/css "access plus 1 year"
    ExpiresByType application/x-javascript "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType image/gif "access plus 1 year"
    ExpiresByType image/png "access plus 1 year"
</VirtualHost>
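To verify the header is actually being sent, curl can fetch just the response headers (the file path here is made up):

% curl -sI http://static0.yourdomain.com/css/site.css | grep -i keep-alive
# should print something like: Keep-Alive: timeout=2, max=100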
Note: the following applies to Windows Vista, but it is probably even easier on MacOS/Linux.
Is your hosts file becoming monstrous? Do you have an alias or shortcut to your hosts file because you edit it so often? Tired of manually adding every subdomain and domain you work on? I was too, and figured there must be a better way. There was.
The general idea is this: by installing a local DNS nameserver (BIND), we can set up local development domains that look like regular domains on the internet. For real domains, we’ll just forward the requests on to a real nameserver. This gives us a couple more benefits: 1) we can use the local nameserver as a caching nameserver to speed up DNS queries (in theory, I have not actually tried this), and 2) we can choose to use any upstream DNS service we wish, e.g. OpenDNS or Google DNS.
Here are the steps. First, install BIND; I installed it to C:\Windows\system32\dns. Then replace named.conf in its entirety with the following:
options {
directory ";c:\windows\system32\dns\zones";
allow-transfer { none; };
forward only;
forwarders {
//208.67.222.222; // OpenDNS
//208.67.220.220;
8.8.8.8; // Google DNS
8.8.4.4;
};
query-source address * port 53;
};
/*
logging {
channel queries_log {
file "c:\windows\system32\dns\var\queries.log";
print-severity yes;
print-time yes;
};
category queries { queries_log ; };
};
*/
zone "work.local" IN {
type master;
file "work.local.txt";
};
key "rndc-key" {
algorithm hmac-md5;
secret "xxxxxxxxxxxxxxxxxxxxxxxx";
};
controls {
inet 127.0.0.1 port 953
allow { 127.0.0.1; } keys { "rndc-key"; };
};
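The rndc secret above is x’d out. If you need to generate your own, BIND ships with rndc-confgen, which writes out a key file you can copy the secret from:

C:\> rndc-confgen -a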
Next, create the zone file work.local.txt in the zones directory (c:\windows\system32\dns\zones). Note the wildcard CNAME record.
$TTL 86400
@ IN SOA ns1.work.local. admin.work.local. (
    2008102403 ; serial
    10800      ; refresh
    3600       ; retry
    604800     ; expire
    86400 )    ; minimum
@ NS ns1.work.local.
  IN A 127.0.0.1
ns1 IN A 127.0.0.1
www IN A 127.0.0.1
* IN CNAME www
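BIND’s named-checkzone tool can validate the zone file before you go any further:

C:\> named-checkzone work.local c:\windows\system32\dns\zones\work.local.txt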
Restart the BIND service, point your network connection’s DNS server at 127.0.0.1, and flush the local DNS cache:

C:\> ipconfig /flushdns

Then make sure www.work.local resolves. If you have errors, you can uncomment the logging block in named.conf.
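You can test resolution directly against the local server; thanks to the wildcard, any subdomain of work.local should answer:

C:\> nslookup anything.work.local 127.0.0.1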
Finally, set up a VirtualHost in Apache for your development domain. Thanks to VirtualDocumentRoot, we can map any number of subdomains to project roots. Here is my VirtualHost block.
<VirtualHost *:80>
    ServerName www.work.local
    ServerAlias *.work.local
    VirtualDocumentRoot "C:/_work/%1"

    <Directory "C:/_work">
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
Create a project directory under C:\_work, for example C:\_work\awesomeapp, and drop a test index.html file in that directory. Now you should be able to repeat this last step for any new website you create! No editing of hosts files, no bouncing the webserver! Just create the project directory and it’s immediately available.
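Concretely, bringing up a new project looks like this:

C:\> mkdir C:\_work\awesomeapp
C:\> echo it works > C:\_work\awesomeapp\index.html

Then http://awesomeapp.work.local/ should serve that file right away.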
One other important note: Firefox has its own DNS cache independent of the OS. For sanity, restarting Firefox resets its DNS cache. You can also permanently disable DNS caching in Firefox.
As much as I wish we deployed builds from our continuous integration server, all but one of our products are deployed with good ol’ `svn up`. Developers generally have access to only one web server, so I needed an rsync command to propagate new code to the rest of the web servers. I wanted normal user accounts to be able to run it at any time, in any directory, with one command. Developers would then be instructed to run this command after updating any files.
So I whipped up a shell script that called rsync with some predefined options and targets. Unfortunately, in order to preserve ownership and permissions at the destination, rsync needed to run as root.
At first, I looked at the setuid bit. By making the rsync shell script owned by root and running `chmod u+s` on it to set the setuid bit, any user could execute it and it would run as root. Well, it turns out the kernel will not honor setuid on shell scripts, for security reasons. But what if I wrote a C program instead of a shell script? That actually worked and ran with root privileges, but the rsync it launched still did not run as root for some reason. So that was out.
The second solution was to put sudo in front of the rsync command in the script. I modified /etc/sudoers to allow the users group to run rsync under sudo. That worked perfectly, so if I just put this script in /usr/local/bin, I would be done. But I had already written this magnificent (two-line) C program. Why not make it even more secure (granting sudo rights on shell scripts has its own pitfalls)? Instead of allowing all users to run rsync in general under sudo, I could limit them to running only my C program, and then, in my script, replace rsync with the C program. So that’s what I did: I again modified /etc/sudoers and my shell script, threw both the script and the C executable into /usr/local/bin, and I was done.
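For reference, the /etc/sudoers entry would look something like this (the group name is an assumption; adjust for your setup):

# members of the users group may run the wrapper as root, no password prompt
%users ALL = (root) NOPASSWD: /usr/local/bin/zipsync.bin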
I named the final command `zipsync`. Here is the shell script, anonymized a bit.
#!/bin/sh
cd /var/www/vhosts
# repeat for each web server
sudo zipsync.bin \
-av --delete \
--exclude=".svn" \
--exclude="logs" \
--exclude="tmp" \
--exclude="cache" \
--exclude="*.swp" \
* 192.168.1.101:/var/www/vhosts
cd -
And here is the C program, zipsync.bin.
#include <unistd.h>

/* Pose as rsync: swap "rsync" in as argv[0], then exec it,
   passing along whatever arguments the wrapper script supplied. */
int main(int argc, char** argv)
{
    *argv = "rsync";
    return execvp(*argv, argv);
}
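Building and installing both pieces might look like this, assuming gcc and that the shell script is saved as zipsync:

% gcc -Wall -o zipsync.bin zipsync.c
% sudo cp zipsync zipsync.bin /usr/local/bin/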