Replace all copies of a file under Linux

Today, after making a change to a PHP file, I had to update all copies of that file with the changes. The following command updates every file with the same name. Note that this only works as expected if the name is unique. A theoretically better way would be to find all files with a matching hash; a rough sketch of that follows the command below.

Anyway, here's the Linux shell command. First copy the updated file into the current directory (/tmp, for example):

find /target_path/ -iname "some_file.php" -exec cp new_file.php '{}' \;
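
For completeness, here is a minimal sketch of the hash-based variant mentioned above. It assumes you still have a pristine copy of the old version (old_file.php is just an illustrative name) and GNU coreutils for md5sum:

OLD_HASH=$(md5sum old_file.php | cut -d' ' -f1)
find /target_path/ -type f -name "*.php" -print0 |
  while IFS= read -r -d '' f; do
    # Replace only files whose contents match the old version exactly.
    [ "$(md5sum "$f" | cut -d' ' -f1)" = "$OLD_HASH" ] && cp new_file.php "$f"
  done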

 

Serve.sh updated for terminal use

After publishing the previous article about serve.sh - the shell script I created to easily serve websites in development - I made some changes to make it easier to run from the terminal, for example over SSH. I thought I'd share them with you.

One of the problems with the original script was that it was optimized for a GUI setting, i.e. starting the script by clicking it on the desktop. However, I usually want to run it from a terminal. I don't like typing, so I added a line to my ~/.bashrc to make the script available from any directory:

export PATH=$PATH:~/bin
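
To pick up the new PATH in an already-open shell, reload the file:

source ~/.bashrc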

Then I moved the script into that directory and renamed it to shave another three characters off the command:

mv ~/Desktop/serve.sh ~/bin/serve

I also noticed that prompting for the site to serve was just a workaround for command-line arguments, which are now supported. It doesn't check that you actually entered an argument, though (a minimal guard is sketched after the script below). So the new script requires you to specify which directory you want to serve; to serve the current directory, simply run:

serve .

Latest script contents:

#!/bin/bash
# Resolve the requested directory to an absolute path.
my_path=$(readlink -f "$1")
# Repoint Apache's webroot symlink at it.
sudo rm /var/www
sudo ln -s "$my_path" /var/www
echo "Now serving $my_path..."
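
If you do want a guard against the missing-argument case mentioned above, a minimal sketch would be to bail out at the top of the script:

# Optional guard: abort with a usage message when no argument is given.
if [ -z "$1" ]; then
  echo "Usage: serve <directory>" >&2
  exit 1
fi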

 

Serve.sh - A shell script for serving sites

I am experimenting with using a Linux virtual machine as my web development environment of choice. I store the VM on a removable drive so that I can develop from any location without having to set up a working environment. Previously I had to check out the repositories and set up a local webserver, and I had trouble keeping things working because every configuration change had to be applied in every location. Now it is all centralized and my life is simpler.

The aim is to make working on projects as easy as possible. I have all projects checked out in a folder called /var/sites. They are mostly PHP projects, and because of my shared hosting environment they share a single Apache configuration. How can I easily serve them? Having separate virtual hosts for each project would again mean making manual changes in every location, so that was not the way to go.

Instead I created a simple shell script that creates a symbolic link from Apache's webroot to the project I am working on:


#!/bin/bash
# List the available projects and prompt for one.
echo "Available sites:"
ls /var/sites
echo
echo -n "Type site to serve: "
read site
# Abort if nothing was entered.
if [ -z "$site" ]; then
  exit
fi
# Repoint Apache's webroot symlink at the chosen project.
sudo rm /var/www
sudo ln -s "/var/sites/$site" /var/www
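
To check which project is currently being served, you can read the symlink back (myproject is just a hypothetical site name here):

readlink /var/www

which prints something like /var/sites/myproject.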

 

Rotate PHP logs

Our php.log was nearing 550 MB, so I investigated how to rotate the logs. The easiest solution seems to be logrotate (on Ubuntu Linux) with a configuration like the following.

sudo nano /etc/logrotate.d/php5

/var/log/php5/*.log {
  daily
  rotate 14
  missingok
  nocompress
  sharedscripts
  postrotate
    apache2ctl graceful
  endscript
}

You can test the configuration with a dry run using logrotate -d /etc/logrotate.d/php5, or force an immediate rotation with sudo logrotate --force /etc/logrotate.d/php5.

 

Resume rsync until finished

I noticed that one of our backups did not finish and that the temporary file rsync uses was still listed. A quick Google search indicated that rsync had lost its connection. Thanks to Ian Young's article on the subject, rsync now resumes when it is cut off, and hopefully the backup will complete.
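
For reference, here is a minimal sketch of a retry loop in that spirit; the host, paths, and exact flags are illustrative, not the ones from the article. --partial keeps interrupted transfers on disk so a rerun can pick up where it left off, and --timeout makes rsync exit when the connection stalls instead of hanging:

#!/bin/bash
# Keep rerunning rsync until it exits successfully.
until rsync -avz --partial --timeout=60 /data/backup/ user@backuphost:/backups/; do
  echo "rsync interrupted, retrying in 30 seconds..." >&2
  sleep 30
done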