Repetitive tasks are annoying, especially when the process takes time. That is exactly my current routine: my supervisor wants to see changes on our staging server immediately, but my local repository is four steps away from the staging copy. Even for the smallest font change, I need to commit to my repository, pull into a mediator repo, commit to the SVN server, and then update the staging server's SVN checkout. That is why I'm writing a simple bash script to do it for me.
My setup
Currently, I'm using GIT on my local development machine and synchronizing it with SVN via an intermediate GIT/SVN repository, just like I did before. Committing changes to SVN so that they eventually show up on the staging server takes four steps:
- Finalize my local GIT repo by committing everything to the devel branch.
- Pull the changes from my devel branch into the intermediate GIT repository on my local machine, which is also an SVN checkout.
- Commit the changes to SVN. We currently do not use SVN branches; I'm the only one who branches (on GIT).
- Update the staging server's SVN checkout to the latest SVN revision.
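On the command line, the first two steps look roughly like this. The sketch below uses throwaway repositories in a temp directory so it can run anywhere; the real paths (`~/www-repo/projectname` and the SVN checkout) and the SVN-facing steps are shown as comments, since those need my actual setup and server.

```sh
set -e

# Throwaway stand-ins for my real repositories (illustration only).
work=$(mktemp -d)

# Step 1: commit everything on the devel branch of the local repo
# (stand-in for ~/www-repo/projectname).
git init -q "$work/projectname"
cd "$work/projectname"
git checkout -q -b devel
echo "body { font-size: 13px; }" > style.css
git add .
git -c user.name=me -c user.email=me@example.com commit -q -m "Decreased banner font size"

# Step 2: pull devel into the intermediate repository
# (in my real setup this directory is also an SVN checkout).
git init -q "$work/svn_checkout"
cd "$work/svn_checkout"
git pull -q "$work/projectname" devel

# Steps 3 and 4 need the real SVN server and the staging box:
#   svn commit -m "Decreased banner font size"
#   (on the staging server) svn update
```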
Why too complex?
Yes, it is too complex! That is because our company does not mandate any of these tools, and we don't even have a deployment strategy. I had to build my own deployment strategy for new projects (it was seamless using GIT, and it was fast!), but for old projects (projects that existed before I joined the company), there is only one way: FTP upload.
First, to upload freely (with a GUI-based client), I don't want those `.svn` directories in the way. It is annoying to skip those files individually in every directory. With GIT, there is only one `.git` directory, so I can upload all the files at once!
Second, it is also annoying when your peers' code conflicts with yours. Using GIT as a secondary repository, you can easily throw away somebody's code (or even yours) and merge changes gracefully. That is why I have this setup.
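For example, discarding an unwanted edit with GIT is a one-liner. A toy illustration (throwaway repo in a temp directory; the file name is made up):

```sh
set -e
demo=$(mktemp -d)
git init -q "$demo"
cd "$demo"

# Commit a baseline version of a file.
echo "original banner markup" > banner.html
git add .
git -c user.name=me -c user.email=me@example.com commit -q -m "baseline"

# A conflicting edit shows up that I decide to throw away...
echo "a peer's change I do not want" > banner.html

# ...and GIT restores the committed version in one command:
git checkout -- banner.html
cat banner.html   # prints "original banner markup"
```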
Another reason I'm using GIT for my local repository is that it is faster for mini-commits. I commit a lot, usually very small but significant changes; to avoid data loss, I need to commit frequently. With SVN, mini-commits would be too time-consuming. I'd rather bulk-commit and go outside for a while, waiting for the commit to complete.
Why automate? Why not?
First, why not automate? I should never automate everything in this process, simply because GIT and SVN differ and conflicts need to be resolved properly by hand, so blindly automating them is a bad idea. For upstream updates (updates from peers via SVN), I need to inspect their changes before merging them into my GIT repository, which is why upstream updates stay manual.
Why automate? For downstream updates (my updates that will be committed to the company's SVN server), if I'm sure I'm the only one making changes at that time, it is a good task to automate. Hence this very simple bash script.
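Since the script assumes nobody else is committing at that moment, a small guard could double-check before the automated `svn commit`. With `svn status -u` (show-updates), items changed on the server are flagged with a `*` in the ninth column, so a helper can scan the output for that flag. This is only a sketch tested against sample output (the file names are made up), not part of my script:

```sh
# Return success (0) when the given `svn status -u` output shows
# no incoming changes, i.e. no '*' flag in the ninth column.
svn_up_to_date() {
    ! printf '%s\n' "$1" | grep -q '^.\{8\}\*'
}

# Made-up sample lines to show the idea:
clean="M               965   foo.php"
stale="        *       981   bar.php"

svn_up_to_date "$clean" && echo "safe to commit"
svn_up_to_date "$stale" || echo "server has newer changes; update first"
```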
The code
After several searches on the internet, I finally came up with this. It is just a bunch of if statements, kept here for future reference.
```sh
#!/bin/bash
# Sync the local GIT repo to SVN with a single commit message.

COMMENT=$1

if [[ -z "$COMMENT" ]] ; then
    echo "No comment passed"
    exit 1
fi

# Step 1: commit everything in the local GIT repository.
cd ~/www-repo/projectname
if [[ $? -ne 0 ]] ; then
    echo "Cannot CD to ~/www-repo/projectname"
    exit 1
fi
git add .
git commit -m "$COMMENT"

# Step 2: pull the devel branch into the intermediate GIT/SVN repository.
cd ~/www-repo/svn_checkout/projectname
if [[ $? -ne 0 ]] ; then
    echo "Cannot CD to ~/www-repo/svn_checkout/projectname"
    exit 1
fi
git pull ~/www-repo/projectname devel

# Step 3: commit the result to the SVN server.
svn commit -m "$COMMENT"

echo "Done..."
```
To run it, I simply change directory to the local GIT repository, execute the script, and pass the commit message as a parameter. Assuming the script is saved as `~/scripts/sync-projectname-svn.sh` (and made executable with `chmod +x`), we run it like this:
```sh
# cd /path/to/project/repo
cd ~/www-repo/projectname
~/scripts/sync-projectname-svn.sh "Decreased banner font size"
```