Friday, October 19, 2012

Reformatting for 80 characters in vim

After reading The Pragmatic Programmer and Clean Code, I've pretty much settled on using text files (specifically in markdown format) for all my personal notes and documentation, and editing them with vim.  That being said, I like to keep my files limited to 80 columns wide.  That way I can easily look at and review multiple documents simultaneously in splits and vertical splits.

In my .vimrc I have the following setting to get the auto wrapping working for me:

set textwidth=80

That way, as I type along, vim keeps track of the column I'm in, and when I get to the 80th column it automatically moves me to the next line.  Pretty handy.
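For reference (this reflects vim's defaults as I understand them; see :help formatoptions), the auto-wrap behavior depends on the 't' flag in the formatoptions setting, which is normally on:

```vim
set textwidth=80        " wrap at 80 columns
set formatoptions+=t    " auto-wrap text as you type, using textwidth
```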

Here's my problem though: if I change some text that I've already entered and reword it, my paragraph loses its nicely formatted column width.  Here's an example, but with the textwidth setting set at 40:

Before my edit change:

Lorem ipsum dolor sit amet, consectetur
adipiscing elit.  Proin neque sapien,
facilisis eget tincidunt ut, porttitor
laoreet lacus. Class aptent taciti
sociosqu ad litora torquent per conubia
nostra, per inceptos himenaeos.  Etiam
semper elementum congue.

After my edit change:

Lorem ipsum dolor sit amet, consectetur
adipiscing elit.  Proin neque sapien,
facilisis eget tincidunt ut, porttitor
laoreet lacus. Class aptent taciti
sociosqu ad litora torquent per conubia
nostra, per inceptos himenaeos.  I
forgot some text. Etiam
semper elementum congue.

Notice the problem:  After editing the paragraph, the second-to-last line doesn't reach the 40th column before wrapping.  This is a little annoying to me because I feel the need to reformat the paragraph to get it back to looking like the "before".  It's not too big of a deal when I only have to reformat a line or two, but if I've changed something high up in a fairly long paragraph, reformatting dozens of lines can get tedious.

Here's a little tip that I stumbled across that can make that reformatting go a bit quicker:

1.  Select the whole paragraph with V and then h, j, k and/or l.
2.  Then strip the new-line characters with J.
3.  Finally reformat it with gq.
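As an aside (based on my reading of vim's help, not part of the original tip): gq takes a motion, so the select/join/reformat dance can often be collapsed into a single command:

```vim
" reformat the paragraph under the cursor in one motion
gqip
" or, after selecting with V and h/j/k/l, reformat the selection
" directly with gq -- the J join step isn't strictly needed
```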

Hope you find this helpful!

Friday, July 6, 2012

CGI Script to display clone urls


After some discussion, I was able to convince my client to convert from gitorious to gitolite.  One nice feature that people like about gitorious is that the web interface provides an easy way to look up urls for cloning repositories.

In my mind, that's a pretty legitimate need.  To that end, I threw together this bash script, which acts as a CGI, and dropped it in the cgi-bin directory of the server that's running gitolite.  Hope you find this helpful.

#!/bin/bash

REPOSITORY_DIR="/home/git/repositories/"
URL_PREFIX="git clone git@internal-build-server:"

cat <<DONE
Content-type: text/html

<html>
<head>
<title>Repository URLs</title>
<link rel="stylesheet" type="text/css" href="/index.css" />
</head>
<body>
<div id="page_container">
<h1>
Repository URLs
</h1>
<p>
Repository URLs on this server follow a specific pattern.  The pattern is
as follows:
</p>
<center>
git@internal-build-server:<i><font color="darkblue">{category}</font></i>/<i><font color="darkblue">{project}</font></i>
</center>
<p>
These URLs are both pull and push URLs.  You do not need separate URLs
for pulling and pushing.  Access control will be handled by a
server git update hook that is provided by gitolite.
</p>
<p>
In an effort to make life a little easier in locating your URLs, this script
enumerates URLs for the repositories located on this machine below.
</p>
DONE

CATEGORIES=$(find $REPOSITORY_DIR -maxdepth 1 -mindepth 1 -type d -not \
-iname '*.git' | sed -e "s|$REPOSITORY_DIR||g")

for CATEGORY in $CATEGORIES; do
echo "<h2>Category: $CATEGORY</h2>"
CAT_REPOSITORIES=$(find $REPOSITORY_DIR$CATEGORY -type d -iname \
'*.git' | sed -e "s|$REPOSITORY_DIR||g" -e 's/.git$//g')
for REPOSITORY in $CAT_REPOSITORIES; do
echo "$URL_PREFIX$REPOSITORY<br />"
done
done

ROOT_REPOSITORIES=$(find $REPOSITORY_DIR -maxdepth 1 -mindepth 1 -type d \
-iname '*.git' | sed -e "s|$REPOSITORY_DIR||g" -e 's/.git$//g')
echo "<h2>Uncategorized Repositories</h2>"
for REPOSITORY in $ROOT_REPOSITORIES; do
echo "$URL_PREFIX$REPOSITORY<br />"
done

cat <<DONE
<br />
</div>
</body>
</html>
DONE
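The enumeration logic in the script can be tried on its own, without gitolite.  Here's a minimal sketch (the repository names are made up for illustration):

```shell
# build a throwaway repository tree: one category dir, one root-level repo
REPOSITORY_DIR="$(mktemp -d)/"
mkdir -p "${REPOSITORY_DIR}tools/builder.git" "${REPOSITORY_DIR}misc.git"

# directories directly under the root that are NOT *.git are the categories
find "$REPOSITORY_DIR" -mindepth 1 -maxdepth 1 -type d -not -iname '*.git' \
    | sed -e "s|$REPOSITORY_DIR||g"
# prints: tools
```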

Monday, June 25, 2012

Bash script for pulling/fetching multiple git clones


In my current assignment, I'm acting as the main build guy for a number of projects that use git for source control.  As such, I find it very useful to keep all my git clones up to date whether I'm actively developing in them or not.  Additionally, I need to review the changes other developers are committing, so I'd like to get a summary of recent git activities.

Over time, I've put this little bash script together to help me with that.  I've included the script in this posting so I can remember later what I did and why.  Disclaimer: I wrote and run this script in bash on Linux (not via git-bash in Windows).  Also, I'm using Zenity for a nicer UI look/feel.

#!/bin/bash

pushd ~/dev/repos > /dev/null

# The log file
PULL_LOG="$(mktemp)"

# Get a list of all the clones in this directory.
CLONES=$(find -maxdepth 2 -mindepth 2 -type d -name ".git" | sed -e 's|\./||' -e 's|/\.git||')

# Get a list of all the branches in clone/branch format
ALL_BRANCHES=$(for clone in $CLONES; do cd $clone; for branch in $(git branch -l | sed 's/\s\|\*//g'); do echo $clone/$branch; done; cd ..; done)

# Count the branches
BRANCH_COUNT=$(echo $ALL_BRANCHES | sed 's/ /\n/g' | wc -l)

# Start the log file
echo "Pull log for $(date)" >> $PULL_LOG
echo "--------------------------------------------------------------------------------" >> $PULL_LOG

# Function for piping output to the zenity progress dialog
function pull_clones() {
    clone_counter=0
    for clone in $CLONES; do
        echo "Pulling branches for clone $clone" >> $PULL_LOG
        echo "--------------------------------------------------------------------------------" >> $PULL_LOG
        cd $clone
        echo "# Fetching changes for clone $clone"
        git fetch origin 2>> $PULL_LOG
        for branch in $(git branch -l | sed 's/\s\|\*//g'); do
            echo "# Merging branch $clone/$branch"
            echo "Merging branch $branch" >> $PULL_LOG
            git checkout $branch 2> /dev/null
            git merge origin/$branch >> $PULL_LOG
            echo | awk '{print count / total * 100}' count=$clone_counter total=$BRANCH_COUNT
            let clone_counter=clone_counter+1
        done
        cd ..
        echo >> $PULL_LOG
    done
}

# Do it
pull_clones | zenity --progress --title='Pulling development clones' --width=512
zenity --text-info --filename=$PULL_LOG --title="Pull log" --width=500 --height=450

# Clean up
rm $PULL_LOG

popd > /dev/null
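One line worth calling out is the awk one-liner that feeds zenity its progress percentage.  Stripped down to its essentials (my own minimal recreation, not part of the script), it works like this:

```shell
# awk receives the two counters as variable assignments after the program;
# the piped echo supplies the single input line awk needs to run its rule
clone_counter=3
BRANCH_COUNT=10
echo | awk '{print count / total * 100}' count=$clone_counter total=$BRANCH_COUNT
# prints: 30
```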



Wednesday, June 6, 2012

Patching tip using mocks in python unit tests

I use the mock library by Michael Foord in my python unit tests, and one problem always plagued me.  Here's the problem and the solution.

Sometimes when I import a package/module in my code I use this pattern (let's call it pattern A):

"""file_module_pattern_a.py"""
import os

def get_files(path):
    """Return list of files"""
    return os.listdir(path)


Other times, I use this pattern (let's call it pattern B):

"""file_module_pattern_b.py"""
from os import listdir

def get_files(path):
    """Return list of files"""
    return listdir(path)

Note the difference.  In pattern A, I import the whole os package, while in pattern B, I only import the listdir function.  Now in my unit tests, here's what I use for pattern A:

"""Unit tests for module file_module_pattern_a"""

from file_module_pattern_a import get_files
from unittest import TestCase
from mock import patch, sentinel

class StandaloneTests(TestCase):
    """Test the standalone functions"""
    
    @patch('os.listdir')
    def test_get_files(self, mock_listdir):
        """Test the get_files function"""
        test_result = get_files(sentinel.PATH)
        mock_listdir.assert_called_once_with(sentinel.PATH)
        self.assertEqual(test_result, mock_listdir.return_value)

This works great.  The only problem is... if I use pattern B with this unit test, the mock_listdir never gets called.  The unit test tries to use the REAL os.listdir function.

Here's the issue at hand.  When I use pattern B, the import binds the function into my module's namespace, not into os's.  As a result, the patch target needs to reference my module, not os.  Here's the correct unit test patch syntax:

"""Unit tests for module file_module_pattern_b"""

from file_module_pattern_b import get_files
from unittest import TestCase
from mock import patch, sentinel

class StandaloneTests(TestCase):
    """Test the standalone functions"""
    
    @patch('file_module_pattern_b.listdir')
    def test_get_files(self, mock_listdir):
        """Test the get_files function"""
        test_result = get_files(sentinel.PATH)
        mock_listdir.assert_called_once_with(sentinel.PATH)
        self.assertEqual(test_result, mock_listdir.return_value)
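The namespace point can be demonstrated without any test scaffolding.  This sketch uses the stdlib unittest.mock (which later absorbed Foord's mock library); the behavior is the same:

```python
import os
from os import listdir  # binds the function into THIS module's namespace
from unittest import mock

# Patching 'os.listdir' swaps the attribute on the os module...
with mock.patch('os.listdir', return_value=['fake']):
    print(os.listdir('.'))        # ['fake'] -- the mock answers
    # ...but the locally bound name still points at the real function
    print(listdir is os.listdir)  # False
```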

Monday, June 4, 2012

Using SSHFS from OSX

SSHFS is a FUSE (Filesystem in Userspace) plugin that allows you to mount a remote drive/filesystem on your system via SSH.  This is a great way to transfer files securely over the public Internet.  Best of all, it's free!

OSX supports FUSE through a program called OSXFUSE (or Fuse for OSX).  You can download it at:


Note, you'll have to download the plugins for OSXFUSE separately.  The two most popular are SSHFS and NTFS-3G (a plugin that allows you to mount NTFS volumes in read/write mode... which I will not be covering in this post).

Computers hosting sshfs directories don't need to have any special software installed on them.  They simply need to be running sshd (the secure shell daemon) and not have file transfers disabled.  Usually, sshd is configured by default to have file transfers enabled.

One thing that often confuses people coming from the Mac world is that SSHFS has no GUI.  So how do you mount remote drives?  Well, you'd normally do it from a terminal prompt.

In this blog entry, I will go over three ways to mount sshfs drives/filesystems using SSHFS:
  1. Using terminal.app and some bash commands
  2. Using Automator
  3. By creating an Alfred.app extension

Get the software

Before we begin, first make sure you have the OSXFUSE software installed.  Download it from the link listed above.  You'll know you have it properly installed when you see it in your System Preferences:

You'll also want to ensure that you've installed the SSHFS plugin from the same site.  To verify that you've installed it correctly, you'll need to run sshfs -h from a terminal window:
If you've got these two items installed, you're ready to go!

First way: From the bash prompt

This method is the most traditional way to mount a sshfs filesystem... and actually, the other two methods will be invoking the same commands; we'll just be hiding them behind a graphical interface.

First start a terminal window and create a directory to house the filesystem.  I always put my filesystems in the /Volumes directory.  You aren't required to put your directory there.  You can put it anywhere you'd like.  I just do so out of convention.  Here's the command to do so:

steve@l00-1nsv01 $ mkdir /Volumes/ssh_fs_mount

I named my mount ssh_fs_mount.  You can name yours anything you'd like.
Then you need to run the sshfs command to mount the remote computer's directory to your hosting directory. The command takes the following format:

sshfs username@hostname:/path/to/directory /local/directory

...where username is your username on the remote computer, hostname is the name of the remote computer that contains the directory you'd like to mount, /path/to/directory is the full path to that directory on the remote computer, and /local/directory is the directory on your local computer that will house the mount.  Maybe an example will make it clearer.

I have a Linux computer that I do a lot of development on.  It's called ubuntu64.local.  My username on that computer is steve, and my development folder on that computer is at /home/steve/dev.  Here's the sshfs command that I would issue to mount my development folder from the Linux computer into the /Volumes/ssh_fs_mount folder on my Mac:

sshfs steve@ubuntu64.local:/home/steve/dev /Volumes/ssh_fs_mount

Let's try it real quick:
A couple of items to note...  First, I'm prompted for a password.  This is prompting me for my password on the remote computer (ubuntu64.local in this instance), not for my Mac password.  This is the remote computer verifying that I'm indeed steve@ubuntu64.local and not some other impostor.  If I had key-based authentication working between my Mac account and my Linux account, I would not be prompted for the password.  I normally do have key-based authentication between these two accounts, but I turned it off for this demonstration.  If you're not using key-based authentication when ssh'ing between computers... you should.  In fact, the other two methods in this blog entry (using Automator and the Alfred extension) assume you are using key-based authentication and won't prompt for a password.  You can find out more about setting up key-based authentication at any of these links:


Second, there's no other output from the command.  No output means that sshfs was able to successfully complete the mount.  If there had been a problem, sshfs would have complained with error messages.  Once the mount is completed, the drive should appear on the desktop:


It should behave like a normal Mac drive.  You can get info on it.  You can browse it in Finder.  You can even add/edit/remove files (assuming your account on the remote computer has the appropriate privileges).
When you're done and want to unmount the drive, simply right-click on it and select Eject.


Note, when the SSHFS volume has been ejected, OSX automatically deletes the /Volumes/ssh_fs_mount directory.  If you put your mounting directories in /Volumes, you'll always have to recreate them after ejecting.

Second Way: Create an Automator task

Using Automator, you can create a workflow to automate the steps you did in the first method.  After all, that's what Automator is for... automating repetitive things.

First, start Automator and create a new Application workflow:
Next, in the Text Library, drag the Ask For Text action to the workflow.
Check the Ignore this action's input and Require an answer checkboxes.  In the question textbox, enter Enter a remote sshfs url.
Now, from the Utilities library, drag the Run Shell Script action to the workflow and drop it below the Ask for Text action.
Change the Pass input pulldown from "to stdin" to "as arguments".  Then paste the following text into the textarea:

volume_name=/Volumes/sshfs_volume_$$
mkdir $volume_name
/usr/local/bin/sshfs $1 $volume_name
if [ "$?" == "0" ]; then
    open $volume_name
else
    echo "Unable to mount sshfs volume."
    rmdir $volume_name
fi

Save it to your desktop as SSHFS Workflow.  It should now look like this:
Now, if you run it from your desktop, it will prompt you for a sshfs url.  The format is the same as in the bash method.  I'll reuse my example:

steve@ubuntu64.local:/home/steve/dev

Again, like I mentioned before, this method assumes that you've already set up key-based authentication between your Mac account and your remote account.
If all goes well, Automator will automatically open the mounted volume.  Just like the first method, when you're done, you simply right-click the volume and eject it via Finder.

Third Way: Create an Alfred Extension

Now if you're running Alfred, you're probably thinking, "I can just run the automator task from Alfred and be done with it."  Yes.  You could.  But you can also create an extension.  That way you can enter the sshfs url directly into the Alfred window and save yourself one last data-entry step.

To create the extension go to the Extensions tab of the Alfred preferences window.  Click the + and select Shell Script.
In the Extension Name field, enter SSHFS and click Create.
In the Title field, enter SSHFS Mounter.  In the Description field, enter Mount a sshfs volume.  Make sure the Keyword checkbox is checked and then enter sshfs in the textbox next to it.  Make sure the Silent checkbox is checked.  Finally, in the Command field, enter the following text:

volume_name=/Volumes/sshfs_volume_$$
mkdir $volume_name
/usr/local/bin/sshfs {query} $volume_name
if [ "$?" == "0" ]; then
    open $volume_name
else
    echo "Unable to mount sshfs volume."
    rmdir $volume_name
fi

It should look like this:
If you have Growl installed on your Mac, click the Advanced button and check the Display script output in Growl checkbox.
Save the extension and close the preferences window.  Now from the Alfred prompt, you can enter sshfs your_sshfs_url and hit enter, where your_sshfs_url is an actual sshfs url.  Here's me mounting my ubuntu64.local system's tmp directory:


Just like the Automator method, the Alfred method presumes you are using key-based authentication between your Mac account and the remote computer's account.  It should automatically open the folder you mounted.  When you're done, simply right-click on the volume and eject it.

Wednesday, May 30, 2012

Run multiple python versions on your system

I'm a software development consultant.  I write python code (as well as code in other languages) for many clients, and I don't get to dictate what their environments look like.  I've got clients running python as old as 2.4 while others are on the bleeding edge.  Additionally, each client may have their own packages installed, as well as differing lists of third party packages.

This post is a description of how I went about getting multiple versions of python installed on my ubuntu development machine and how I go about managing different package sets for different clients.

Get multiple pythons installed

Ubuntu typically only supports one python 2.x version and one 3.x version at a time.  There's a popular ppa (personal package archive) called deadsnakes that contains older versions of python.  You can find it:


To install it (per the instructions in the link above), you do the following:

steve@ubuntu64 ~ $ sudo add-apt-repository ppa:fkrull/deadsnakes

Then you need to update your cache:

steve@ubuntu64 ~ $ sudo apt-get update

Finally, simply install the other versions (I'm running on ubuntu 12.04 LTS, so I have python 2.7 already):

steve@ubuntu64 ~ $ sudo apt-get install python2.4 python2.5 python2.6

If you're following along, we now have python versions 2.4 through 2.7 installed on the computer.  If you run python, you'll see the default version is still 2.7.


steve@ubuntu64 ~ $ python
Python 2.7.3 (default, Apr 20 2012, 22:39:59) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

Here's why:  Each python version is stored in /usr/bin as python2.X, where X is the version.  There is a symbolic link named python that points to the version you want to be the default.  Instead of typing python from the bash prompt, you could just as easily type python2.5:


steve@ubuntu64 ~ $ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root       9 Apr 17 13:20 /usr/bin/python -> python2.7
lrwxrwxrwx 1 root root       9 Apr 17 13:20 /usr/bin/python2 -> python2.7
-rwxr-xr-x 1 root root 1216520 May 21 12:13 /usr/bin/python2.4
-rwxr-xr-x 1 root root 1403624 May  3 00:17 /usr/bin/python2.5
-rwxr-xr-x 1 root root 2652056 May 12 08:43 /usr/bin/python2.6
-rwxr-xr-x 1 root root 2993560 Apr 20 19:37 /usr/bin/python2.7


steve@ubuntu64 ~ $ python2.5
Python 2.5.6 (r256:88840, May  3 2012, 04:16:14) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 


A point of caution: I would not mess with the symbolic link.  Ubuntu runs python for many internal maintenance scripts, and those scripts are expecting the python version that shipped with ubuntu.

Use virtualenv to manage your python installations and package sets

So now that you have multiple versions of python on your system, how do you manage them?  How do you keep packages installed for one version separate from packages installed for another?  What if you want to run one version of django for client X and a different version for client Y?

That's where virtualenv comes in.  If you're a ruby programmer, this is analogous to rvm.  Virtualenv lets you manage the python versions and package installations separately for different projects or clients.

Installing virtualenv is simple

As always, you're just a single apt-get command away from having virtualenv ready to go:

steve@ubuntu64 ~ $ sudo apt-get install python-virtualenv

That's it.  Virtualenv is ready to go now.

Quick example

Say you're starting a new project for a client.  They are running python2.5 and want to use the mock, nose and coverage packages for testing.  Here's a walkthrough of how to use virtualenv to manage the project.

First, let's create a directory for the project:

steve@ubuntu64 ~ $ mkdir -p ~/dev/project1
steve@ubuntu64 ~ $ cd ~/dev/project1

Next, run virtualenv to create the environment for the project:


steve@ubuntu64 ~/dev/project1 $ virtualenv -p /usr/bin/python2.5 .env
Running virtualenv with interpreter /usr/bin/python2.5
New python executable in .env/bin/python2.5
Also creating executable in .env/bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
steve@ubuntu64 ~/dev/project1 $ 

This command tells virtualenv to create a .env directory and to place a copy of the 2.5 version of python in it.  This copy of python 2.5 is brand-spankin' new.  It doesn't have any packages (beyond the standard library) installed.  You will need to install them yourself.  Any packages that you install in this instance of python will not be available to the main python installation or to other virtualenv instances.

Before you can use this new copy, you need to activate it:


steve@ubuntu64 ~/dev/project1 $ source .env/bin/activate
(.env)steve@ubuntu64 ~/dev/project1 $

The activate script manipulates your path environment variable, placing the new python instance first in your path.  This makes it so that when you run python, it will use the version from your instance:


(.env)steve@ubuntu64 ~/dev/project1 $ which python
/home/steve/dev/project1/.env/bin/python
(.env)steve@ubuntu64 ~/dev/project1 $ python
Python 2.5.6 (r256:88840, May  3 2012, 04:16:14) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

Also, notice that your prompt starts with (.env).  This tells you that you're running with the virtualenv instance activated.  To install packages in your instance, use the pip command:


(.env)steve@ubuntu64 ~/dev/project1 $ pip install mock nose coverage
Downloading/unpacking mock
  Downloading mock-0.8.0.tar.gz (749Kb): 749Kb downloaded
  Running setup.py egg_info for package mock
    warning: no files found matching '*.png' under directory 'docs'
    warning: no files found matching '*.css' under directory 'docs'
    warning: no files found matching '*.html' under directory 'docs'
    warning: no files found matching '*.js' under directory 'docs'
Downloading/unpacking nose
  Downloading nose-1.1.2.tar.gz (729Kb): 729Kb downloaded
  In the tar file /tmp/pip-dH_WYa-unpack/nose-1.1.2.tar.gz the member nose-1.1.2/doc/doc_tests/test_selector_plugin/support/tests/mymodule/my_function$py.class is invalid: 'filename None not found'
  In the tar file /tmp/pip-dH_WYa-unpack/nose-1.1.2.tar.gz the member nose-1.1.2/doc/doc_tests/test_restricted_plugin_options/restricted_plugin_options.rst.py3.patch is invalid: 'filename None not found'
  Running setup.py egg_info for package nose
Downloading/unpacking coverage
  Downloading coverage-3.5.2.tar.gz (115Kb): 115Kb downloaded
  Running setup.py egg_info for package coverage
    no previously-included directories found matching 'test'
Installing collected packages: mock, nose, coverage
  Running setup.py install for mock
    warning: no files found matching '*.png' under directory 'docs'
    warning: no files found matching '*.css' under directory 'docs'
    warning: no files found matching '*.html' under directory 'docs'
    warning: no files found matching '*.js' under directory 'docs'
  Running setup.py install for nose
    Installing nosetests script to /home/steve/dev/project1/.env/bin
    Installing nosetests-2.5 script to /home/steve/dev/project1/.env/bin
  Running setup.py install for coverage
    building 'coverage.tracer' extension
    gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.5 -c coverage/tracer.c -o build/temp.linux-x86_64-2.5/coverage/tracer.o
    coverage/tracer.c:3:20: fatal error: Python.h: No such file or directory
    compilation terminated.
    **
    ** Couldn't install with extension module, trying without it...
    ** SystemExit: error: command 'gcc' failed with exit status 1
    **
    no previously-included directories found matching 'test'
    Installing coverage script to /home/steve/dev/project1/.env/bin
Successfully installed mock nose coverage
Cleaning up...
(.env)steve@ubuntu64 ~/dev/project1 $

To see that the packages have been installed, simply use them:




(.env)steve@ubuntu64 ~/dev/project1 $ python
Python 2.5.6 (r256:88840, May  3 2012, 04:16:14) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mock
>>> import nose
>>> import coverage
>>>

When you're done working on the project, deactivate it.  You can always come back later and activate it again.


(.env)steve@ubuntu64 ~/dev/project1 $ deactivate
steve@ubuntu64 ~/dev/project1 $


Notice that when you deactivate the environment, your packages are no longer available:


steve@ubuntu64 ~/dev/project1 $ python
Python 2.7.3 (default, Apr 20 2012, 22:39:59) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mock
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named mock
>>> import nose
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named nose
>>> import coverage
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named coverage
>>>

You can create as many virtualenv environments as you like.  I create one for each project that I work on.


Bonus material


To make life even easier, here are a couple of additional things I do that you might find helpful!

First, once you've started to use virtualenv with some frequency, you get tired of downloading and installing the same packages over and over.  Pip has the ability to cache your downloaded packages for reuse.  To do that, you'll need to create a directory to store the downloaded packages in:

steve@ubuntu64 ~ $ mkdir ~/.pip_download_cache

Then you'll need to set a variable to inform pip of the new directory.  Add the following to your .bashrc file:

export PIP_DOWNLOAD_CACHE=/home/steve/.pip_download_cache

Now when you do a pip install, it will keep the downloaded files in the ~/.pip_download_cache directory.  The next time you do a pip install of the same package, it will just use the copy from the directory instead of downloading it again.


Second, it can be tedious to always have to type source .env/bin/activate every time you want to activate an environment.  Since I always put my virtual environments in a .env directory, I can count on the activation command always being the same.  So I created an alias for it.  I added the following to my ~/.bash_aliases file:

alias activate='source .env/bin/activate'

Now once I cd into the project's directory, I simply type activate to activate my virtual environment.






Friday, May 25, 2012

File encryption with vim

You may not know it, but you can encrypt your files using vim... and it's pretty easy to do.

Turning encryption on for a file

If you're editing a file, to encrypt it, simply enter X at the : prompt.

Vim will then prompt you for a key.  A key is the password for decrypting the file in the future.  You'll have to enter it twice to ensure you typed it correctly.

Once you've entered your key, the next time you save/write, the file should be encrypted; check the status text at the bottom to be sure.

Once the file is encrypted, it will be unreadable to others.
Vim auto-detects whether files are encrypted.  If you try to open an encrypted file, vim will prompt you for the key.
Vim uses whatever you enter for the key to decrypt the file.  If you enter the correct key, things will look great.  If you enter an invalid key, vim will present you with garbage to edit.
The encryption will stay on for the file until you tell vim you'd like to remove it.

Be mindful of your key.  If you forget it, you will not be able to retrieve the contents of your file.

Turning off encryption for a file

Turning encryption off is equally easy.  Simply set the key setting to nothing with the command:

:set key=

Be sure to write the file.  Note that it is no longer encrypted (check the status text).

Wait!  Before you go

As of vim 7.3, vim uses one of two encryption methods: zip and blowfish.  Zip is the same encryption that PkZip uses and is somewhat weak (can be cracked).  Conversely, blowfish is more contemporary and is much harder to crack.  Zip, unfortunately, is the default encryption method in vim.  I would strongly suggest that you use blowfish when encrypting your files.  To do this is simple enough.  Simply enter the command:

:set cm=blowfish

I put it in my .vimrc file so it's always set.

Thursday, May 24, 2012

Some light string manipulation in bash

This post is more for me than you.  I had to do a little bash work this morning and thought I'd keep a record of some of the syntax and concepts that I used in case I have to do more in the future.

There are two files here: a function library and a script that consumes it.  The function library is used to do different kinds of string justification (left justify, right justify and centering).  Kind of like 'echo' on steroids.  This could have easily been done in python, perl or ruby, but I wanted it to stay in bash this time to keep the calling scripts uniform.

Here's the sample file that consumes the library:

#!/bin/bash

. echo_helper

x='Steve is here!'
left_justify "$x" 40 .
right_justify "$x" 40 .
center_justify "$x" 40 .

echo $(pad_string 40 -)

x='Steve was here.'
left_justify "$x" 40 .
right_justify "$x" 40 .
echo $(center_justify "$x" 40 .)

And here's what the output should look like:

Steve is here!..........................
..........................Steve is here!
.............Steve is here!.............
----------------------------------------
Steve was here..........................
.........................Steve was here.
............Steve was here..............

Finally, here's the echo_helper library code:



#!/bin/bash

# Syntaxes to remember:
#
# String length of a variable:  ${#varname}
# String offsetting of a variable: ${varname:offset} or ${varname:offset:length}


# Usage: pad_string LENGTH {PADDING_CHARACTER=' '}
function pad_string {
    local line=''
    local length=$1
    local padding_character=' '
    if [[ "$2" != "" ]]; then
        padding_character=$2
    fi
    while ((${#line} < $length)); do
        line="$line$padding_character"
    done
    echo -n "$line"
}


# Usage: left_justify MESSAGE LENGTH {PADDING_CHARACTER=' '}
function left_justify {
    local message=$1
    local length=$2
    local padding_character=' '
    if [[ "$3" != "" ]]; then
        padding_character=$3
    fi
    echo "$message$(pad_string $((length - ${#message})) $padding_character)"
}


# Usage: right_justify MESSAGE LENGTH {PADDING_CHARACTER=' '}
function right_justify {
    local message=$1
    local length=$2
    local padding_character=' '
    if [[ "$3" != "" ]]; then
        padding_character=$3
    fi
    echo "$(pad_string $((length - ${#message})) $padding_character)$message"
}


# Usage: center_justify MESSAGE LENGTH {PADDING_CHARACTER=' '}
function center_justify {
    local message=$1
    local length=$2
    local padding_character=' '
    if [[ "$3" != "" ]]; then
        padding_character=$3
    fi
    local half_length=$((length / 2))
    local half_message_length=$((${#message} / 2))
    local padding_length=$((half_length - half_message_length))
    padding="$(pad_string $padding_length $padding_character)"
    message="$padding$message$padding"
    if [[ ${#message} -gt $length ]]; then # handle even/odd length issue
        echo "${message:1}"
    else
        echo "$message"
    fi
}
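The "syntaxes to remember" from the library's header comment can be exercised directly at a bash prompt.  A quick sketch using one of the sample strings:

```shell
s='Steve was here.'
echo "${#s}"      # string length: prints 15
echo "${s:6}"     # offset only: prints "was here."
echo "${s:0:5}"   # offset and length: prints "Steve"
```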