Archiving on Microsoft Outlook 2010

Google Apps is my primary choice for email at home; however, Microsoft Outlook 2010 is the staple mail client supplied on my corporate laptop. We’re pretty lucky, as the mailbox limit is generous compared to previous organisations I’ve worked for – I’ve had anything from 50MB to 150MB on average, but even my current 2GB limit won’t hold everything. Since starting at Savvis in 2011, I’ve adopted an archive-everything policy rather than deleting any messages – this has been, and continues to be, a very useful source of information: a record of all previous conversations, and a reference library.

The downside to keeping effectively everything you send and receive is a constantly growing storage requirement – old data needs to be taken out of the mailbox and stored offline in PST files. If you’ve ever used archiving in Outlook, something you may not have realised is that messages are archived based on their received date or their last modified date and time, whichever is later. This is a problem with the way I work, since I move messages between folders and update categories on a regular basis, all of which updates the last modified date. I have also recently been trialling Taglocity and re-tagging a lot of mail, which ended up changing the last modified date on 95% of the mail in my mailbox – when I ran an archive on all mail over 3 months old, hardly any of it was moved out to the archive.

Fortunately, it is possible to change the default behaviour by adding the following registry key:

Key: HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\Preferences
Name: ArchiveIgnoreLastModifiedTime
Value: 1
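If you’d rather not edit the registry by hand, the same change can be applied by importing a small .reg file – a sketch of the value above (double-check that the 14.0 version path matches your Office install):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\Preferences]
"ArchiveIgnoreLastModifiedTime"=dword:00000001
```

Save it with a .reg extension and double-click it to import.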

This removes the check against the last modified date, so only the received date is considered when messages are processed as part of the archive. In Outlook 2010, this feature was added in KB2516474, so make sure you have that patch installed. Once I had made this change and re-run the archive, 600MB of data was moved out to my local PST. More information on this registry change can be found in KB2553550.


Setting Up Vagrant for Octopress

Back in the summer of 2012, I switched my site from Posterous over to Octopress, having decided to remove the dependency on a proprietary blogging platform – primarily because all the site data was held by an organisation that might, on a whim, choose to cease the service. Case in point: in February of this year, Posterous announced they were closing down the service – having been acquired by Twitter back in March 2012, you would probably agree that this was inevitable.

If you’re not familiar with Octopress, it’s a static blogging framework built on top of Jekyll, which itself is a simple, blog-aware, static site generator. It takes a template directory containing raw text files in various formats, runs it through Markdown and Liquid converters, and spits out a complete, ready-to-publish static website suitable for serving on your preferred web server. Jekyll is very minimalistic and very efficient. The most important thing to realize about Jekyll is that it creates a static representation of a website requiring only a static web server. Traditional dynamic blogs like WordPress require a database and server-side code. Heavily trafficked dynamic blogs must employ a caching layer that ultimately performs the same job Jekyll sets out to do: serve static content.

Since there are no server-side requirements, the dependencies for building out the site exist on your own machine. If you’ve ever followed the documentation on the Octopress site, you’ll have an implementation similar to what I had in place last year – on my Windows 7 machine, I had installed Git for Windows, along with Yari (to install and manage Ruby). Once all the relevant RubyGems were installed, I was able to create and maintain my Octopress site, but since I was publishing to Amazon S3, I also needed to use CloudBerry Explorer for Amazon S3 to manually upload the generated content. SkyDrive was also used to store a synced copy of the site data.

I initially used an Amazon S3 bucket to host the static content and Amazon CloudFront as the CDN, but in the past few days I’ve switched over to hosting the site content on Heroku, with CloudFlare as the CDN. It has been quite a number of months since I last made a post, and while working on the move it quickly became apparent that the local system I was previously using no longer existed (it had since been rebuilt), along with the various software components required to post to and maintain the site.

This time around, rather than re-installing all the various components, I wanted a more suitable solution that would allow me to update the site from a number of different machines; i.e. a more portable, reproducible method of setting up the environment for ongoing maintenance of the site. Some Googling later, I came across a very interesting piece of software called Vagrant, which allows exactly that.

Vagrant provides easy-to-configure, reproducible, and portable work environments by isolating dependencies and their configuration within a single disposable, consistent environment. From a single Vagrantfile, whether the platform is Mac, Windows or Linux, the environment will be brought up in exactly the same way, against the same dependencies, all configured identically.

Vagrant is not without requirements of its own: it needs an installation of VirtualBox (used to manage the virtual machines) and an SSH client (used to connect into the virtual machines). On my Windows 7 system, my preference was to install Cygwin with the OpenSSH package – if you’re installing Cygwin on a system where you don’t have administrative rights, make sure you rename SETUP.EXE (retrieved from the Cygwin site) to anything else, such as CYGWIN.EXE, otherwise the system will trigger UAC and prompt for administrative credentials.

With Cygwin, VirtualBox and Vagrant now installed (with the default configuration), I opened up a Cygwin Terminal. By default, I was in my Cygwin home directory (C:\cygwin\home\andrew.allen) – for example’s sake, this is where I built my Vagrant environment, but if you have your own preference for where to store your project data, change to the relevant directory on your system. I then typed the following commands:

vagrant init precise32
vagrant up
vagrant ssh
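One assumption worth flagging: ‘vagrant init precise32’ expects the ‘precise32’ base box to already be available locally. If it isn’t, it first needs to be added – at the time, the standard Vagrant-hosted Ubuntu 12.04 32-bit box could be pulled down with:

```
vagrant box add precise32 http://files.vagrantup.com/precise32.box
```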

Once these three commands completed, I had a new virtual machine started in VirtualBox, running Ubuntu 12.04 LTS 32-bit, to which I was now connected via SSH – now that was pretty cool, and took little to no effort.

Next, I configured the machine with the dependencies I needed for Octopress. In the Cygwin Terminal window, I typed the following:

sudo su -
apt-get update && apt-get install -y git curl
curl -L https://get.rvm.io | bash -s stable --ruby=1.9.3
source /usr/local/rvm/scripts/rvm
rvm rubygems latest
gem install bundler
gem install heroku

This installed Git, cURL, RVM and Ruby 1.9.3 with its RubyGems dependencies, including support for publishing to Heroku; I then disconnected from the VM. Here is the clever bit – by updating the Vagrantfile with these commands, bringing up this environment in future will automatically install these components. When I first ran the ‘vagrant init’ command, it created the Vagrantfile in the current directory – I opened this file in Notepad and added the following lines, just before the last ‘end’:

config.vm.synced_folder "/Personal Data/SkyDrive/Projects/", "/projects/"
config.vm.provision :shell, :path => "bootstrap.sh"

The first line creates a mapping between the place on my host system where I wanted to store my Octopress site and a folder within the virtual machine, while the second line references a script file which will contain the commands I manually executed previously. I saved the file, closed Notepad, created a new file in the same directory (I’ve called it ‘bootstrap.sh’ here), then added the following lines:

#!/usr/bin/env bash
apt-get update && apt-get install -y git curl
curl -L https://get.rvm.io | bash -s stable --ruby=1.9.3
source /usr/local/rvm/scripts/rvm
rvm rubygems latest
gem install bundler
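
Putting the pieces together, the complete Vagrantfile ends up looking something like this – a sketch based on my setup, with ‘bootstrap.sh’ standing in for whatever you named the provisioning script, and my own synced folder paths:

```
Vagrant.configure("2") do |config|
  # Base box referenced by 'vagrant init precise32'
  config.vm.box = "precise32"
  # Map the host folder holding the site data into the VM
  config.vm.synced_folder "/Personal Data/SkyDrive/Projects/", "/projects/"
  # Run the provisioning script as root on 'vagrant up'
  config.vm.provision :shell, :path => "bootstrap.sh"
end
```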

Now when the virtual machine is brought online with ‘vagrant up’, this script file will be transferred into the VM and executed as root. After saving the file and closing Notepad, I proved this to be the case – switching back to the Cygwin Terminal window, I ran the following commands:

vagrant destroy -f
vagrant up
vagrant ssh

Once I was connected to the newly created VM instance, I ran ‘ruby --version’, which confirmed that Ruby 1.9.3 was successfully installed. With a copy of the Vagrantfile I had created, anyone can now bring up the exact same environment, ready for use by Octopress. Back in the Cygwin Terminal window, I ran the following commands to configure a clean Octopress site within the project directory I had mapped in the Vagrantfile:

cd /projects/
git clone git://github.com/imathis/octopress.git octopress
rm octopress/.rvmrc
rm octopress/.rbenv-version
cd octopress
bundle install
rake install

These files were written directly to the mapped folder on my host machine (which also happens to be within my SkyDrive folder), and not within the VM itself.

Following this work, I am now in a position to bring up an environment to support the management of my site quickly, easily and consistently, with all my site data stored safely, synced automatically, and version controlled within my SkyDrive account. On any machine, I can now quickly recreate my blogging environment.

Using the Vagrantfile that I’ve created, you can now bring up an environment with all the correct pre-requisites and dependencies to support an Octopress site, by just installing Vagrant and VirtualBox onto your preferred platform (Mac, Windows, Linux) along with a suitably installed SSH client.


CNAME Is Out; Hello ANAME!

Entry Updated: July 2nd, 2012

How familiar are you with DNS? Wikipedia states:

The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. A Domain Name Service resolves queries for these names into IP addresses for the purpose of locating computer services and devices worldwide. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet.

In simple terms, it is the address book of the internet, converting human-friendly addresses into the computer-friendly IP addresses used to locate them.

Ok, so you knew that much already. How familiar are you with DNS resource records?

Wikipedia continues:

A Resource Record (RR) is the basic data element in the domain name system. Each record has a type (A, MX, etc.), an expiration time limit, a class, and some type-specific data. Resource records of the same type define a resource record set (RRset). The order of resource records in a set, returned by a resolver to an application, is undefined, but often servers implement round-robin ordering to achieve Global Server Load Balancing. DNSSEC, however, works on complete resource record sets in a canonical order.

When dealing with a web site, there are two particular types of resource record that people mostly care about – the A record, which translates a hostname to an IP address, and the CNAME record, which allows you to alias one hostname to another.
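The difference is easy to see with dig (hostnames and addresses below are placeholders, not real records): an A record answers with an address in one step, while a CNAME answer is another name that must be resolved in turn.

```
$ dig +short static.example.com     # plain A record
192.0.2.10

$ dig +short www.example.com        # CNAME - note the extra hop
cdn.example.net.
192.0.2.20
```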

How about the ANAME record? Ever heard of that before? Nope, neither had I until today.

For a number of years now, I’ve chosen to host all my domain zone files with DNS Made Easy – not only are they cost effective, but they guarantee 100% uptime across their geographically distributed IP Anycast network. When I moved my blog from Posterous to Amazon S3 / Amazon CloudFront, I did research whether I should also migrate to Amazon Route 53, Amazon’s own DNS service; however, I found many reports of DNS Made Easy outperforming all similar service offerings, including Route 53.

Not only that, DNS Made Easy offer a number of other services which simply aren’t available anywhere else – and one such unique service, announced today, is the ANAME resource record type.

What is an ANAME record?

“DNS Made Easy are the first provider in the world to revolutionize the way DNS zones can be configured using the ANAME record”

Their words, not mine, however:

  • CNAME records cannot be created for the apex, or root record, of a domain. This is invalid based on the DNS RFCs, yet required in certain configurations.
  • CNAME records must be unique based on the DNS RFCs – administrators cannot create any other record type with the same name as a CNAME record, or use multiple records in a round robin configuration. This is also required in certain configurations.
  • CNAME record resolution is slower, since it requires a double lookup – one to find the CNAME record itself, and a second to find the referred IP address.

What are the advantages?

When an ANAME record is created, DNS Made Easy internally monitors the target fully qualified domain name (FQDN). They then create the associated A records that point to the IP address of the FQDN. When that IP address changes, the A records are updated immediately across all DNS Made Easy name servers.

  • ANAME records can be used as the root record for a domain, as the resulting records created are A records – bypassing the restriction against an alias at the root record.
  • Multiple ANAME records can be configured with the same name, and all additional IPs will be added in a round robin configuration in DNS Made Easy.
  • ANAME records speed up DNS performance, as the correct IP address is returned on the first lookup rather than requiring multiple queries. Faster DNS lookups result in faster website load times, which improves SEO.

So… what?

Just this evening, I switched a number of CNAME entries – which referenced Amazon S3 and CloudFront hostnames – over to ANAME records instead. If you now carry out a DNS lookup against my site, you’ll see that instead of a CNAME redirection to an Amazon CloudFront hostname, A records containing the resolved IP addresses of the CloudFront hostname are returned instead.

This means your client machine has one less DNS lookup to carry out in order to resolve my site, speeding up the whole process of viewing it. In the world of user experience, response times are everything – that’s what Jakob Nielsen’s research shows, anyway…
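The before-and-after is visible with a couple of dig queries – a sketch with placeholder names and addresses rather than my real records:

```
# Before, with a CNAME: the CloudFront hostname is returned, then resolved
$ dig +short www.example.com
d1234abcd.cloudfront.net.
192.0.2.30

# After, with an ANAME: A records come back directly on the first lookup
$ dig +short www.example.com
192.0.2.30
192.0.2.31
```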

Update: July 2nd, 2012

Following a question posted in the comments by iwod, I developed a few further questions in my head about the service.

Depending on the type of DNS Made Easy account you purchase, you’ll be allocated a finite number of DNS queries that can be made against your zones – in my case, I have the business account, and so am allowed 10 million queries per month, totalled across all my managed zones. Due to the nature of how ANAME records work, I wondered whether this would impact the monthly query amount allocated to my account, as I have no control over how often the referenced hostname is checked – I can only control the TTL on my resource records, i.e. the resulting A records that are returned for lookups against my zone.

A reply back to this post from Richard at DNS Made Easy confirmed that…

“… ANAME records would not impact your monthly query count anymore than a CNAME would. In fact if you are creating a ANAME to another domain within DNS Made Easy it would actually save queries since there is no longer the requirement to do a double lookup.”

“There will be a minimal amount of checks against your domain if your target to your ANAME record is within DNS Made Easy, but nothing that should exceed a few thousand queries per month. This will generally save users on queries as well though since it would involve a double-lookup normally.”

The other query I had was around using ANAME records with my assigned Amazon CloudFront domain name. Once you’ve created a distribution within Amazon CloudFront, assigned the origins and set up the CNAME entries you want recognised against the distribution, you’ll be assigned a unique domain name – you can then create a CNAME resource record against this within your own DNS zone.

As briefly mentioned, I had set up two CNAME resource records in my DNS Made Easy account: one referencing my website-enabled Amazon S3 bucket, and the other referencing my Amazon CloudFront domain name.

When DNS Made Easy launched their ANAME resource records last week, I replaced both of the above CNAME records with ANAME records instead – once updated, I could see that A record responses were being returned, but I’ve just noticed an unintended side effect.

I originally decided to use Amazon CloudFront because it can deliver my entire website, including dynamic, static and streaming content using a global network of edge locations – requests for content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.

Because I have now switched to using ANAME records, when my referenced Amazon CloudFront domain name is resolved to an IP address by DNS Made Easy, the request is directed to the edge location nearest the requester – i.e. nearest to the DNS Made Easy servers. That response is then stored and used for lookups against my zone, which won’t necessarily be the optimal location for everyone else – so in effect, all requests for my website are directed to the same edge location, regardless of the requester’s location, removing the effect and benefits of using Amazon CloudFront.

By carrying out a traceroute from my location in the UK, I am routed 17 hops to the address returned by the ANAME record, with 100ms latency to the final hop, compared with 13 hops and 21ms latency to the final hop when resolving the CloudFront hostname directly.

I’ve checked my AWS Usage Reports which also confirms the same – all traffic since the DNS update has been served out of the same region.

The end result: configuring ANAME records against a CDN provider such as Amazon CloudFront is not recommended, as it doesn’t take into account the latency-based routing used to connect you to the nearest edge location. There are still benefits to using ANAME records for addresses not distributed through a content delivery network, such as a website-enabled Amazon S3 bucket, as it removes the extra lookup, as designed.

If ANAME records are to be useful alongside Amazon CloudFront, DNS Made Easy would have to recognise the various edge locations within Amazon’s global infrastructure, and adapt how results are returned accordingly.

For my own site, I’ve swapped the ANAME entry referencing my Amazon CloudFront domain name back to a CNAME resource record, since overall lower latency is more important than saving the initial DNS lookup, but I have left in place the ANAME directly referencing my Amazon S3 bucket.


Performance Analysis of Logs

If you’ve ever had to investigate an issue on a Windows box, then apart from the Event Viewer, the other tool you’ve probably used (and should have) is Performance Monitor. However, it’s not uncommon to not know which of the (potentially hundreds of) counters to collect, or how to analyse the data – especially for particularly nasty or complicated issues with no clear indicators of where the problems lie.

Sometime around the beginning of 2011, I came across the Performance Analysis of Logs (PAL) Tool, which reads in a performance monitor counter log and analyzes it using known thresholds – it has since become one of the most useful utilities I keep in my tool box.

Key Features:

  • Thresholds files for most of the major Microsoft products such as IIS, MOSS, SQL Server, BizTalk, Exchange, and Active Directory.
  • An easy to use GUI interface, automatically creating batch files for the PAL.ps1 script.
  • A GUI editor for creating or editing your own threshold files.
  • Creates an HTML based report for ease of copy/pasting into other applications.
  • Analyzes performance counter logs using thresholds that change their criteria based on the computer’s role or hardware specs.
  • It’s a free, no cost utility!


On the system on which you choose to use the tool, there are a number of pre-requisites to install first – I recommend you install PAL onto your Windows 7 64-bit workstation. Be aware that you don’t need to install PAL directly on the system you want to monitor.

Once you’ve got all the pre-requisites installed, download the latest version of the Performance Analysis of Logs (PAL) Tool from the website – it’s actually quite small, at only around 1MB. Unpack the Zip file and run setup.exe, following any prompts to complete the installation – if you’re still missing any requirements, you’ll be directed to the appropriate website automatically.

Something else to note: the installer runs ‘Set-ExecutionPolicy’ for PowerShell with ‘Unrestricted’, to allow the included PowerShell scripts – primarily PAL.ps1 – to run.

Quick Start Guide:

It’s worth reading more about how to use the tool by digging around the site, or downloading the Intro to PAL video, but as a really quick start: run the PAL application from the Start menu, switch to the ‘Threshold File’ tab, select ‘Quick System Overview’ from the ‘Threshold file title’ dropdown, then click ‘Export to Perfmon template file’. Copy the resulting XML file onto the system you want to monitor, import it into Performance Monitor (hint: click Start, type ‘perfmon’ and press Enter) as a new ‘Data Collector Set’, then set it running.
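If you’d rather script the collection than click through the Perfmon GUI, the exported template can also be imported with logman – a sketch, where the collector name and XML filename are my own choices (check ‘logman /?’ for the exact syntax on your Windows version):

```
logman import "PAL Quick Overview" -xml QuickSystemOverview.xml
logman start "PAL Quick Overview"
```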

Once you’ve collected data for whatever length of time you feel appropriate, copy the performance logs (by default, in %systemdrive%\PerfLogs) back to your workstation and re-launch PAL. Specify the ‘Counter Log Path’ on the ‘Counter Log’ tab, ensure you’ve selected ‘Quick System Overview’ on the ‘Threshold File’ tab, then jump to the ‘Execute’ tab and click ‘Finish’.

You’ll now see the PowerShell scripts kick in as the HTML report is generated – this will take some time, depending on the amount of data collected. By default, the report is written to ‘[My Documents]\PAL Reports’, and your web browser will open automatically once report generation is complete.

Scroll through the report, and you’ll see all kinds of alerts, recommendations, and graphs, analysing in detail the various performance counters that were monitored.

Sample Report:

If you have no time at all to try the tool out for yourself, download the Sample PAL Report to see what you’re missing out on.


Recover Your Lost Product Key

Another one for your tool kit… When you find yourself rebuilding a corrupt system, or just carrying out a straightforward rebuild: you’ve got all the install CDs, verified backups of your important data, found the storage driver disks, and… you can’t find the product key.

Not uncommon, especially if you’re rebuilding a system on someone else’s behalf. But over at NirSoft, among the many, many useful utilities, is one called ProduKey.

ProduKey is a small utility that displays the ProductID and the CD-Key of Microsoft Office (Microsoft Office 2003, Microsoft Office 2007), Windows (Including Windows 7 and Windows Vista), Exchange Server, and SQL Server installed on your computer. You can view this information for your current running operating system, or for another operating system/computer – by using command-line options. This utility can be useful if you lost the product key of your Windows/Office, and you want to reinstall it on your computer.

There are a few different versions of the app available for download on the site, including 32-bit Zip, 64-bit Zip, and full installer versions – just download and run on your system of choice, and you’ll be able to ‘recover’ the product key that was used during installation of the operating system.
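Those command-line options also make ProduKey handy for scripted use – for example (the machine name below is a placeholder; see the NirSoft page for the full list of switches):

```
:: Read keys from a remote machine on the network
ProduKey.exe /remote \\SERVER01

:: Save the results for the local machine to a text file
ProduKey.exe /stext keys.txt
```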

Taking it a little step further, I highly recommend you grab a copy of NirLauncher, a complete package of all the NirSoft portable freeware utilities, which you can unpack to your local drive or favourite USB stick – you can even integrate the Sysinternals Suite into NirLauncher using the additional downloads available on the site.


The Pomodoro Technique

Entry Updated: July 4th, 2012

I’ve always been interested in finding ways to improve my time management – like everybody’s, it’s not perfect, and with so many distractions around us all the time, it’s sometimes too easy to procrastinate.

While reading through Scott Hanselman’s 2011 Ultimate Developer and Power Users Tool List for Windows, I noticed a link to Tomighty, along with a reference to The Pomodoro Technique. After digging in further and reading through the links and the material on the site itself, the simple idea of managing tasks in 25-minute segments appealed. I’ve tried various methods before, and anything complicated or hard to maintain quickly gets dropped by the wayside as I fall back into old habits – I know I’m not the only one.

The Pomodoro Technique was created by Francesco Cirillo back in the 1980s, and is practised by professional teams and individuals around the world. The basic unit of work can be split into five simple steps:

  • Choose a task to be accomplished
  • Set the Pomodoro to 25 minutes (the Pomodoro is the timer)
  • Work on the task until the Pomodoro rings, then put a check on your sheet of paper
  • Take a short break (5 minutes is OK)
  • Every 4 Pomodoros take a longer break
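
The five steps above are mechanical enough to sketch in a few lines of shell – purely illustrative, with the durations as parameters so the classic 25/5/15-minute values aren’t hard-coded:

```shell
#!/usr/bin/env bash
# Run COUNT pomodori: WORK minutes of focus each, a SHORT break after
# most of them, and a LONG break after every fourth one.
pomodoro() {
  local count=$1 work=$2 short=$3 long=$4
  for ((i = 1; i <= count; i++)); do
    echo "Pomodoro $i: work for $work minutes"
    sleep $((work * 60))
    if ((i % 4 == 0)); then
      echo "Long break: $long minutes"
      sleep $((long * 60))
    else
      echo "Short break: $short minutes"
      sleep $((short * 60))
    fi
  done
}

# Four classic pomodori: pomodoro 4 25 5 15
```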

And although that covers the main ideas, there are some other primary objectives needed to get the most out of the technique. On the official site you can download free resources covering pretty much everything you need – I highly recommend reading through the book as your first step (there is a print copy available on Amazon if you prefer).

Like Scott, I’ll be using Tomighty on my laptop – it’s also a bonus that you don’t need to install it. I know there are lots of other similar software solutions available, including on the iPhone, but this looks to do the job, and I am sure that Scott has already evaluated loads of them before deciding to stick to using Tomighty.

I’ve also ordered the official Pomodoro Technique Timer from Amazon UK – technically any timer will do, but I’ve always had something about wanting ‘official’ products 🙂 It’s ~£8.00, not really that much more than any other timer. I like the idea of something physical, so that I can apply the technique away from the computer or technology in general.

So I’ve actually written this up in my first Pomodoro, and I am going to see if I can continue to use this technique to stay as productive as possible. We’ll see 🙂

Update: July 4th, 2012

Almost two weeks later, it’s actually been pretty hard to stick to the technique, but where persistence has won out, it has been rewarding. It’s also true that while you initially think you can do 8 to 12 Pomodori in a day, I’ve yet to reach that target, simply because the number of distractions in a single day is considerable – and while this will differ for everyone, working in an open-plan office ensures that distractions are all around.

Building out your record sheet, depending on what statistics you decide to collect, actually helps make the obvious distractions stand out, and from there helps you to avoid them. If I need to concentrate fully (the whole idea, by the way), I’ll set messenger to busy, close Outlook and, if possible, work from one of the hot offices – but any room where you can work on your own and shut the door will do.

I did receive the Pomodoro Technique Timer I ordered from Amazon, but the only downside so far is not being able to use it more than a few times in the office, since everyone around you quickly gets irritated by it – however, I do make a point of using it at home, since part of the technique is associating the motion of winding up the timer.

Back in the office, although I’ve got Tomighty installed on my laptop, I’ve actually found the most useful timer to be the Pomodoro Timer app on my iPhone. I tried out a number of different timers from the App Store, but I didn’t want any that recorded tasks in the app directly, were over-complicated with features, or laden with advertisements. I liked Pomodoro Timer most because it keeps the screen active while running, but will still pop up an alert once it has counted down, even if you switch to another application – you can also write a little reminder note under the timer itself to help keep your focus on the current task. By using the iPhone, I can avoid annoying anyone else by just adjusting the volume, or wearing headphones.

When it came to finding the most suitable way of tracking tasks on the ‘Activity Inventory’ and ‘To Do Today’ worksheets, keeping these on paper didn’t work for me. Instead, I’ve settled on using Microsoft OneNote synced to my Microsoft SkyDrive account – because there is now a OneNote client for the iPhone, I’ve got full access at work, at home, and always with me on my phone.

I did come across a few websites designed for working with the Pomodoro Technique, giving you a place to record and track all your pending activities, along with some automated reporting capability. Personally, I didn’t want to use yet another tool for storing my personal data, or become dependent on yet another third-party website which may not be around in 12 months once the developer has lost interest.

If you’ve started to look at the Pomodoro Technique yourself, I have since read Pomodoro Technique Illustrated by Staffan Noteberg, which I can also highly recommend – even over the original book. It breaks everything down into easier-to-digest illustrated sections, with lots of other ideas discussed as well, including using the technique among a team of people.

I’d be interested to hear if you’ve tried the technique yourself, had heard of it before, or have any other suggestions from your own experiences – add your comments below.


Dillinger: Markdown Editor

Markdown is a lightweight markup language, originally created by John Gruber and Aaron Swartz, taking many of its cues from existing conventions for marking up plain text in email.

Put simply, Markdown is a text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML).

The goal for Markdown’s formatting syntax is to be as readable as possible. A Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions.

In short, you can create beautiful HTML documents without knowing any HTML.
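For example, a few lines of plain text like this:

```
# A Heading

Some *emphasised* text, a **bold** word, and a [link](https://example.com).

* First item
* Second item
```

convert to an h1 heading, em and strong tags, an anchor, and an unordered list – while the source remains perfectly readable as-is.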

Since switching my site over to Octopress, I’ve needed to write all the site content in Markdown syntax. While working at home on my Windows machine, I’ve been using MarkdownPad, which is a pretty good implementation (though it could do with a spell checker – hint!), but when I’m away from home, I needed some way of writing new Markdown content while still being able to easily preview the output formatting.

While searching on Google, I came across an article on AddictiveTips which highlighted a potential solution: Dillinger.

You only need to browse to the site, and you can start using the tool straight away – one pretty nifty feature is the ability to save your documents straight into Dropbox. Also, any preference changes you make (such as adjusting the theme) are remembered, so there is no need to reconfigure the tool on each return visit.