Python and QRCodes

Recently, I wanted to generate QR codes for a Django project I’ve been working on for a while.  As I always say, in Django there’s a plugin for almost everything.  So to get things rolling, first you need to install the qrcode package from PyPI.  Just run this:

pip install qrcode

You’ll also need PIL (though I recommend you use Pillow).  So once you have those two, it’s time to roll up our sleeves and get down to the code.

First, import the library somewhere in your script:

import qrcode

Then instantiate the QRCode object:

qr = qrcode.QRCode(
    version=1,
    error_correction=qrcode.constants.ERROR_CORRECT_M,
    box_size=10,
    border=4,
)

The first parameter, version, is of course the QR version.  It should be an integer between 1 and 40 which defines the size of the barcode and the amount of data we’ll be able to store.  The second parameter, error_correction, is the redundancy level.  This can be:

  • ERROR_CORRECT_L: 7% of codewords can be restored
  • ERROR_CORRECT_M (default): 15% can be restored
  • ERROR_CORRECT_Q: 25% can be restored
  • ERROR_CORRECT_H: 30% can be restored

This basically ensures decoding even if the data is damaged. More info on these redundancy levels can be found on Wikipedia. The box_size parameter controls how many pixels each “box” of the resulting QR code is, while the border parameter controls how many boxes thick the border should be (the default is 4, which is the minimum according to the spec).
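
As a quick sanity check on how these parameters interact, you can compute the final image size directly: per the QR spec, a version-v symbol is 17 + 4·v modules per side, and box_size and border scale that up.  A small sketch (the helper function is mine, not part of the qrcode library):

```python
def qr_pixel_size(version, box_size=10, border=4):
    """Pixel width/height of the rendered QR image (hypothetical helper)."""
    modules = 17 + 4 * version                 # version 1 -> 21x21 modules
    return (modules + 2 * border) * box_size   # add the quiet zone, scale up

print(qr_pixel_size(1))    # 290
print(qr_pixel_size(40))   # 1850
```

So with the defaults above, even the smallest symbol ends up 290 pixels square.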

So once you have the initialised object, you can add the data like so:

qr.add_data("This is the data")
# img contains a PIL.Image.Image object
img = qr.make_image()

From there, depending on your backend, you need to save this image.  For Django I used the InMemoryUploadedFile class to convert this Image object into a File object that I could pass into the model class so that it goes through the usual file-handling flow.  There may be other ways to do this, but this worked for me:

from StringIO import StringIO
from django.core.files.uploadedfile import InMemoryUploadedFile

buffer = StringIO(), "PNG")
image_file = InMemoryUploadedFile(buffer, None, "%s.png" % identifier, "image/png", buffer.len, None)

Then finally, you save the QR code onto your model class object like normal:"%s.png" % identifier, image_file)


GDAL – a Swiss army knife


I worked with the Global Inventory Modeling and Mapping Studies (GIMMS) dataset a few weeks back; it is based on AVHRR and goes back to 1981.  For my analysis, I had to resample the GIMMS dataset from 8 km to 5 km (the version of GIMMS I was working with had 8 km resolution), and I was working with approximately 736 images. Since I was already performing another task in R, I decided to combine the resampling with it. I wrote a bash script that calls my R script, passing the source image and the desired-resolution image. The process was very simple, as shown below (full script available here).

# target image folder

#desired resolution image

for myfile in *.tif
do
  # target path and image name
  Rscript /media/external/muhammad/myscripts/scale.r $src1folder $myfile $src2folder $src2img $target
  # R's temp folder fills up with temp files, so we need to clean it as well
  rm /tmp/R_raster_geoscilab/*.gri
  rm /tmp/R_raster_geoscilab/*.grd
done


The R script was executed inside the loop for each image in the folder. The R script (available here) reads the desired and source images and performs the resampling using the projectRaster() method. There were a couple of other tasks performed by my R script, but resampling was the major one.  When I tested this approach, a single image took 12 minutes (5 images per hour). WOW! My script would have to run for 6 days to finish this task. Yes, the script worked, but 6 days was too much. I found out that it was the projectRaster() function that was taking almost 11 of those minutes. After a quick discussion with a friend of mine (an R guru and big fan of GDAL + Open Source) I tried the GDAL utilities. I quickly wrote a new script utilizing the gdalwarp utility (command below; the full script is available here).

gdalwarp -tr 0.050000000000000 -0.050000000000000 $outputfile $output_newfile

Notice the values for the -tr switch, which dictate that the output image should have a 5 km pixel size.  When I tested this, WOW, it took only 5 seconds for a single image. R was still required for the filtering, but now my scripts would not run for 6 days. It is important to note here that I used the default resampling method in gdalwarp. Below is a GIMMS 5 km resolution image (with nodata, for further processing).
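
If you prefer to drive the batch from Python rather than bash, a gdalwarp call per file can be built with the standard library; the filenames and the warp_command helper below are mine, a sketch rather than the script I actually used:

```python
import subprocess

def warp_command(src, dst, res=0.05):
    """Build a gdalwarp invocation that resamples to res-degree pixels."""
    return ["gdalwarp", "-tr", str(res), str(res), src, dst]

# hypothetical file names for illustration
cmd = warp_command("gimms_8km.tif", "gimms_5km.tif")
print(" ".join(cmd))
#, check=True)  # uncomment to actually run gdalwarp
```

Building the argument list explicitly (instead of a shell string) avoids quoting problems with odd file names.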


Moral of the story: try other tools rather than focusing on just one.

Remove Specific HTML Tags in Django

It seems every day I learn something new in Django.  Recently, I was working on a few customizations to a zinnia installation and wanted to get the latest 4 blog posts on the home page.  Now, in zinnia, this is very straightforward:

{% load zinnia_tags %}

{% get_recent_entries 4 template="zinnia/tags/entries_recent.html" %}

The extra template variable is there because I was using a custom layout to render the snippets.  My main issue was that some of the blog posts might have embedded iframes, so I wanted to find a way to remove the iframe tags.  Luckily, it’s quite easy to do this in Django (version 1.7 and earlier; for later versions, use bleach):

At the code level (this can then be extended into a custom templatetag):

from django.template.defaultfilters import removetags

html = '<p>Some content with an <iframe src=""></iframe> embed</p>'
stripped = removetags(html, 'iframe')

At the template level

{{ value|removetags:"iframe"|safe }}

And it’s as simple as that.
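
For Django versions where removetags has been removed, here is a rough stdlib sketch of the behaviour, regex-based like the original filter (this is an illustration, not a drop-in replacement; use bleach for anything security-sensitive):

```python
import re

def remove_tags(html, tags):
    """Strip the opening and closing tags named in `tags` (space-separated),
    leaving their inner content and all other markup untouched."""
    tags_re = "|".join(re.escape(t) for t in tags.split())
    start = re.compile(r"<(%s)(/?>|\s[^>]*>)" % tags_re)
    end = re.compile(r"</(%s)>" % tags_re)
    return end.sub("", start.sub("", html))

print(remove_tags('<p>Hi <iframe src="x">inner</iframe></p>', "iframe"))
# <p>Hi inner</p>
```

Note that this keeps the tag’s inner content; if you want to drop the embedded content entirely, you would need to match the whole element instead.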

Enabling iframe and other content in django-ckeditor-updated

One of the most used features in any Django project is something that allows page editing.  While I have my favourite (django-summernote), I recently integrated CKEditor using the django-ckeditor-updated package from the cheeseshop.  It was an excellent choice and I was loving all the new features and extensibility.  However, I hit a stop when I wanted to insert HTML content from an ajax request I had fired from the UI.  First of all, inserting just plain HTML is actually very easy:

CKEDITOR.instances.yourInstance.insertHtml('<p>Some HTML</p>');

The fun all starts when you want to insert an iframe (or, to use the more popular term, embed content, e.g. a Youtube video or, in my case, a link to a GeoNode map).  This is due to the Advanced Content Filter introduced in CKEditor 4.1. To start things off, you can check whether your editor will display iframes correctly by executing this simple JS code from your console:

CKEDITOR.instances.yourInstance.filter.check( 'iframe' );
>>> true // it's allowed

If the result is false, you can:

  • enable the mediaembed plugin in your editor instance: more info from the docs
  • extend config.extraAllowedContent to re-enable it

For the second solution, you need to add this code to your editor’s config:

config.extraAllowedContent = 'iframe[*]'

or you can also just simply have it as:

CKEDITOR.config.allowedContent = true;

The beauty of this is you don’t have to enable the mediaembed plugin.  So, that’s for the JS version of the plugin.  For Django users, this all goes into CKEDITOR_CONFIGS in  Mine looks something like this:

CKEDITOR_CONFIGS = {
    'default': {
        'toolbar': 'Full',
        'height': 300,
        'width': '100%',
        'removePlugins': 'stylesheetparser',
        'extraAllowedContent': 'iframe[*]',
    },
}

And with those simple changes, you’ll be able to insert iframe content without any problems.

Upgrading R to 3.1.0 on Ubuntu 12.04

I had R 2.14.1 on one of my servers. I needed packages like raster and rgdal, but I could not install them due to errors saying these packages were not available for that version of R.

I tried to install the installr package (below) so that I could run updateR() and move to R 3.1.0

install.packages("installr")
but got the following error

Warning message:
In getDependencies(pkgs, dependencies, available, lib) :
 package ‘installr’ is not available (for R version 2.14.1)

It turned out that installr is only for Windows.

Then I read this post (How to install R ver 3.0) and followed it from the section “Uninstall old R”. The update and upgrade part will take some time, and R itself will take time too.

Finally, R 3.1.0 was installed (Spring Dance is ON) and then I started installing packages. First was raster

install.packages("raster")
but an error occurred

Error : package ‘sp’ was built before R 3.0.0: please re-install it
ERROR: lazy loading failed for package ‘raster’
* removing ‘/home/myserver/R/x86_64-pc-linux-gnu-library/3.1/raster’
The downloaded source packages are in
Warning message:
In install.packages("raster") :
 installation of package ‘raster’ had non-zero exit status

Thus, I installed sp first with the following command

install.packages("sp")

Then raster

install.packages("raster")
Then came rgdal. Before installing it (remember, I ran into problems previously, see blog), I checked the gdal version with the following command

 gdalinfo --version

I had gdal 1.7.3, aaaaaaaaaah, I needed to install version 1.10. Thus I followed my own blog 🙂 “Installing rgdal package for R 3

First I removed gdal 1.7.x with the following command

sudo apt-get remove libgdal-dev gdal-bin 

Then I added the Ubuntu Unstable PPA with the following commands

sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable 
sudo apt-get update

Finally I installed gdal 1.10 with the following command

sudo apt-get install libgdal-dev gdal-bin

WOW, it took a lot of time, maybe due to the slow internet on my side. Then I started R and installed the rgdal package

install.packages("rgdal")
and the last package to install was ggplot2

install.packages("ggplot2")
Finally I got my script running successfully.


Tile Cache and Seeding – Geo Web Cache


Ever wondered how we can view the contents of a web map in a few seconds when the underlying datasets are gigabytes in size? The web maps we see are basically PNGs/JPEGs generated and then cut into small portions by GIS servers. These small portions are called tiles, and it is thanks to this technique that we can view big datasets within a few seconds in a web mapping application. Now, if ten users are viewing a layer (with almost the same extent) through the same GIS server, the server has to render the image and tile it for each user. What if tiles could be saved and served from storage rather than rendered and sliced every time?

The good news is that software exists to perform the above-mentioned process; it is called a tile cache. Some of the Open Source options are TileCache and Geo Web Cache (GWC). I like GWC because of its ease of use and seamless integration with GeoServer. GWC is Java-based, and since it is integrated into GeoServer, for GeoServer users it automatically caches and serves tiles (provided you have not switched off the option in the layer’s Tile Caching tab).

Now, how does a tile cache (GWC, to be precise) work? Client software (QGIS, web-based, or mobile) sends a GetMap request for a map with standard parameters like name, EPSG code, extent, and format. The GIS server then performs the following operations:

  1. Reads the required data source
  2. Generates the required format
  3. Cuts the newly generated image into tiles
  4. Sends them back to the client

For every client, this process is repeated.

Now we introduce a tile cache into this architecture. Before step 1, the tile cache checks whether it already has tiles for the required map (extent and format). If it does, the GIS server is bypassed and the tiles are served from the cache. If it does not, the same process is followed up to step 3, and the tiles are then stored in a separate location before the response is sent (step 4). For the first client of a dataset, the process is the same as above (apart from the check before step 1 and the storing between steps 3 and 4), but for the second client (and the third, and so on) the response to a map request will be much faster. For each zoom level tiles are generated, and higher zoom levels produce more tiles than lower ones. It is like a pyramid: the top level contains a few tiles and the bottom level a few hundred (to a few thousand) tiles.


Since GWC is integrated into GeoServer, the above process is just one click for end users. GWC generates tiles only in the EPSG 4326 (WGS84) and 900913 (Spherical Mercator, aka Google) projections. Another good thing about tile-cache software is that administrators can generate tiles in advance for faster responses; this process is known as seeding. Seeding increases performance (faster maps) but at the cost of storage, so administrators usually go for a hybrid solution: for the first few zoom levels seeding is used, and for the rest of the zoom levels (remember, the tiles could number in the thousands) tiles are generated on the fly and cached. This saves a lot of storage, as tiles for detailed zoom levels will be generated only if a user requests them, and only for a specific extent (although GWC can generate tiles for specific extents as well; e.g. if an administrator thinks his users are interested in Africa, he can generate detailed-level tiles for the African continent only).
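
To see why seeding only the first few levels saves so much storage, the tile counts can be estimated directly. Assuming a global EPSG:4326 gridset that starts from two top-level tiles, level z holds 2·4^z tiles (a back-of-the-envelope sketch, not GWC’s actual accounting):

```python
def tiles_at_level(z):
    """Tile count at zoom level z for a global 2x1 top-level grid."""
    return 2 * 4 ** z

print(tiles_at_level(0))   # 2
print(tiles_at_level(8))   # 131072
print(sum(tiles_at_level(z) for z in range(9)))  # 174762 tiles for levels 0-8
```

Each extra level quadruples the count, which is why administrators cap seeding at a shallow level and let deeper tiles be cached on demand.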

A few weeks back, I seeded one of the layers on our GeoServer (it was around 10 GB). Before seeding, this layer was very slow. I decided to seed up to level 08. It took a lot of time, but our efforts paid off with faster layer responses. Next time, I will divide a big layer’s seeding across multiple virtual machines (based on different zoom levels, e.g. 01 to 04 on one VM, 05 to 07 on another, etc.) to speed up the seeding process.