IMS Spectral Simulator: Arduino Edition

TL;DR — Code and a wiring diagram to output a simulated spectrum WITH noise on a specified microcontroller output pin. Uses a hardware interrupt to simulate a gating pulse.

When developing new approaches to signal processing, or simply designing a new data acquisition system, having a reasonable facsimile of the target signal is helpful during development and testing. In an effort to supply the community with such a resource, below is a set of Arduino code designed to output a simulated spectrum from a microcontroller following a hardware interrupt (i.e. a gate opening event). Using the variables in the code it is possible to pace the output of the sequence. Though a standard Arduino (e.g. a 16 MHz clock) may be able to simply output the spectrum, adding a level of random noise requires a call to generate a random integer, which costs a few extra clock cycles per sample. For that situation, a slightly faster clock speed is warranted.

The Adafruit WICED Feather is an entirely capable little beast that fits the bill. In addition to sporting a Wi-Fi chip and an additional flash module, this unit boasts a 120 MHz ARM Cortex-M3 processor. When considering the target goal (i.e. hardware simulation of an IMS spectrum), that speed comes in handy. More specifically, the code below shows that after each interrupt the next element in the simulated spectrum is output, BUT with an extra call that generates a random integer added to the spectral element. The net effect is that a spectrum with a user-defined level of noise is output. When combined with a scope or data acquisition system, the impact of signal averaging can be explored.
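As a quick illustration of why the noisy output is useful, the numpy sketch below averages repeated noisy sweeps and watches the residual noise fall roughly as 1/√N. The Gaussian "clean" trace here is a hypothetical stand-in for the spec[] array, not data from the device:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean "spectrum": a single Gaussian peak, standing in for
# the spec[] array in the Arduino sketch.
t = np.linspace(0.0, 1.0, 1000)
clean = 400.0 * np.exp(-((t - 0.5) ** 2) / (2 * 0.02**2))

def noisy_sweep():
    # One simulated gate event: clean spectrum plus uniform noise,
    # analogous to spec[i] + random(10, 500) in the sketch above.
    return clean + rng.uniform(10.0, 500.0, size=clean.shape)

# Average an increasing number of sweeps; the residual noise shrinks
# roughly as 1/sqrt(N), which is the effect worth exploring on the scope.
residuals = []
for n in (1, 16, 256):
    avg = np.mean([noisy_sweep() for _ in range(n)], axis=0)
    # 255 = mean of uniform(10, 500); what remains is the noise floor.
    residuals.append(np.std(avg - clean - 255.0))
```

With 16 sweeps the residual noise should drop to roughly a quarter of the single-sweep value, and 256 sweeps buys another factor of four.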

To aid anyone interested in adapting the Arduino code, a sample raw spectrum is provided along with output from the WICED platform with added noise. Additionally, a wiring diagram is provided, though be sure that the input trigger is not so large as to overload the WICED input levels. For reference, we are big fans of the Digilent Analog Discovery units as they provide a wide degree of functionality (i.e. two 100 MS/s ADC inputs and two 100 MS/s DAC outputs), an intuitive graphical interface, and the capacity to script the data acquisition.

Raw Simulated Spectrum:

Wiring Diagram:

PC5 is the interrupt pin (Trigger In)

A4 is the simulated spectrum out (Spectral Output)

Arduino Code for WICED Feather Platform

int irqpin = PC5;
int ledpin = BOARD_LED_PIN;

#include <libmaple/dac.h>

volatile int ledstate = LOW;

int spec[1000] = {3,4,0,4,3,1,3,3,0,1,1,2,2,0,2,0,5,0,3,1,7,0,0,0,1,1,2,0,2,1,1,3,1,3,3,4,3,1,0,2,5,5,2,1,0,2,3,3,0,4,0,0,1,1,4,0,0,5,3,1,1,2,3,0,0,2,3,0,4,1,0,6,1,0,1,1,3,0,0,3,4,3,0,6,1,0,1,5,0,5,4,9,12,11,15,16,23,22,32,33,41,48,50,61,61,68,73,75,80,90,86,94,97,96,96,94,90,85,86,89,78,75,60,54,46,30,30,13,6,1,14,18,37,43,53,57,71,78,76,83,85,99,101,101,96,97,95,88,89,86,78,74,74,64,58,53,40,43,35,36,20,23,23,15,18,16,8,4,8,2,6,5,2,1,3,0,1,0,2,0,0,5,1,1,2,2,0,2,4,0,3,1,2,1,5,2,1,2,1,0,2,4,4,2,3,5,0,0,4,1,1,0,2,2,4,1,1,2,0,0,0,0,1,3,1,2,0,1,2,1,4,2,2,3,1,2,2,2,4,1,4,0,3,4,4,0,5,2,3,2,4,1,2,1,0,1,4,0,2,5,2,3,2,5,7,2,5,1,1,3,3,0,1,4,2,3,2,1,4,10,1,4,1,3,0,2,2,0,1,1,1,2,0,5,3,10,2,0,10,12,3,22,13,16,30,30,38,38,49,60,67,78,88,99,118,126,146,157,179,193,213,228,245,258,278,289,300,309,317,322,325,324,325,318,315,308,292,287,270,254,242,218,211,189,176,155,144,121,116,92,78,75,62,60,50,44,42,38,39,35,43,44,51,54,66,71,83,92,105,125,135,159,175,191,218,243,263,280,303,323,346,362,388,398,407,420,430,440,434,432,430,429,419,403,391,376,354,339,310,295,270,247,226,208,188,168,146,124,117,97,84,71,63,47,47,36,28,25,22,20,11,12,10,4,14,5,4,8,0,5,0,0,2,1,1,1,2,2,3,0,2,0,1,1,1,0,1,3,0,2,0,6,4,3,0,2,0,2,1,0,4,0,0,0,0,2,2,2,1,1,1,3,1,2,0,0,3,1,1,6,0,4,1,1,2,3,6,0,3,4,1,1,3,0,2,0,0,1,3,0,0,0,2,2,2,2,4,4,1,1,3,0,0,1,4,3,4,4,3,5,6,6,11,14,22,19,24,29,27,38,49,56,66,74,82,89,108,123,134,156,180,194,213,235,253,275,301,319,347,361,387,402,422,441,448,463,463,465,476,473,476,473,463,452,441,422,409,393,370,350,328,313,287,261,241,217,199,178,162,141,131,110,101,90,73,62,55,50,38,32,24,24,20,15,13,9,9,8,4,8,3,2,2,2,3,0,2,0,11,5,3,3,0,2,0,5,0,4,1,0,2,1,2,1,1,1,7,4,0,2,1,2,1,1,4,0,4,3,1,1,2,3,1,2,2,5,0,4,3,0,0,2,5,4,1,1,0,1,4,0,4,10,6,2,6,8,16,15,14,17,22,24,21,30,33,37,44,44,57,61,69,72,89,89,96,105,110,124,131,139,149,151,161,170,171,176,177,185,188,183,192,188,184,187,177,173,170,167,160,148,151,139,129,122,115,113,103,92,87,82,74,63,62,56,46,38,38,35,25,23,19,13,18,14,12,8,11,6,4,6,8
,3,8,3,6,10,11,15,9,14,19,24,30,33,35,38,48,47,51,60,71,82,87,92,102,110,124,135,136,151,156,174,179,195,200,208,214,219,224,228,234,239,236,239,237,237,228,229,221,209,208,202,192,184,174,165,157,151,141,126,119,113,110,92,78,74,65,63,57,50,41,37,37,28,27,23,21,14,17,16,14,6,4,10,0,4,3,7,0,0,2,3,2,3,0,0,1,1,5,4,1,2,0,1,5,4,3,5,2,5,3,4,1,1,3,0,1,4,0,5,4,4,2,6,2,3,3,3,3,1,1,1,1,6,4,0,1,0,1,2,3,1,0,0,2,4,1,2,2,1,4,0,0,0,4,3,0,5,0,1,2,0,2,7,0,1,3,5,0,6,4,0,0,7,3,2,2,2,2,0,2,1,1,2,2,5,0,0,2,0,0,0,0,0,0,4,0,0,2,1,1,3,1,1,3,3,0,4,4,2,2,1,3,1,0,3,7,1,3,2,2,5,0,3,2,1,1,0,3,1,5,2,2,0,3,5,9,0,2,0,3,1,2,2,0,3,0,4,7,5,4,1,6,1,3,1,3,1,2,1};

uint32_t i = 0;

void setup() 
{
  // Setup the LED pin as an output
  pinMode( ledpin, OUTPUT );
  // Setup the IRQ pin as an input (pulled high)
  pinMode( irqpin, INPUT_PULLUP );
  // Attach 'blink' as the interrupt handler when IRQ pin changes
  // Note: Can be set to RISING, FALLING or CHANGE
  attachInterrupt( irqpin, blink, CHANGE );

  dac_init(DAC, DAC_CH1);     // Initialize DAC channel 1
  dac_enable_channel(DAC, 1); // Enable DAC1, which drives pin A4
}

void loop() 
{
  // Set the LED to the current led state
  digitalWrite(ledpin, ledstate);
}

void blink() 
{
  ledstate = !ledstate;
  i=0;
  for(i = 0; i<1000; i++){
    dac_write_channel(DAC, DAC_CH1, spec[i]+random(10, 500));
    delayMicroseconds(20);
  }
}

Determining Proton Affinities using psi4

This is the third post in a series outlining a workflow that uses freely available computational chemistry resources with python interfaces to evaluate properties of gas-phase ions. A cursory search shows a variety of computational packages with a direct python interface, but interestingly, not all of them are current. PySCF appears to be a solid choice; however, some of the documentation/examples do not provide a direct means to calculate thermochemistry. GAMESS is another option, but the python wrapper for this system has not been updated in almost a year and appears compatible only with select python 2.7 installations. After testing these options, it became clear that psi4 provided a tractable approach to optimize the geometry of molecules followed by a detailed thermochemical and frequency evaluation. The jupyter notebook (ipynb) illustrates the mechanism to not only optimize the geometry of water, but also determine the proton affinity. This latter property remains essential for describing the ionization behavior of target molecules along with a host of other chemical properties. Many literature reports present a more detailed treatment of the energy terms; however, as a first pass this workflow yields a result in good agreement with the literature value for water.
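As a back-of-the-envelope check on the notebook's bookkeeping: for B + H⁺ → BH⁺ the proton affinity is PA = H(B) + H(H⁺) − H(BH⁺), where the bare proton contributes only its translational enthalpy, (5/2)RT. The sketch below runs that arithmetic with illustrative placeholder enthalpies — the −76.256 and −76.523 hartree values are hypothetical, not psi4 output; in practice both come from psi4's optimization and frequency analysis:

```python
# Unit conversions and constants.
HARTREE_TO_KJ = 2625.4996  # kJ/mol per hartree
R = 8.31446e-3             # gas constant, kJ/(mol*K)
T = 298.15                 # K

# HYPOTHETICAL total enthalpies (hartree) for illustration only;
# substitute the values from psi4's thermochemical analysis.
H_water = -76.256      # H2O  (placeholder)
H_hydronium = -76.523  # H3O+ (placeholder)

# The bare proton has no electronic or vibrational terms; at 298 K it
# contributes only its translational enthalpy, (5/2)RT ~ 6.2 kJ/mol.
H_proton = 2.5 * R * T

# PA = H(B) + H(H+) - H(BH+), converted to kJ/mol.
pa_kj = (H_water - H_hydronium) * HARTREE_TO_KJ + H_proton
```

With real psi4 enthalpies in place of the placeholders, this expression is what gets compared against the literature proton affinity of water (~691 kJ/mol).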

Geometry Optimization in Python

This is the second post in a series aimed at generating a range of candidate structures for evaluation in the context of molecular modeling in the field of ion mobility spectrometry. In a previous post, the use of rdkit to generate structures was introduced. However, closer inspection of that code highlights a few function calls aimed at optimizing the conformer structures. Given that the tetraalkylammonium ions were the focus of that effort, the optimization step was quite rapid, which raised the question of whether any geometry optimization was actually being performed. In the following jupyter notebook, ibuprofen generated from SMILES input is optimized using the same function call found in the previous post. This degree of optimization does not reach the level needed for more advanced calculations but can be a decent start when trying to group the different conformers into structural families.

Required python modules include: rdkit

Optional modules: pymol and an instance of this program running as a server.
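To make the "structural families" idea concrete, the toy sketch below groups conformers by a simple greedy RMSD threshold using nothing but numpy. The 4-atom coordinates are hypothetical stand-ins, and the plain coordinate RMSD deliberately skips the superposition step that rdkit's GetBestRMS would perform:

```python
import numpy as np

def rmsd(a, b):
    # Plain coordinate RMSD between two (n_atoms, 3) arrays; no alignment.
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def group_conformers(coords, threshold=1.0):
    """Greedy clustering: each conformer joins the first family whose
    representative is within `threshold` RMSD; otherwise it seeds a new one."""
    families = []  # list of (representative coords, member indices)
    for i, c in enumerate(coords):
        for rep, members in families:
            if rmsd(c, rep) < threshold:
                members.append(i)
                break
        else:
            families.append((c, [i]))
    return families

# Toy demo with three hypothetical 4-atom "conformers": two nearly
# identical, one displaced well past the threshold.
base = np.array([[0.0, 0.0, 0.0],
                 [1.5, 0.0, 0.0],
                 [0.0, 1.5, 0.0],
                 [0.0, 0.0, 1.5]])
coords = [base, base + 0.05, base + 5.0]
families = group_conformers(coords, threshold=1.0)
```

For real conformer sets one would feed in aligned coordinates (or swap `rmsd` for rdkit's aligned RMSD) and tune the threshold to the molecule's flexibility.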

Conformational Searching using Python

This is the first of a series examining the use of python to generate candidate structures of molecules. These conformations may serve a variety of functions, though our particular purpose is to identify candidates for additional optimization and ultimate use in ion mobility modeling experiments. After considering a range of tools (e.g. Avogadro or ChemDraw), it was apparent that a more automated, open-source workflow was needed. In full disclosure, there are surely other mechanisms to make this happen, but the following jupyter notebook is a reasonable approach. Visualization of the conformers can be accomplished using pymol if that module is installed and a server instance is running in the background (i.e. pymol -R).

Savitzky-Golay Smoothing GUI

Simple Smoothing GUI

In an effort to create a set of simple tools useful for data processing and realtime analysis, we've been exploring a range of options. Granted, there are a number of canned solutions in existence (e.g. National Instruments); however, to avoid the long-term challenges of compatibility we are looking for tools that can better serve our research goals. Two packages that we've begun to lean more heavily upon are pyqtgraph and guidata. Both use PyQt4 and are compatible with PySide for GUI rendering and construction. Matplotlib is quite mature, but it has been our experience that pyqtgraph is quite a bit faster for plotting data in realtime.

The code below integrates pyqtgraph directly into the guidata framework. This is not a huge stretch, as the pyqtgraph widgets integrate directly with the QWidget class in PyQt4. For those looking for an example, the following code illustrates how to integrate one of these plots and update it using simulated data, along with the ability to alter the smoothing parameters of the raw data on the fly. One might envision using this approach to capture data from a streaming device (more on that later). It should be noted that the file loading feature has been disabled, but it wouldn't be a huge stretch to re-enable this functionality for single spectra.


# -*- coding: utf-8 -*-
# Adapted from guidata examples:
# Copyright © 2009-2010 CEA
# Pierre Raybaut
# Licensed under the terms of the CECILL License
# (see guidata/__init__.py for details)
# Adapted by Brian Clowers brian.clowers@wsu.edu

"""
DataSetEditGroupBox and DataSetShowGroupBox demo

These group box widgets are intended to be integrated in a GUI application
layout, showing read-only parameter sets or allowing to edit parameter values.
"""

SHOW = True # Show test in GUI-based test launcher

import tempfile, atexit, shutil, datetime, numpy as N

from guidata.qt.QtGui import QMainWindow, QSplitter
from guidata.qt.QtCore import SIGNAL, QTimer
from guidata.qt import QtCore

from guidata.dataset.datatypes import (DataSet, BeginGroup, EndGroup, BeginTabGroup, EndTabGroup)
from guidata.dataset.dataitems import (FloatItem, IntItem, BoolItem, ChoiceItem, MultipleChoiceItem, ImageChoiceItem, FilesOpenItem, StringItem, TextItem, ColorItem, FileSaveItem, FileOpenItem, DirectoryItem, FloatArrayItem, DateItem, DateTimeItem)
from guidata.dataset.qtwidgets import DataSetShowGroupBox, DataSetEditGroupBox
from guidata.configtools import get_icon
from guidata.qthelpers import create_action, add_actions, get_std_icon

# Local test import:
from guidata.tests.activable_dataset import ExampleDataSet

import sys, os
import pyqtgraph as PG

#-----------------------------------
def simpleSmooth(fileName, polyOrder, pointLength, plotSmoothed = False, saveSmoothed = True):
    if not os.path.isfile(fileName):
        return False
    rawArray = get_ascii_data(fileName)
    #savitzky_golay(data, kernel = 11, order = 4)
    smoothArray = savitzky_golay(rawArray, kernel = pointLength, order = polyOrder)
    if plotSmoothed:
        plot_smoothed(smoothArray, rawArray, True)

    if saveSmoothed:
        # newFileName is only defined here, so the save must stay inside this branch
        newFileName = fileName.split(".")[0]
        newFileName += "_smth.csv"
        N.savetxt(newFileName, smoothArray, delimiter = ',', fmt = '%.4f')

    return smoothArray

#-----------------------------------

def get_ascii_data(filename):
    data_spectrum=N.loadtxt(filename,delimiter = ',', skiprows=0)##remember to change this depending on file format
    return data_spectrum

#-----------------------------------
def savitzky_golay(data, kernel = 11, order = 4):
    """
    Applies a Savitzky-Golay filter.
    Input parameters:
    - data => data as a 1D numpy array
    - kernel => a positive odd integer > 2*order giving the kernel size
    - order => order of the polynomial
    Returns smoothed data as a numpy array.

    Invoke like:
    smoothed = savitzky_golay(<rough>, [kernel = value], [order = value])

    From the scipy website.
    """
    try:
        kernel = abs(int(kernel))
        order = abs(int(order))
    except ValueError, msg:
        raise ValueError("kernel and order have to be of type int (floats will be converted).")
    if kernel % 2 != 1 or kernel < 1:
        raise TypeError("kernel size must be a positive odd number, was: %d" % kernel)
    if kernel < order + 2:
        raise TypeError("kernel is too small for the polynomial\nshould be > order + 2")

    # a second order polynomial has 3 coefficients
    order_range = range(order + 1)
    half_window = (kernel - 1) // 2
    b = N.mat([[k**i for i in order_range] for k in range(-half_window, half_window + 1)])
    # since we don't want the derivative, else choose [1] or [2], respectively
    m = N.linalg.pinv(b).A[0]
    window_size = len(m)
    half_window = (window_size - 1) // 2

    # precompute the offset values for better performance
    offsets = range(-half_window, half_window + 1)
    offset_data = zip(offsets, m)

    smooth_data = list()

    # temporary data, padded with the first/last values
    # (since we want the same length after smoothing)
    #data = numpy.concatenate((numpy.zeros(half_window), data, numpy.zeros(half_window)))
    firstval = data[0]
    lastval = data[len(data) - 1]
    data = N.concatenate((N.zeros(half_window) + firstval, data, N.zeros(half_window) + lastval))

    for i in range(half_window, len(data) - half_window):
        value = 0.0
        for offset, weight in offset_data:
            value += weight * data[i + offset]
        smooth_data.append(value)
    return N.array(smooth_data)

#-----------------------------------

def first_derivative(y_data):
    """Calculates the first derivative (unit spacing assumed)."""
    y = (y_data[1:] - y_data[:-1])

    dy = y / 2  # ((x_data[1:]-x_data[:-1])/2)

    return dy

#-----------------------------------
class SmoothGUI(DataSet):
    """
    Simple Smoother
    A simple application for smoothing a 1D text file at this stage.
    Follows the KISS principle.
    """
    fname = FileOpenItem("Open file", ("txt", "csv"), "")

    kernel = FloatItem("Smooth Point Length", default=7, min=1, max=101, step=2, slider=True)
    order = IntItem("Polynomial Order", default=3, min=3, max=17, slider=True)
    saveBool = BoolItem("Save Plot Output", default = True)
    plotBool = BoolItem("Plot Smoothed", default = True).set_pos(col=1)
    #color = ColorItem("Color", default="red")
 
#-----------------------------------
class MainWindow(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        self.setWindowIcon(get_icon('python.png'))
        self.setWindowTitle("Simple Smoother")

        # Instantiate dataset-related widgets:
        self.smoothGB = DataSetEditGroupBox("Smooth Parameters",
                                            SmoothGUI, comment='')

        self.connect(self.smoothGB, SIGNAL("apply_button_clicked()"),
                     self.update_window)

        self.fileName = ''

        self.kernel = 15
        self.order = 3
        self.pw = PG.PlotWidget(name='Plot1')
        self.pw.showGrid(x=True, y=True)

        self.p1 = self.pw.plot()
        self.p1.setPen('g', alpha=1.0)  # Does alpha even do anything?
        self.p2 = self.pw.plot(pen='y')
        self.pw.setLabel('left', 'Value', units='V')
        self.pw.setLabel('bottom', 'Time', units='s')

        splitter = QSplitter(QtCore.Qt.Vertical, parent=self)

        splitter.addWidget(self.smoothGB)
        splitter.addWidget(self.pw)
        self.setCentralWidget(splitter)
        self.setContentsMargins(10, 5, 10, 5)

        # File menu
        file_menu = self.menuBar().addMenu("File")
        quit_action = create_action(self, "Quit",
                                    shortcut="Ctrl+Q",
                                    icon=get_std_icon("DialogCloseButton"),
                                    tip="Quit application",
                                    triggered=self.close)
        add_actions(file_menu, (quit_action, ))

        ## Start a timer to rapidly update the plot in pw
        self.t = QTimer()
        self.t.timeout.connect(self.updateData)
        self.t.start(1000)

    def rand(self, n):
        data = N.random.random(n)
        data[int(n*0.1):int(n*0.23)] += .5
        data[int(n*0.18):int(n*0.25)] += 1
        data[int(n*0.1):int(n*0.13)] *= 2.5
        data[int(n*0.18)] *= 2
        data *= 1e-12
        return data, N.arange(n, n+len(data)) / float(n)

    def updateData(self):
        yd, xd = self.rand(100)
        ydSmooth = savitzky_golay(yd, kernel=self.kernel, order=self.order)

        if self.smoothGB.dataset.plotBool:
            self.p2.setData(y=ydSmooth, x=xd, clear=True)
            self.p1.setData(y=yd*-1, x=xd, clear=True)
        else:
            self.p1.setData(y=yd, x=xd, clear=True)
            self.p2.setData(y=[yd[0]], x=[xd[0]], clear=True)

        if self.smoothGB.dataset.saveBool:
            if os.path.isfile(self.fileName):
                newFileName = self.fileName.split(".")[0]
            else:
                newFileName = "test"
            newFileName += "_smth.csv"

            N.savetxt(newFileName, ydSmooth, delimiter=',')#, fmt = '%.4f')

    def update_window(self):
        dataset = self.smoothGB.dataset
        self.order = dataset.order
        self.kernel = dataset.kernel
        self.fileName = dataset.fname


if __name__ == '__main__':
    from guidata.qt.QtGui import QApplication
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())

Gantt Charts in Matplotlib

Love it or hate it, the lack of tractable options for creating Gantt charts warrants frustration at times.  A recent post on Bitbucket provides a nice implementation using matplotlib and python as a platform.  In order to expand the basic functionality, a few modifications enable a set of features that highlight the relative contributions of the team participants.  In the example provided above, the broad tasks are indicated in blue while the two inset bars (red: PI and yellow: student) illustrate the percent effort.  See the source below for the details.

"""
Creates a simple Gantt chart
Adapted from https://bitbucket.org/DBrent/phd/src/1d1c5444d2ba2ee3918e0dfd5e886eaeeee49eec/visualisation/plot_gantt.py
BHC 2014
"""

import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import matplotlib.dates
from matplotlib.dates import MONTHLY, DateFormatter, rrulewrapper, RRuleLocator

from pylab import *

def create_date(month, year):
    """Creates the date"""

    date = dt.datetime(int(year), int(month), 1)
    mdate = matplotlib.dates.date2num(date)

    return mdate

# Data

pos = arange(0.5,5.5,0.5)

ylabels = []
ylabels.append('Hardware Design & Review')
ylabels.append('Hardware Construction')
ylabels.append('Integrate and Test Laser Source')
ylabels.append('Objective #1')
ylabels.append('Objective #2')
ylabels.append('Present at ASMS')
ylabels.append('Present Data at Gordon Conference')
ylabels.append('Manuscripts and Final Report')

effort = []
effort.append([0.2, 1.0])
effort.append([0.2, 1.0])
effort.append([0.2, 1.0])
effort.append([0.3, 0.75])
effort.append([0.25, 0.75])
effort.append([0.3, 0.75])
effort.append([0.5, 0.5])
effort.append([0.7, 0.4])

customDates = []
customDates.append([create_date(5,2014),create_date(6,2014)])
customDates.append([create_date(6,2014),create_date(8,2014),create_date(8,2014)])
customDates.append([create_date(7,2014),create_date(9,2014),create_date(9,2014)])
customDates.append([create_date(10,2014),create_date(3,2015),create_date(3,2015)])
customDates.append([create_date(2,2015),create_date(6,2015),create_date(6,2015)])
customDates.append([create_date(5,2015),create_date(6,2015),create_date(6,2015)])
customDates.append([create_date(6,2015),create_date(7,2015),create_date(7,2015)])
customDates.append([create_date(4,2015),create_date(8,2015),create_date(8,2015)])

task_dates = {}
for i, task in enumerate(ylabels):
    task_dates[task] = customDates[i]
# task_dates['Climatology'] = [create_date(5,2014),create_date(6,2014),create_date(10,2013)]
# task_dates['Structure'] = [create_date(10,2013),create_date(3,2014),create_date(5,2014)]
# task_dates['Impacts'] = [create_date(5,2014),create_date(12,2014),create_date(2,2015)]
# task_dates['Thesis'] = [create_date(2,2015),create_date(5,2015)]

# Initialise plot

fig = plt.figure()
# ax = fig.add_axes([0.15,0.2,0.75,0.3]) #[left,bottom,width,height]
ax = fig.add_subplot(111)

# Plot the data

start_date,end_date = task_dates[ylabels[0]]
ax.barh(0.5, end_date - start_date, left=start_date, height=0.3, align='center', color='blue', alpha = 0.75)
ax.barh(0.45, (end_date - start_date)*effort[0][0], left=start_date, height=0.1, align='center', color='red', alpha = 0.75, label = "PI Effort")
ax.barh(0.55, (end_date - start_date)*effort[0][1], left=start_date, height=0.1, align='center', color='yellow', alpha = 0.75, label = "Student Effort")
for i in range(0, len(ylabels)-1):
    labels = ['Analysis', 'Reporting'] if i == 1 else [None, None]
    start_date, mid_date, end_date = task_dates[ylabels[i+1]]
    piEffort, studentEffort = effort[i+1]
    ax.barh((i*0.5)+1.0, mid_date - start_date, left=start_date, height=0.3, align='center', color='blue', alpha=0.75)
    ax.barh((i*0.5)+1.0-0.05, (mid_date - start_date)*piEffort, left=start_date, height=0.1, align='center', color='red', alpha=0.75)
    ax.barh((i*0.5)+1.0+0.05, (mid_date - start_date)*studentEffort, left=start_date, height=0.1, align='center', color='yellow', alpha=0.75)
    # ax.barh((i*0.5)+1.0, end_date - mid_date, left=mid_date, height=0.3, align='center', label=labels[1], color='yellow')

# Format the y-axis

locsy, labelsy = yticks(pos,ylabels)
plt.setp(labelsy, fontsize = 14)

# Format the x-axis

ax.axis('tight')
ax.set_ylim(ymin = -0.1, ymax = 4.5)
ax.grid(color = 'g', linestyle = ':')

ax.xaxis_date() #Tell matplotlib that these are dates...

rule = rrulewrapper(MONTHLY, interval=1)
loc = RRuleLocator(rule)
formatter = DateFormatter("%b '%y")

ax.xaxis.set_major_locator(loc)
ax.xaxis.set_major_formatter(formatter)
labelsx = ax.get_xticklabels()
plt.setp(labelsx, rotation=30, fontsize=12)

# Format the legend

font = font_manager.FontProperties(size='small')
ax.legend(loc=1,prop=font)

# Finish up
ax.invert_yaxis()
fig.autofmt_xdate()
#plt.savefig('gantt.svg')
plt.show()

XKCD-style Plots in Matplotlib

Now incorporated directly into the latest version of matplotlib (v1.3), here is a great alternative that brings some style to your plotting routines. I haven't tried it out on plots with a huge number of points, but I imagine it should work just fine.  Below are some simple examples.  It's as simple as matplotlib.pyplot.xkcd()…
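For anyone who wants to reproduce the look, here is a minimal sketch using plt.xkcd() as a context manager so only this one figure gets the hand-drawn style. The stepped pseudo-random sequence and the output filename are illustrative stand-ins, not the data behind the figures above:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is required
import matplotlib.pyplot as plt
import numpy as np

# A hypothetical pseudo-random binary sequence to plot.
rng = np.random.default_rng(1)
prs = rng.integers(0, 2, 64)

# Everything inside the context manager is drawn xkcd-style;
# plots created outside it keep the default look.
with plt.xkcd():
    fig, ax = plt.subplots(figsize=(6, 3))
    ax.step(np.arange(prs.size), prs, where="post")
    ax.set_xlabel("sample")
    ax.set_ylabel("level")
    ax.set_title("Pseudo-random sequence")
    fig.savefig("prs_xkcd.png")
```

If the Humor Sans font isn't installed, matplotlib falls back to a default font with a warning, but the wobbly line style still applies.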

Pseudo-Random Sequence with XKCD:

PRS_XKCD

No XKCD:

PRS_noXKCD

Cheers Jake Vanderplas:  http://jakevdp.github.com/blog/2012/10/07/xkcd-style-plots-in-matplotlib/

More Examples:  http://matplotlib.org/xkcd/examples/showcase/xkcd.html