Donnerstag, 21. März 2013

Automated pubmed searching with RSS

If you're regularly searching pubmed on a particular topic, you can make your life a lot more comfortable with automated searches. All you need is a web browser and an RSS aggregator, such as the soon-to-fade-out Google Reader, feedly, or a Tiny Tiny RSS instance (if you're geek enough to set one up for this purpose).

Here's how it works:

  1. Go to pubmed.
  2. Type in your favorite search query.
  3. Click "Search".
  4. Once the results appear, click the "RSS" button below the query field.
  5. Adjust the settings to your liking and click "Create RSS".
  6. Use the xml-link to subscribe to this search in your favorite aggregator.
Et voilà, you're done! The next time you log in to your feed aggregator, you'll see all the new articles that match the query. The cool thing is, you don't have to run the search on a regular basis anymore - your feed aggregator will keep track of all the new results as they tumble in via the feed. Unread items will be highlighted. If you use a mobile client, you can use the time on the train to browse through the list and star interesting articles for later reading. 
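If you'd rather take a quick peek at a feed from the command line than open an aggregator, the article titles are easy to pull out of the feed XML. A minimal sketch, using a toy feed in place of the real xml-link (in practice you'd download the feed first, e.g. with curl):

```shell
# Stand-in for the PubMed feed; normally you'd fetch the real one, e.g.:
#   curl -s "$FEED_URL" > feed.xml   # FEED_URL = the xml-link from step 6
cat > feed.xml <<'EOF'
<?xml version="1.0"?>
<rss><channel>
  <item><title>Olfactory coding in the honeybee</title></item>
  <item><title>Odor representations in the antennal lobe</title></item>
</channel></rss>
EOF

# List the article titles, one per line.
grep -o '<title>[^<]*</title>' feed.xml | sed 's/<[^>]*>//g'
```

Real PubMed feeds carry much more markup per item, so an aggregator (or a proper XML parser) remains the better tool; this is just for a quick skim.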

You can take this a step further, and subscribe to TOCs from your favorite journals. Never face a pile of unread TOC alerts in your inbox again. Every decent journal out there offers an RSS feed. 

Replace Google Reader with Tiny Tiny RSS

Google Reader was an essential part of my workflow for keeping up with the literature. I had configured several PubMed searches as RSS feeds. For example, I created an automated search for olf*. Hence, every time a new article containing that term appeared on pubmed, it would show up in my Google Reader. That was extremely convenient: I could simply log in to Google Reader whenever I felt like reading new stuff, browse the feed with the pubmed query, star interesting abstracts, download those articles and push them to the cloud storage where I keep my PDFs. 
Now Google has announced that it will shut down the Reader service. Booooooh, bad Google! However, I can't afford to let the decision of some internet company disrupt my workflow ;) So I looked for alternatives and found Tiny Tiny RSS. TT-RSS is a server-side application to be installed on a webserver. It provides a similar service to Google Reader, and there's even an Android client!
It took me two hours to set it all up, including dusting off my MySQL knowledge and learning how to use systemd. I mostly followed the install instructions from the TT-RSS website. The only things that took some time were figuring out how to set up a MySQL database for ttrss and configuring systemd to run the update daemon. In the end it was pretty straightforward. 
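For the record, the database step boils down to a few SQL statements. A minimal sketch - database name, user and password here are placeholders, not taken from my actual setup:

```shell
# Write the setup statements to a file; db name, user and password are
# placeholders -- adjust them before use.
cat > ttrss-setup.sql <<'EOF'
CREATE DATABASE ttrss CHARACTER SET utf8;
CREATE USER 'ttrss'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON ttrss.* TO 'ttrss'@'localhost';
FLUSH PRIVILEGES;
EOF

# Then feed them to MySQL as the root user (not run here):
#   mysql -u root -p < ttrss-setup.sql
```

The TT-RSS installer then just needs the database name, user and password in its config.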

Now my workflow's saved! Happy! ;) 

Note to self: here's the systemd service file required to start the update daemon for ttrss. Store it at /lib/systemd/system/ttrss-update.service and enable it permanently using systemctl enable ttrss-update.service.

[Unit]
Description=Update daemon for ttrss
After=network.target mysql.service

[Service]
ExecStart=/usr/bin/php /srv/www/htdocs/feeds/update.php --daemon
Restart=always

[Install]
WantedBy=multi-user.target



!!! Update !!! (April 12, 2013): the service file was missing the second hyphen in --daemon, which caused the update to run exactly once on startup and never again. Fixed.

Second update (Oct 30, 2013): added a Restart flag to the service file, which takes care of firing the update daemon up again if it exits.

Freitag, 6. Juli 2012

Migrating from SVN to Git

As I'm waiting for Martin to give me feedback on my most recent manuscript on a neuromorphic classifier, I had some time to do things I had wanted to do for a long time but always pushed back because there was something more important. In addition, today was Friday, and a very hot Friday in particular, so normal thinking was hardly possible: the perfect day for playing around with some nerd stuff like migrating my version control repository from SVN to Git!
I maintain code, figures, documents and all other stuff relevant to one project in a version control repository. I work on different machines remotely, e.g., the gaia cluster at FU, the server at KIP Heidelberg to which the neuromorphic hardware is connected, our local numbercrunchers and so on. I used to have a central SVN repository on our server, but this became increasingly cumbersome as I had to deal with firewalls, limited bandwidth and working offline.
Briefly, I used two step-by-step guides: the one from John Albin, and the one in the Git Book. Both will essentially get you there, but the former has a slightly more elegant solution for transforming the authors, while the latter contains more detail on what is actually going on. In addition, the Git Book is a good read to bridge the long, long time it takes to fully convert a decent-sized SVN repository to Git. After all, if you migrate from SVN to Git you really want to know about the fundamental differences between Git and VCSs like SVN, and even more so about the way Git stores your data.
Finally, where SVN needed 3.1 GB for the repo alone and another 8.5 GB for the checkout, I ended up with a Git repository that contains the full history and the current version of all the stuff in a mere 2.9 GB. Wow. The entire process took three hours or so (plus some more hours of trial and error before ;) ). Now I'm hoping that Git will make it easier to keep all my data in sync across servers, irrespective of firewalls or stubborn SVN clients insisting on correct https certificates, as long as I can ssh into them. But that's something for another day.
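The authors-mapping step from John Albin's guide can be sketched like this. Usernames, paths and the repository URL below are made up for illustration; the real list would come from running svn log on your own repository:

```shell
# Collect the unique SVN usernames. Normally the input would come from:
#   svn log --quiet https://svn.example.com/repo | grep '^r'
printf '%s\n' \
  'r101 | mschmuker | 2012-07-06 10:00:00 +0200' \
  'r102 | build | 2012-07-06 11:00:00 +0200' |
  awk -F' \\| ' '{print $2}' | sort -u > svn-authors.txt

# Turn each username into a Git-style "name <email>" mapping.
sed 's/.*/& = & <&@example.com>/' svn-authors.txt > authors-transform.txt
cat authors-transform.txt

# With the mapping in place, the conversion itself is one (long-running)
# command:
#   git svn clone --stdlayout -A authors-transform.txt \
#       https://svn.example.com/repo repo-git
```

You'd then edit authors-transform.txt by hand to put real names and addresses on the right-hand side before starting the clone.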
So finally this hot and humid Friday came to a productive end! Time to celebrate :) 

Donnerstag, 28. Juni 2012

From netbeans to eclipse for python development

Netbeans has been my favorite IDE for a very long time. It was unbeatable when I was still developing in Java, and when I switched to Python for my main work it also provided support for it. But lately the netbeans community seems to have abandoned Python - there is no built-in Python support for any release of the 7.x series, and the available plugins are hackish and lack some of the functionality that was still present in 6.9.
So it was clear: Python has no future in netbeans. And this meant that netbeans had no future as my IDE ;)
I was inclined to actually believe the hype about eclipse - all eclipse users I know are very enthusiastic about telling you how great their IDE is. I already tried it several times, but never really got into it, always falling back to netbeans 6.9. With the recent release of Eclipse Juno I decided to give it another shot. 
Installation is no problem: you just unpack the zipped file and make sure the binary is in your PATH. Netbeans had this nice installer which creates directories for you and stuff, but that's not really an important point. Installation of plugins was a bit rough, and Netbeans gives you a much smoother experience in this regard. But hey, it's a developer tool, and a Real Programmer should be able to figure it out, right? ;)
What I didn't understand was the concept of the workspace, which eclipse forces you to choose on every startup. What the hell was so important about that workspace to justify asking me every time which one I want to use? I never figured it out until David Higgins explained it to me at a recent workshop: the workspace is simply a place where your settings are stored, and has nothing to do with where your projects are located. For example, it allows you to use different settings when you're developing in Java and Python, or C++, for that matter. So I checked that "Don't ask anymore" box and was done with workspace selection ;) .
I had to learn a few new shortcuts and customized others to match Netbeans, which went smoothly once I discovered that hitting Ctrl-Shift-L twice gets me directly into the Keyboard Shortcuts dialog, which sports a nice filter widget to quickly find any command you'd like Eclipse to perform. On the downside, I had to say goodbye to nice things like a new, separate console popping up for every process I start, and the more straightforward integration of SVN in Netbeans. But then again, eclipse embraces Python while Netbeans just dumped support for it, so the decision to migrate is a no-brainer. And I switched to Git anyway ;) 

Donnerstag, 19. April 2012

G-Node workshop on neuronal GPU computing: symposium

Last week, the G-Node workshop on neuronal GPU computing took place at LMU Munich. I had the pleasure of organizing the scientific part, while Christian Kellner and Thomas Wachtler from G-Node did an extremely good job taking care of the local organization (with solid support from lovely Manuela Brandenburg).
We had a one-day symposium with talks, followed by a two-day hands-on developer workshop.
First speaker of the symposium was Romain Brette from ENS Paris, who presented a graph-theoretical approach to optimizing the memory arrangement of neuronal networks for efficient simulation on GPUs.
Giovanni Idili from the OpenWorm project presented their work on a simulation of C. elegans. The cool thing about the project is that the simulation is not restricted to the neuronal part of the worm, but actually links worm physics to worm physiology. Very interesting work, as it is one of the still very few approaches that aim at embodiment of a simulated neuronal network. I also learnt a lot about software engineering, as Giovanni described their approach to linking the various simulations using a service-oriented architecture based on OSGi bundles.
Afterwards, Dave Higgins from ENS Paris gave an account of how he learnt to use OpenCL the hard way. It was interesting to see what OpenCL requires you to do that CUDA doesn't. He also paved the way for the code generation talks by Thomas Nowotny and Damien Drix in the afternoon.
The last speaker of the morning session was Javier Baladron from Olivier Faugeras' group at INRIA Sophia-Antipolis. His project dealt with stochastic neuronal models, in which random number generation soon turned out to be the major bottleneck. They overcame it by using the cuRAND library on a cluster of 14 (fourteen!) GPUs.
After lunch, Dan Goodman gave a short and to-the-point overview on the use of GPU computing in the brian simulator, including the integration of the NeMo Simulator and the model fitting toolbox.
Andreas Fidjeland (Imperial College London), the developer of NeMo presented his algorithm for efficient delivery of synaptic events on GPUs, considering memory layout and access patterns. He also showed some convincing benchmark data.
After a short coffee break, Thomas Nowotny (Sussex University) gave a very nice talk on his experience with neuronal GPU computing. Thomas' first encounter with neuronal GPU computing dates back to 2009 (or was it 2007?), and he has experienced all the improvements made on the software and hardware level since then. It was also very instructive to see how he ended up doing code generation for GPUs instead of trying to pass arguments to kernels, somehow complementing Dave's talk from the morning.
Damien Drix from Eilif Muller's group at EPFL Lausanne showed an impressive code generation framework (including a nice GUI with sliders and all :) ) that converts NeuroML into code to be compiled and executed on GPUs.
The final slot of the symposium belonged to Pierre Yger (Imperial College London), who demonstrated the power of PyNN for GPU-based neuronal simulations. He nicely laid out the advantage of being able to validate (or falsify!) your simulation results by running the exact same network on different simulators. One of the most impressive things he showed was how strongly some simulations were affected by factors like the timestep used for numerical integration, the integration algorithm itself, or even details of the implementation of models and simulators.

Sonntag, 14. August 2011

Interview on has published an interview with me. is a relatively new platform which is about people in science, with a special focus on young scientists. It also has a "Future" section with interesting graphs... still growing, but interesting to watch anyhow. That part reminded me a bit of Seed Magazine, a magazine about science culture, the print version of which I subscribed to some time ago and which I enjoyed quite a lot. I still read their RSS feed occasionally.

Donnerstag, 7. Juli 2011


Happy Matlab licensing trouble - yay!

We have 5 Matlab licenses in our lab. Originally, we bought them as "Concurrent" licenses, that is, Matlab allowed us to have 5 instances running in parallel, regardless of which computer they were running on. At some point, Mathworks somehow transformed these licenses into "Designated Computer" licenses, that is, each license must be associated with a designated computer and will only run on that machine. Although this was an obviously bad deal, we didn't care too much about the change back then, since we were busy doing more important stuff than caring about licensing issues.

Anyway, Matlab is a dying species in our lab since most of us are using Python for scientific computing, except for a few legacy scripts. But every now and then, I need to run one of those legacy scripts.

I do most of my development on my laptop, but for numbercrunching I use our compute server. Hence, I need my computing environment on both machines, although not necessarily at the same time. I had one of these designated computer licenses, and thanks to Mathworks' provident care, I was able to deactivate and reactivate it over the web when switching between computers. So I changed the designated computer a few times between those machines. Today I wanted to change again, but Mathworks wouldn't let me:

"No more machine transfers available for this license."


OK, you're forcing me to port even my old scripts to Python. Pity you. I've already spent too much time struggling with licensing issues, time which I would much rather spend on research. Goodbye, Matlab.