tag:blogger.com,1999:blog-66216548514527774872024-02-21T04:19:55.842+01:00My brain extensionThe public notebook of a computational neuroscientistmschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.comBlogger39125tag:blogger.com,1999:blog-6621654851452777487.post-838276392940791752013-03-21T14:27:00.001+01:002013-03-21T14:28:02.916+01:00Automated pubmed searching with RSSIf you're regularly searching pubmed on a particular topic, you can make your life a lot more comfortable using automated searches. All that's required is a web browser and an RSS aggregator, such as the soon-to-fade-out Google Reader, feedly, or a <a href="http://tt-rss.org/" target="_blank">Tiny Tiny RSS</a> instance (if you're geek enough to <a href="http://mybrainextension.blogspot.de/2013/03/keeping-up-with-literature-with-tiny.html" target="_blank">set one up</a> for this purpose).<br />
<br />
Here's how it works:<br />
<br />
<ol>
<li>Go to <a href="http://pubmed.gov/" target="_blank">pubmed</a>.</li>
<li>Type in your favorite search query.</li>
<li>Click "Search".</li>
<li>Once the results appear, click the "RSS" button below the query field.</li>
<li>Adjust the settings to your liking and click "Create RSS".</li>
<li>Use the xml-link to subscribe to this search in your favorite aggregator.</li>
</ol>
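Incidentally, what PubMed hands you is plain RSS 2.0 XML, so if you ever want to script the polling yourself instead of using an aggregator, a few lines of Python will do. A rough sketch — the feed URL below is a made-up placeholder, use the xml-link PubMed generates for your query:

```python
# Minimal sketch of polling a PubMed search feed without an aggregator.
# FEED_URL is a placeholder -- substitute the xml-link from PubMed's
# "Create RSS" dialog.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

FEED_URL = "https://pubmed.example.org/rss/search/yourquery"  # hypothetical URL

def parse_feed(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def fetch_new_articles(url=FEED_URL):
    """Fetch the feed and return the current list of matching articles."""
    with urlopen(url) as response:
        return parse_feed(response.read())
```

A real aggregator additionally remembers which items you have already seen; this sketch just returns whatever the feed currently contains.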
<div>
Et voilà, you're done! Next time you log in to your feed aggregator, you'll see all the new articles that match the query. The cool thing is, you don't have to check the search on a regular basis anymore - your feed aggregator will keep track of all the new results as they tumble in via the feed. Unread items will be highlighted. If you use a mobile client, you can use the time on the train to browse through the list and star interesting articles for later reading. </div>
<div>
<br /></div>
<div>
You can take this a step further, and subscribe to TOCs from your favorite journals. Never face a pile of unread TOC alerts in your inbox again. Every decent journal out there offers an RSS feed. </div>
<div>
<br /></div>
mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-59356643463744172452013-03-21T11:57:00.000+01:002013-10-31T03:07:30.811+01:00Replace Google Reader with Tiny Tiny RSSGoogle Reader was an essential part of my workflow for keeping up with the literature. I had configured several PubMed searches as RSS feeds. For example, I created an automated search for<span style="font-family: "Courier New", Courier, monospace;"> olf*</span>. Hence, every time a new article appeared on pubmed that contained<span style="font-family: inherit;"> that term, it would appear in my Google Reader. That was extremely convenient, as I could simply log in to Google Reader whenever I felt like reading new stuff, then browse the RSS with the pubmed query, star interesting abstracts, download those articles and push them to my cloud storage where I keep the PDFs. </span><br />
<span style="font-family: inherit;">Now Google has announced that it will shut down the Reader service. Booooooh, bad Google! However, I can't afford to let the decision of some internet company disrupt my workflow ;) So I looked for alternatives, and found <a href="http://tt-rss.org/" target="_blank">Tiny Tiny RSS</a>. TT-RSS is a server-side application to be installed on some webserver. It provides a similar service to Google Reader, and there's even an Android client!</span><br />
<span style="font-family: inherit;">It took me two hours to set it all up, including dusting off my MySQL knowledge and learning how to use systemd. I mostly got along with the install instructions from the TT-RSS website. The only things that took some time were figuring out how to set up a MySQL database for ttrss and configuring systemd to run the update daemon. In the end it was pretty straightforward. </span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Now my workflow's saved! Happy! ;) </span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Note to self: Here's the systemd-service file that is required to start the update daemon for ttrss. </span>To enable it, store it at <span style="font-family: "Courier New",Courier,monospace;">/lib/systemd/system/ttrss-update.service<span style="font-family: inherit;">, <span style="font-family: Arial,Helvetica,sans-serif;">and enable it permanently using</span> <span style="font-family: "Courier New",Courier,monospace;">systemctl enable ttrss-update.service<span style="font-family: inherit;">.</span></span></span></span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">[Unit]<br />Description=Update daemon for ttrss<br />After=network.target<br /><br />[Service]<br />Type=simple<br />User=wwwrun<br />Group=www<br />ExecStart=/usr/bin/php /srv/www/htdocs/feeds/update.php --daemon</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Restart=always</span><br />
<span style="font-family: "Courier New", Courier, monospace;">RestartSec=60<br /><br />[Install]<br />WantedBy=multi-user.target</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-family: Arial,Helvetica,sans-serif;"><b>!!! Update!!! (April 12, 2013):</b> the script was missing a second hyphen before </span>-update<span style="font-family: Arial,Helvetica,sans-serif;">, which caused the update to be run exactly once on startup, and never again</span>.<span style="font-family: Arial,Helvetica,sans-serif;"> Fixed.</span></span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-family: Arial,Helvetica,sans-serif;"><b>Second Update Oct 30, 2013:</b> Added Restart flag to the service that takes care of firing the update daemon up again if it exits. </span></span>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-2031548288384504702012-07-06T23:48:00.001+02:002012-07-07T00:22:43.321+02:00Migrating from SVN to GitAs I'm waiting for <a href="http://www.biologie.fu-berlin.de/en/arbeitsgruppen/neurobiologie_verhalten/ag_nawrot/people/members/nawrot/index.html" target="_blank">Martin</a> to give me feedback on my most recent manuscript on a <a href="http://precedings.nature.com/documents/6547/version/1" target="_blank">neuromorphic classifier</a>, I had some time for things I had wanted to do for a long time but had always pushed back because there was something more important. In addition, today was Friday, and a very hot Friday in particular, so normal thinking was hardly possible - the perfect day for playing around with some nerd stuff like migrating my version control repository from <a href="http://subversion.tigris.org/" target="_blank">SVN</a> to <a href="http://git-scm.com/" target="_blank">Git</a>!<br />
I maintain code, figures, documents and all other stuff relevant to one project in a version control repository. I'm working on different machines remotely, e.g., the gaia <a href="https://www.mi.fu-berlin.de/w/IT/ComputeServer" target="_blank">cluster at FU</a>, the server at <a href="http://www.kip.uni-heidelberg.de/" target="_blank">KIP Heidelberg</a> <span style="background-color: white;">to which the neuromorphic hardware is connected, our local numbercrunchers and so on. I used to have a central SVN repository on our server, but this became increasingly cumbersome the more I had to deal with firewalls, limited bandwidth and working offline.</span><br />
Briefly, I used two step-by-step guides: <a href="http://john.albin.net/git/convert-subversion-to-git" target="_blank">the one from John Albin</a>, and the <a href="http://git-scm.com/book/en/Git-and-Other-Systems-Migrating-to-Git" target="_blank">one in the Git Book</a>. Both will essentially get you there, but the first one has a slightly more elegant solution for transforming the authors, while the latter contains more details on what is actually going on. In addition, <a href="http://git-scm.com/book" target="_blank">the Git Book</a> is a good read to bridge the long, long time until a decent SVN repository is fully converted to Git. After all, if you migrate from SVN to Git you really want to know about the <a href="http://git-scm.com/book/en/Getting-Started-Git-Basics" target="_blank">fundamental differences in Git</a> compared to VCSs like SVN, and even more so <a href="http://git-scm.com/book/en/Git-Internals-Git-Objects" target="_blank">the way Git stores your data</a>.<br />
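The one step in those guides that really begs for a script is the author transform: git-svn wants an authors file mapping each SVN username to a Git-style `Name <email>` identity. Here's a rough Python sketch of that step - the function name and the email domain are my own inventions, and you'll want to edit the generated lines by hand afterwards:

```python
# Sketch: collect the unique committer usernames from `svn log --quiet`
# output and emit template lines for git-svn's --authors-file.
# The example.com domain is a placeholder.
import re

def extract_authors(svn_log_text, domain="example.com"):
    """Map each SVN username to a 'user = Name <email>' template line.

    Expects lines like 'r123 | jdoe | 2012-07-06 ... | 1 line',
    as printed by `svn log --quiet`.
    """
    authors = set()
    for line in svn_log_text.splitlines():
        m = re.match(r"r\d+ \| (\S+) \|", line)
        if m:
            authors.add(m.group(1))
    return {user: f"{user} = {user} <{user}@{domain}>"
            for user in sorted(authors)}
```

Write the dict values to `authors.txt`, fix up the real names and addresses, and pass it to `git svn clone --authors-file=authors.txt`.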
Finally, <span style="background-color: white;">where SVN needs 3.1 GB for the repo alone and another 8.5 GB for the checkout,</span><span style="background-color: white;"> I ended up with a git repository that contains the repo and the current version of all the stuff in a mere 2.9 GB. Wow. The entire process took three hours or so (plus some more hours of trial and error before ;) ). Now I'm hoping that Git will make it easier to keep all my data in sync across servers, irrespective of firewalls or stubborn svn clients insisting on correct https certificates, as long as I can ssh into them. But that's something for another day.</span><br />
<span style="background-color: white;">So finally this hot and humid Friday came to a productive end! Time to celebrate :) </span>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-49436371394428155232012-06-28T16:54:00.000+02:002012-07-07T00:24:17.241+02:00From netbeans to eclipse for python developmentNetbeans has been my favorite IDE for a very long time. It was unbeatable when I was still developing in Java, and when I switched to Python for my main work it also provided support for it. <span style="background-color: white;">But lately the netbeans community seemed to have abandoned Python - at least, there is no built-in Python support for any release of the 7.x series, and the available plugins are hackish and lack some of the functionality that was still present in 6.9.</span><br />
So it was clear: Python has no future in netbeans. And this meant that netbeans had no future as my IDE ;)<br />
I was inclined to actually believe the hype about eclipse - all eclipse users I know are very enthusiastic about telling you how great their IDE is. I had already tried it several times, but never really got into it, always falling back to netbeans 6.9. <span style="background-color: white;">With the recent release of Eclipse Juno I decided to give it another shot. </span><br />
Installation is no problem: you just have to unpack the zipped file and make sure that the binary is in your PATH. Netbeans had this nice installer which creates directories for you and stuff, but that's not really an important point. <span style="background-color: white;">Installation of plugins was a bit rough - Netbeans gives you a much smoother experience in this regard. But hey, it's a developer tool, and a <a href="http://imgs.xkcd.com/comics/real_programmers.png" target="_blank">Real Programmer</a> should be able to figure it out, right? ;)</span>
What I didn't understand was the concept of the workspace, which eclipse forces you to choose on every startup. What the hell was so important about that workspace to justify asking me all the time which one I want to use? I never figured it out until <a href="http://www.uiginn.com/" target="_blank">David Higgins</a> explained it to me at <a href="https://portal.g-node.org/gpu-workshop-2012/start" target="_blank">a recent workshop</a>: The workspace is simply a place where your settings are stored, and has nothing to do with where your projects are located. For example, it allows you to use different settings when you're developing in Java and Python, or C++, for that matter. So I checked <span style="background-color: white;">that "Don't ask anymore" box and was done with workspace selection ;) .</span>
I had to learn a few new shortcuts and customize others to be like Netbeans, which went smoothly once I discovered that hitting Ctrl-Shift-L twice gets me directly into the Keyboard Shortcuts dialog, which sports a nice filter widget to quickly identify any command you'd like Eclipse to perform. On the downside, I had to say goodbye to nice things like a new, separate console popping up for every process I start, or the more straightforward integration of SVN in netbeans. But then again, eclipse embraces Python while Netbeans just dumped support for it, so the decision to migrate is a no-brainer. And in addition<span style="background-color: white;"> I <a href="http://mybrainextension.blogspot.de/2012/07/migrating-from-svn-to-git.html" target="_blank">switched to Git</a> anyway ;) </span><br />
<br />mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-24003673434299602542012-04-19T15:52:00.000+02:002012-04-24T15:16:04.854+02:00G-Node workshop on neuronal GPU computing: symposiumLast week, the <a href="https://portal.g-node.org/gpu-workshop-2012/" target="_blank">G-Node workshop on neuronal GPU computing</a> took place at LMU Munich. I had the pleasure organizing the scientific part, while Christian Kellner and Thomas Wachtler from <a href="http://www.g-node.org/" target="_blank">G-Node</a> did an extremely good job taking care of local organization (with solid support from lovely Manuela Brandenburg).<br />
We had a one-day symposium with talks, followed by a two-day hands-on developer workshop.<br />
First speaker of the symposium was <a href="http://audition.ens.fr/brette/" target="_blank">Romain Brette</a> from <a href="http://www.di.ens.fr/" target="_blank">ENS Paris</a>, who presented a graph-theoretical approach to optimizing the memory arrangement of neuronal networks for efficient simulation on GPUs.<br />
<a href="https://plus.google.com/107021285642509092240/posts" target="_blank">Giovanni Idili</a> from the OpenWorm project presented their work on a simulation of <a href="http://en.wikipedia.org/wiki/Caenorhabditis_elegans" target="_blank">C. elegans</a>. The cool thing about that project is that the simulation is not restricted to the neuronal part of that worm, but actually links worm physics to worm physiology. Very interesting work, as it is one of the still very few approaches that aim at an embodiment of a simulated neuronal network. I also learnt a lot about software engineering, as Giovanni described their approach to linking the various simulations using a <a href="http://en.wikipedia.org/wiki/Service-oriented_architecture" target="_blank">service-oriented architecture</a> based on <a href="http://en.wikipedia.org/wiki/OSGi" target="_blank">OSGi</a>-bundles.<br />
Afterwards, <a href="http://www.uiginn.com/index.html" target="_blank">Dave Higgins</a> from ENS Paris gave an account of how he learnt to use <a href="http://en.wikipedia.org/wiki/OpenCL" target="_blank">OpenCL</a> the hard way. It was interesting to see what OpenCL requires you to do that <a href="http://en.wikipedia.org/wiki/CUDA" target="_blank">CUDA</a> doesn't. Also, he somehow paved the way for the code generation talks by Thomas Nowotny and Damien Drix in the afternoon.<br />
The last speaker of the morning session was Javier Baladron from <a href="http://raweb.inria.fr/rapportsactivite/RA2011/neuromathcomp/uid1.html" target="_blank">Olivier Faugeras' workgroup at INRIA Sophia-Antipolis</a>. His project dealt with stochastic neuronal models, in which random number generation soon turned out to be the major bottleneck. They overcame the bottleneck by using the <a href="http://developer.nvidia.com/curand" target="_blank">cuRAND</a> library on a cluster of 14 (fourteen!) GPUs.<br />
After lunch, <a href="http://thesamovar.net/neuroscience" target="_blank">Dan Goodman</a> gave a short and to-the-point overview on the use of GPU computing in the <a href="http://briansimulator.org/" target="_blank">brian simulator</a>, including the integration of the <a href="http://nemosim.sourceforge.net/" target="_blank">NeMo Simulator</a> and the model fitting toolbox.<br />
<a href="http://www.doc.ic.ac.uk/~akf" target="_blank">Andreas Fidjeland</a> (Imperial College London), the developer of <a href="http://nemosim.sourceforge.net/" target="_blank">NeMo</a>, presented his algorithm for efficient delivery of synaptic events on GPUs, considering memory layout and access patterns. He also showed some convincing benchmark data.<br />
After a short coffee break, <a href="http://www.sussex.ac.uk/Users/tn41/" target="_blank">Thomas Nowotny</a> (<a href="http://www.sussex.ac.uk/" target="_blank">Sussex University</a>) gave a very nice talk on his experience with neuronal GPU computing. Thomas' first encounter with neuronal GPU computing dates back to 2009 (or was it 2007?), and he has experienced all the improvements made on the software and hardware level since then. It was also very instructive to see how he ended up doing code generation on GPUs instead of trying to pass arguments to kernels, somehow complementing Dave's talk from the morning.<br />
Damien Drix from Eilif Muller's group at EPFL Lausanne showed an impressive code generation framework (including a nice GUI with sliders and all :) ) that converts <a href="http://www.neuroml.org/" target="_blank">NeuroML</a> into code to be compiled and executed on GPUs.<br />
The final slot of the symposium belonged to <a href="http://www.unic.cnrs-gif.fr/people/pierre_yger/" target="_blank">Pierre Yger</a> (Imperial College London), who demonstrated the power of <a href="http://neuralensemble.org/trac/PyNN" target="_blank">PyNN</a> for GPU-based neuronal simulations. He nicely laid out the advantage of being able to validate (or falsify!) your simulation results by running the exact same network on different simulators. One of the most impressive things he showed was how strongly some simulations were affected by factors like the timestep used for numerical integration, the integration algorithm itself, or even details of the implementation of models and simulators.mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-46016668971024058992011-08-14T11:35:00.003+02:002011-08-14T11:53:28.648+02:00Interview on Sciple.org<a href="http://sciple.org">Sciple.org</a> has published an <a href="http://sciple.org/955">interview</a> with me. Sciple.org is a relatively new platform about people in science, with a special focus on young scientists. It also has a "Future" section with interesting graphs... still growing, but interesting to watch anyhow. That part reminded me a bit of <a href="http://seedmagazine.com/">Seed Magazine</a>, a magazine about science culture, the print version of which I subscribed to some time ago and which I enjoyed quite a lot. I still read their <a href="http://seedmagazine.com/feeds/RSS/">RSS feed</a> occasionally.mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-41125776634065042492011-07-07T11:38:00.005+02:002011-07-07T11:56:23.262+02:00AAAARRRRRRGH!! Matlab!!Happy Matlab licensing trouble - yay!<br /><br />We have 5 Matlab licenses in our lab. 
Originally, we bought them as "Concurrent licenses", that is, Matlab allowed us to have 5 instances of Matlab running in parallel, regardless of which computer they were running on. At some point, Mathworks somehow transformed these licenses into "Designated Computer" licenses, that is, each license must be associated with a designated computer and will only run on that machine. Although this was an obviously bad deal, we didn't care too much about that change back then, since we were busy doing more important stuff than caring about licensing issues. <div><br /></div><div>Anyway, Matlab is a dying species in our lab since most of us are using Python for scientific computing, except for a few legacy scripts. But every now and then, I need to run one of those legacy scripts.<br /><br />I do much development on my laptop, but for numbercrunching I use our compute server. Hence, I need my computing environment on both machines, although not necessarily at the same time. I had one of these designated computer licenses, and thanks to Mathworks' provident care, I was able to deactivate and reactivate it over the web when switching between computers. So I changed the designated computer a few times between those machines. Today I wanted to change again, but Mathworks wouldn't let me:<br /><br />"No more machine transfers available for this license."<br /><br />WTF?<br /><br />OK, you're forcing me to port even my old scripts to python. Pity you. I have already spent too much time struggling with licensing issues - time I would much rather spend on research. Goodbye Matlab.</div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com4tag:blogger.com,1999:blog-6621654851452777487.post-674652756589668142011-07-01T15:24:00.007+02:002011-07-26T17:55:06.773+02:00Using Python decorators to work around version incompatibilitiesI'm using <a href="http://neuralensemble.org/trac/PyNN">PyNN</a> to simulate networks of spiking neurons. 
PyNN is a "metasimulator" that can operate with several simulator backends, such as <a href="http://nest-initiative.org/">NEST</a>, <a href="http://www.neuron.yale.edu/">NEURON</a>, or several others. The cool thing is that PyNN also has a backend for the <a href="http://facets.kip.uni-heidelberg.de/public/goals/hard.html">FACETS hardware</a>, which I'm using in a project. I can prototype the simulation in the simulator, and run it on the hardware afterwards, without changing my simulation script.<div><br /></div><div>In theory.</div><div><br /></div><div>In practice, things are a bit different. The hardware interface works with PyNN version 0.6, but PyNN has progressed towards 0.7 already. The current version of NEST works only with the current development version (0.7+, that is). This caused some headache for me and others developing for the hardware. <b>Update:</b> Some people wondered and asked me why I wouldn't simply use the old version of NEST that works with 0.6. Well, I could, but actually, that version has other bugs which make this solution a no-go.</div><div><br /></div><div>Fortunately, the API changes between PyNN 0.6 and 0.7 are not so extensive, so one can work around the differences with relatively little code. Still, one wants to have an elegant way of automatically detecting the PyNN version and using the appropriate code automatically.</div><div><br /></div><div><a href="http://wiki.python.org/moin/PythonDecorators">Python decorators</a> are particularly well suited for that purpose. Python decorators are functions or classes that return a function. Using a decorator, you can check for the PyNN version in the decorator function and return the appropriate function which does what you want in the current PyNN version. </div><div><br /></div><div>Confused? OK, here's an example: Assume that I want to retrieve the IDs of all cells in a population. 
In PyNN 0.6 I must use<br /><pre class="brush: python">def get_population_ids_06(pop):<br />    return [id for id in pop.ids()]<br /></pre><br />while in PyNN 0.7 I can use<br /><pre class="brush: python">def get_population_ids_07(pop):<br />    return [id for id in pop]<br /></pre><br />Now I want my script to automatically figure out which function to use, based on the PyNN version that is loaded. And here the decorator comes into play (note that its argument is the decorated function, which it simply discards):<br /><pre class="brush: python">def pynn_version_workaround(func):<br />    if pynn_version.split(' ')[0] == "0.6.0":<br />        return get_population_ids_06<br />    else:<br />        return get_population_ids_07<br /></pre><br />Now I simply have to define a dummy function which is to be mangled through the decorator:<br /><pre class="brush: python">@pynn_version_workaround<br />def get_population_ids(pop):<br />    pass<br /></pre><br />So, decorating get_population_ids calls pynn_version_workaround once at definition time; it determines the PyNN version and returns the appropriate function, which is then bound to the name get_population_ids and called with the provided arguments.</div><div><br /></div><div>Nice, isn't it?</div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-49770253510656167102011-05-17T09:31:00.004+02:002011-05-17T09:53:44.035+02:00Any jackass can trash a manuscript...It seems I'm not the only one getting hilarious reviews from <a href="http://mybrainextension.blogspot.com/2010/06/frustrated-by-peer-review.html">time to time</a>. The journal <a href="http://www.molbiolcell.org/">Molecular Biology of the Cell (MBoC)</a> has published an editorial that speaks from my heart, titled <a href="http://www.molbiolcell.org/cgi/doi/10.1091/mbc.E11-01-0002">"Any jackass can trash a manuscript, but it takes good scholarship to create one (how MBoC promotes civil and constructive peer review)"</a>. 
<div><br /></div><div>In my opinion, one of the most important points in the article is that the relentless bashing which has become a recurring feature of many reviews will, inevitably, hurt the entire research field, because it destroys the scientific community in that field. </div><div><br /></div><div>As I said <a href="http://mybrainextension.blogspot.com/2010/06/frustrated-by-peer-review.html">before</a>, I think that an open peer review process with identifiable reviewers will foster constructive criticism in the reviews. The reviewers will become visible and their contribution to the community acknowledged. The whole process will become more transparent, which is a prerequisite for a functioning scientific community. </div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-30506903637537532402011-02-01T14:34:00.007+01:002011-02-01T15:53:24.076+01:00SEED Magazine: The scientific paper is becoming obsolete<a href="http://seedmagazine.com/">SEED Magazine</a>, a New York-based magazine on science culture, just published an interesting <a href="http://seedmagazine.com/content/article/on_science_publishing/">article about how science publishing is about to be transformed by the internet</a>. You might think that this is old hat, and in fact the idea that the internet revolutionizes the way we publish and access scientific results is not new. Indeed, "the Internet" was "invented" by scientists to share knowledge. Yet still, subscription costs rise, although disseminating knowledge via the internet is much cheaper than in printed media. The market has obviously failed. Access to scientific results has become expensive, so expensive that the main funders of science (tax payers) can only rarely access the knowledge that is produced using their money. <div><br /></div><div>More importantly, limited access to scientific results directly harms scientific progress. 
It is not unusual that it takes about two years until a paper is published. By the time a scientific breakthrough is published, it often fails to make <i>real</i> impact, apart from discouraging other labs from working in that direction. </div><div><br /></div><div>The tools to overcome the limitations are all there: Scientific results can be published on preprint servers (like <a href="http://arxiv.org/">arXiv</a> or <a href="http://precedings.nature.com/">Nature Precedings</a>) right after writing them up. The stream of information coming out of those servers can be filtered by the scientific community by writing blog posts, or by commenting on the preprint servers directly. So why are these tools still so rarely used in life science?</div><div><br /></div><div>The answer is simple: Lack of incentive. Writing blog posts does not extend my contract, papers on preprint servers do not increase my university budget (as opposed to papers in high-impact journals), and often the possibility to publish in popular journals is compromised by publication on a preprint server. </div><div><br /></div><div>However, incentive will rise as more and more researchers get frustrated by the corporate science publishing machinery. As more and more university libraries drop subscriptions as publishers increase their fees, researchers focus on open access journals. As it is becoming more and more difficult to publish in high-ranking journals, researchers consider alternatives which enable them to spend more time on research and less on getting bashed in anonymous peer review. </div><div><br /></div><div>For example, <a href="http://plosone.org/">PLoS ONE</a> is very successful with publishing papers reviewed for technical correctness, but leaving it to the reader to gauge its scientific impact. 
Recently, even <a href="http://nature.com/">Nature publishing group</a> picked up the idea and started its own version of PLoS ONE, <a href="http://www.nature.com/srep/marketing/index.html">Scientific Reports</a>. </div><div><br /></div><div>While these journals make it easier to publish one's findings, it is up to the researcher to <i>make an impact</i> in terms of influencing the field. Doing good research takes you only half way. The other half consists of convincing other researchers of one's ideas. The great advantage is that this process takes place in public, while in anonymous peer review it is hidden from the largest part of the scientific community.</div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-67109189984851974852010-11-25T10:17:00.008+01:002010-11-25T11:05:34.952+01:00Towards fast scientific python<a href="http://python.org/">Python</a> seems to come of age in its role as a universal language for scientific computing. It already has a good standing in the computational neuroscience community. The <a href="http://neuralensemble.org/">Neural Ensemble project</a> gathers some initiatives that use Python as the primary language for neuronal simulation and data analysis. Large simulator projects like <a href="http://nest-initiative.org/">Nest</a> and <a href="http://www.neuron.yale.edu/">NEURON</a> adopted Python as their primary command language already a few years ago. The core of those simulators is still written in C/C++, which delivers good performance, but leads to interfacing issues with the command language. Those issues can be addressed by clever software design, but a pure-python implementation of a simulator is much more convenient regarding maintainability and extensibility. 
The problem is that pure Python will lag behind the speed of compiled languages like C/C++ by an order of magnitude.<div><br /></div><div>The <a href="http://briansimulator.org/">Brian simulator</a> is designed to be a simulator written entirely in python. To cope with the speed of C/C++-based simulators, Brian can <a href="http://www.briansimulator.org/2010/08/25/new-paper-on-code-generation/">generate compiled code from</a> the python network model. This code can also be compiled for graphics processors (GPUs), which promise high speedups for computational problems that can be parallelized efficiently. The Brian developers describe how to do just that in their <a href="http://www.briansimulator.org/2010/10/20/new-paper-on-vectorised-algorithms/">article on vectorised algorithms for neuronal simulations</a>, which is one of my current favorite papers.</div><div><br /></div><div>Today, and that was the initial motivation for this post, I came across the announcement for the new version of <a href="http://pypi.python.org/pypi/Theano/0.3.0">Theano</a>, a compiler for evaluating mathematical expressions on CPUs and GPUs. I haven't tried it out yet, but it definitely looks promising. But the really interesting fact is that there is lively development toward making Python not only a ubiquitous language for scientific computing (a goal which has largely been achieved already), but also an alternative in terms of performance to established software packages. 
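To see what "vectorised" means in practice: instead of looping over neurons in Python, you update all of them with a single NumPy array expression. Here's a toy sketch of a leaky integrate-and-fire timestep in that style - my own illustration, not code from the Brian paper, and the parameter values are purely illustrative:

```python
import numpy as np

def lif_step(v, spikes_in, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """Advance all neurons by one timestep at once: one array expression
    instead of a Python-level loop over neurons."""
    v = v + dt * ((v_rest - v) / tau) + spikes_in  # leak + synaptic input
    fired = v >= v_thresh                          # boolean spike mask
    v[fired] = v_reset                             # reset spiking neurons
    return v, fired

v = np.full(1000, -65.0)   # 1000 neurons at rest
inputs = np.zeros(1000)
inputs[0] = 20.0           # strong input to neuron 0 only
v, fired = lif_step(v, inputs)
```

The loop over timesteps remains in Python, but its body touches all neurons at once in compiled NumPy code, which is where the speedup comes from.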
</div><div><br /></div><div>Without licence fees, and fully open source.</div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-63358991415013685622010-11-23T13:40:00.006+01:002010-11-23T14:36:26.044+01:00PNAS Editorial: Impact Factor corrupts scienceThe (ab)use of the impact factor to evaluate the scientific merit of individuals corrupts the way scientists publish their findings, say Eve Marder, Helmut Kettenmann and Sten Grillner in their recent <a href="http://dx.doi.org/10.1073/pnas.1016516107">editorial in PNAS</a>. Moreover, they state that the current practice of measuring scientific achievement shifts the choice of research topics toward potential "great discoveries" (read: discoveries that will make it into <i><a href="http://www.nature.com/nature">Nature</a></i>), even though the most important findings in science were made serendipitously, so their eventual contribution to science could not have been estimated beforehand. <div><br /><div>However, in my opinion, the impact factor is only the tip of the iceberg. Even worse is the implicit role of author sequence on a paper. In the life sciences, the first author typically is the one who did the work, and the last author is the supervisor or lab head. All authors in between are perceived to be "minor contributors". Of course, this rule leads to all kinds of problems. Fierce battles are fought over author sequence, since for PhD students only first-author papers count, while for group leaders last-author papers are vital to demonstrate their scientific contribution. </div><div><br /></div><div>But only one author can be first, and only one can be last. Of course, there are "equal contribution" asterisks all over the place, but are they actually taken into account? After all, how much sense does it make to rely on the outdated, opaque and inflexible convention of author sequence to indicate contribution? 
For example, it is completely unclear how to handle interdisciplinary collaborations, which typically involve at least two PhD students and two group leaders. </div><div><br /></div><div>A completely fair and unbiased way to state individual contributions to a scientific publication would be to <b>list the authors in alphabetical order and have an "Author contribution" section in the paper</b>, where the individual contributions are described in detail. In fact, this is how many disciplines handle it, for example the social sciences. </div></div>mschmukerhttp://www.blogger.com/profile/17264084266583768199noreply@blogger.com1tag:blogger.com,1999:blog-6621654851452777487.post-73266992545505298672010-08-04T17:10:00.005+02:002010-08-04T17:43:44.274+02:00Paper on the honeybee brain atlas with 3D figuresColleagues of mine published a <a href="http://www.frontiersin.org/systems_neuroscience/10.3389/fnsys.2010.00030/abstract">paper on the three-dimensional atlas of the honeybee brain</a>. This atlas is a collection of three-dimensional morphological data from honeybee neurons.<br /><br />The honeybee brain atlas was first published in 2005 in the form of a <a href="http://www.ncbi.nlm.nih.gov/pubmed/16175557">conventional paper</a>. It is extremely useful for research in a number of ways:<br /><ul><li>For physiological recordings, the atlas helps you identify the neurons that you record from.</li><li>The atlas gives you a common frame of reference to compare morphological studies from different animals, experimenters or even different labs.<br /></li><li>It makes it possible to put different neurons from different specimens into spatial relation, which is a prerequisite for reconstructing neuronal circuits in the honeybee brain.<br /></li></ul>The caveat is that in order to use the atlas, one needs the whole software stack on which it was developed. Only those who have seen it in action can really grasp its significance. 
As a consequence, the atlas always remained something of an insider's secret within the honeybee brain community.<br /><br />In their recent publication, Jürgen Rybak and coworkers made full use of the capabilities of the PDF format and published three-dimensional figures. The reader can interact with the atlas and rotate the brain for a better overview, hide and show parts of the brain or even specific neurons. Now everyone can examine the bee brain in a way that was previously accessible only to a small group of scientists. And since the paper is open access, you don't even have to pay for it.<br /><br />Thanks a lot for that great piece of science!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-64252798999416290742010-06-17T10:10:00.006+02:002011-03-29T16:49:30.656+02:00Preprint servers vs. anonymous peer reviewRecently, I got a manuscript rejected. There's nothing wrong with that; I consider rejection a part of the publication process. Or, more precisely, a stop on the way to the publication of a manuscript.<br /><br />But what was really annoying is that the reviewers did not even seem to read the manuscript to the end. They criticized the lack of a specific kind of information. But this information was explicitly elaborated upon in the discussion section. Even after I pointed them to the exact paragraph where the information was given, they did not acknowledge it (let alone say whether it satisfied their criticism). And then it took them two months and several iterations back and forth to finally reach their conclusion to reject the article.<br /><br />This experience goes along with what I hear from colleagues, namely that they seem to get more and more rude reviews. Constructive criticism appears to be a rare commodity in neuroscience nowadays. 
In addition, it is not uncommon that it takes a year or longer from the initial submission to the final publication of a manuscript.<br /><br />One of my thoughts was that the concept of anonymous peer review probably poses a problem here. As the pressure to publish rises, so does the number of journals. These journals require more reviewers, who get more papers to review. It becomes increasingly difficult to devote the necessary time and effort to judge a manuscript in its entirety. In addition, there is no way the reviewers' effort is honored when peer review is anonymous. Journals like the Frontiers in... series already address this by naming reviewers on the published manuscript, in order to get the reviewers' contributions noticed.<div><br />I would definitely welcome an open peer review process, where reviewers are named from the first revision on, and the reviews are made publicly accessible. Preprint servers like <a href="http://arxiv.org/">ArXiv</a> are pretty much what I'd like to have in neuroscience. You put your paper there, you get comments, you revise the manuscript, and at some point a journal or a conference will accept it. Your knowledge is accessible right from the first upload to the server. Efficient exchange of scientific results favors the rapid advancement of the field. 
Moreover, it is much less likely that you get scooped during the review process, since you can always document when you published the paper for the first time.<br /><br />I will evaluate possibilities for this kind of publication for the resubmission of my manuscript.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-15425289040726374242010-06-16T12:44:00.001+02:002010-11-25T10:58:25.754+01:00One week with opensuse 11.3 FactorySince my last post, I submitted <a href="https://bugzilla.novell.com/show_bug.cgi?id=612117">a</a> <a href="https://bugzilla.novell.com/show_bug.cgi?id=612121">couple</a> <a href="https://bugzilla.novell.com/show_bug.cgi?id=612771">of</a> <a href="https://bugzilla.novell.com/show_bug.cgi?id=612829">bug</a> <a href="https://bugzilla.novell.com/show_bug.cgi?id=612834">reports</a>. The <a href="https://bugzilla.novell.com/show_bug.cgi?id=613165">most severe bug</a> caused my external monitor to be unusable on DVI. It turned out that my monitor was providing faulty EDID data, which confused the graphics driver. But Stefan Dirsch of Novell pointed out a <a href="https://bugzilla.novell.com/show_bug.cgi?id=613165#c15">way to work around this</a>. Cool.<br /><br />Apart from that, I'm really surprised how stable and usable openSUSE 11.3 already is. This is going to be a great release!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-62684312351394847752010-06-07T12:23:00.001+02:002010-11-25T10:58:52.309+01:00Suse 11.3 M7 on Lenovo T410sToday I got my shiny new Lenovo T410s. First thing I did: throw Windows 7 off the disk and install Linux! :)<br /><br />So I downloaded the current milestone 7 of openSUSE 11.3. Installation from DVD went smoothly, as expected: I chose to use the entire hard disk (which is not a disk but rather an SSD ;) ). 
No exotic software configuration was chosen either, just the plain KDE4 desktop.<br /><br />The only thing that crashed was Firefox, but this is a <a href="https://bugzilla.novell.com/show_bug.cgi?id=608087">known bug</a> with a known remedy: update to the latest Factory repository.<br /><br />Another known bug with a known workaround prevented me from having wifi out of the box. So I installed the "Laptop" pattern and the kernel-firmware package, and voila, wifi works. Like a charm.<br /><br />The integrated camera worked straight out of the box.<br /><br />The fingerprint reader does not seem to work, as the proper driver is not included in openSUSE 11.3. More precisely, the driver is only contained in the current development version of libfprint. But chances are that this version will become available for SUSE some time soon.<br /><br />Also check the <a href="http://www.thinkwiki.org/wiki/Category:T410s">ThinkWiki page for the T410s</a>, and the <a href="http://www.thinkwiki.org/wiki/Installing_OpenSUSE_11.3_on_a_ThinkPad_T410s">installation instructions for SUSE</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-55163453186679641672009-03-18T20:55:00.001+01:002010-11-25T11:07:29.552+01:00Honey bee dance roxor :PSeems that honey bees are capable of so much more than just waggle dancing!<br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/7m5vt07W2n4&hl=de&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/7m5vt07W2n4&hl=de&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />The <a href="http://savethehoneybees.com/">link they show in the end</a> goes to a site put up by a popular ice-cream manufacturer that is devoted 
to conserving the honey bees. Nice idea. And I feel even better now about doing honey bee research! :)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-40651713905496658772009-02-20T10:15:00.001+01:002010-11-25T11:06:56.569+01:00Never shout at your hard disks......<a href="http://blogs.sun.com/brendan/entry/unusual_disk_latency">you will only slow them down</a> (via <a href="http://www.gnome.org/~federico/news-2009-02.html">Federico Mena-Quintero</a>). Amazing stuff. Funny analogy with regard to leadership principles: never shout at your staff, you will not make them work faster!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-30882596911693407402009-02-19T16:27:00.000+01:002009-02-26T10:53:38.002+01:00Have your python toolchain in $HOMEIn my <a href="http://mybrainextension.blogspot.com/2009/02/installing-scipy-070-on-opensuse.html">previous post</a> I explained how to install scipy from source on openSUSE. <br /><br />What makes it particularly nice is that I can now carry most of my toolchain in my $HOME. I make Python include modules from my $HOME by setting $PYTHONPATH to something like <pre>/home/micha/mypython/lib64/site-packages</pre>. The good thing is that you can install any Python package in your $HOME by using the --prefix option to setup.py: <pre>python setup.py install --prefix=$HOME/mypython</pre><br /><br />For even more Python goodness, I tell easy_install to put everything there by having a file called .pydistutils.cfg in my $HOME with the contents <pre><br />[install]<br />prefix=/home/micha/mypython<br /></pre><br /><br />So every time I easy_install a package, it is automatically put into my $HOME directory. That makes it much easier to reinstall or upgrade the system. Since most Python-related stuff is now in my $HOME and not in the system, rebuilding my Python toolchain basically consists of installing python and distutils. 
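The effect of setting $PYTHONPATH can be checked from inside Python: the directory simply ends up on sys.path, which the import machinery searches in order. A tiny sketch (the ~/mypython prefix is just my example path; the directory does not even need to exist for the path manipulation itself to work):

```python
import os
import sys

# The per-user prefix from above (hypothetical; adjust to your own setup).
prefix = os.path.expanduser("~/mypython/lib64/site-packages")

# Exporting PYTHONPATH=<prefix> before starting Python has the same effect
# as prepending the directory to sys.path at runtime:
if prefix not in sys.path:
    sys.path.insert(0, prefix)

print(prefix in sys.path)  # prints True; packages under the prefix are now importable
```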
Isn't that great? :)<br /><br /><b>Update:</b> I learnt from <a href="http://rhodesmill.org/brandon/2009/emacs-python-virtualenv/">Brandon Rhodes</a> that <a href="http://pypi.python.org/pypi/virtualenv">virtualenv</a> will set up everything for you automatically. Awesome Python goodness.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-18641986541457164192009-02-19T13:52:00.001+01:002010-11-25T11:06:03.217+01:00Installing scipy 0.7.0 on openSUSEI had to rebuild parts of my toolchain because I messed up my OS and needed to reinstall. In the process of searching for nice numpy and scipy packages for openSUSE (which failed), I discovered that it's now actually possible to do<br /><pre>easy_install numpy<br />easy_install scipy</pre><br />(provided that you have the python-distutils package installed). That's great. But... it doesn't work! At least not on openSUSE. I could convince numpy to install somehow. I don't remember exactly; I think I at least needed to install gfortran, and maybe also blas and lapack from the <a href="http://download.opensuse.org/repositories/science:/ScientificLinux/">ScientificLinux repository</a> in the build service.<br /><br />Scipy then was a little bit more work. It kept complaining that it did not find BLAS and LAPACK, even though I edited numpy's site.cfg file so that it should be aware of the location of the shared libs.<br /><br />It turned out that to install scipy I had to:<br /><ol><br /><li> Download the <a href="http://www.netlib.org/blas">BLAS</a> sources and unpack them, e.g. 
to $HOME/Apps/BLAS<br /></li><li> edit make.inc in that directory, changing the FORTRAN line to <pre>FORTRAN = gfortran</pre><br /></li><li> build BLAS by calling make in the BLAS dir<br /></li><li> download and unpack <a href="http://www.netlib.org/lapack">LAPACK</a> to $HOME/Apps/lapack-3.2<br /></li><li> edit make.inc.example in that dir, changing the BLASLIB line to <pre>BLASLIB = $(HOME)/Apps/BLAS/blas$(PLAT).a</pre> and saving that file as make.inc<br /></li><li> build LAPACK by typing make in the lapack dir.<br /></li><li> <a href="https://sourceforge.net/project/showfiles.php?group_id=27747">Download scipy</a>, unpack it and start the build process: <pre>python setup.py install</pre><br /></li></ol><br /><br />That takes quite a while. It seems it builds LAPACK and BLAS again, so maybe you don't have to build them first, but I guess you at least need to make the appropriate modifications to the respective make.inc files. Comments on that are welcome.<br /><br />But most important: it finally worked :)<br /><br /><b>Update:</b> At least I thought it worked. It didn't :D The problem was that import scipy produced a "symbol not found" error. Maybe adjusting ldconfig's path could fix this, but I don't have time to look into it. I installed numpy & scipy from the ScientificLinux repo (link see above).Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-6621654851452777487.post-87135585862641264652008-12-03T14:26:00.001+01:002010-11-25T11:00:48.452+01:00Netbeans + python = happy happy<a href="http://download.netbeans.org/netbeans/6.5/python/ea/">NetBeans is now available with Python support</a> as an early access (read: beta) build. The install went smoothly; it just updated my existing NetBeans 6.5 installation with the Python capability. Hassle-free, NetBeans-style. Great!<br /><br />At first glance, Python support looks great - I can create projects from existing sources, it allows code navigation by function names, and all that other NetBeans goodness. 
During the next few days I'll give it a shot - I'm looking forward to finding out whether it beats Eric4...Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6621654851452777487.post-23151031932918690122008-11-06T19:30:00.000+01:002008-11-06T20:16:20.130+01:00Teaching for (no) funOne of the nice parts about working at a university is teaching. In my past experience, teaching was always very rewarding, so I volunteered to tutor an undergrad course on the Hodgkin-Huxley cell model. I felt it would be a nice opportunity to drop out of the daily sitting-in-front-of-the-computer-the-entire-day routine. And as a side effect, it would revive my knowledge of Hodgkin-Huxley models. <br /><br />Well, it got me out of my routine, but it wasn't all sunshine. Granted, some students actually were highly motivated, and the course reports they turned in were carefully prepared. It was fun tutoring them. But others just didn't care. Some of their reports stated the name of last year's tutor - suggesting that they didn't even care to change that during copy-and-paste. <br /><br />It's not that I care much about plagiarized reports - I think a student at a university should be smart enough to figure out that they only hurt themselves by plagiarizing. But they also cheat those who actually put in some effort. And that's unfair. And it leaves a bad feeling for me as a teacher who volunteers out of idealism. It somehow spoiled my fun.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-43457263720863475062008-06-13T17:19:00.000+02:002008-06-13T17:26:36.961+02:00Wordle the antennal lobe!<a href="http://nedbatchelder.com/blog/200806/wordle.html">Ned Batchelder</a> posted a link to <a href="http://wordle.net">Wordle</a> on his blog. What a cool app! It creates word clouds from any piece of text you throw at it. I couldn't resist and had to produce a wordle of my <a href="http://www.pnas.org/cgi/content/abstract/104/51/20285">2007 PNAS paper</a>. 
Here it is:<br /><br /><a href="http://wordle.net/gallery/virtual_antennal_lobe" title="Wordle: virtual antennal lobe"><img src="http://wordle.net/thumb/virtual_antennal_lobe" style="padding:4px;border:1px solid #ddd" ></a><br /><br />How cool is that :)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-46822452927105221862008-06-02T17:30:00.000+02:002008-06-02T17:34:07.830+02:00Design patterns in pythonI found a <a href="http://www.protocolostomy.com/2008/06/02/couple-of-python-design-pattern-links/">blog post</a> that summarizes several links on how to implement design patterns in Python. I have to check it out once I've finished my abstract for <a href="http://2008.neurocomp.fr/index.php?page=call&lang=en">Neurocomp '08</a>!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6621654851452777487.post-32798451264580150942008-05-22T00:10:00.000+02:002008-05-22T00:23:46.404+02:00biomachinelearning.net<p><a href="http://biomachinelearning.net">The very first domain of my own!</a> Feels exciting ;)</p> <p>It occurred to me that my research has always revolved around biology-inspired machine learning, so yesterday evening I decided to register the domain. </p> <p>So far, it links to my <a href="http://userpage.fu-berlin.de/~schmuker">home page at FU Berlin</a>. In addition, I added two subdomains:<br /><a href="http://sommer.biomachinelearning.net">sommer.biomachinelearning.net</a>, pointing to the SOMMER homepage at Uni Frankfurt, and<br /><a href="http://mybrainextension.biomachinelearning.net">mybrainextension.biomachinelearning.net</a>, pointing at this blog.</p> <p> I want to use it as a platform for my research and other things related to biological approaches to machine learning. If you have something to contribute, drop a comment!Unknownnoreply@blogger.com0