[Open hardware] A safe laser by-pass

Well, I remember when I started in this business: a beam stop was a recycled block of lead, and reflections were stopped with cardboard boxes 😉 Brown boxes, of course, because black cardboard catches fire (tell that to my undergrad self). Not any longer, of course!

About ten years ago, I started the procurement and development of my first two-photon microscope. For the first time, I was directly responsible for laser safety and had to make decisions about how to build a system that would be safe for a user facility in a biomedical research institute. As I was coupling commercially sourced systems (Leica SP5, Chameleon Vision 2 and Pulse Select) and was not planning much customization of the excitation path of this instrument (I heavily develop assays and detection), I opted to fully enclose the laser path in lens tubes. The resulting system is safe, stable, and no more difficult to align than other enclosures.

I think that enclosures around the complete table make sense in many instances, particularly when compartmentalized into sub-sections, but this is the system that worked best for me at the time. One solution I wish to share is a bypass for the pulse picker we used to develop spectrally resolved FLIM utilizing smart SPAD arrays (detectors that integrate photon-counting electronics).

As I start planning the replacement of this system, I wish to share this design, in case some of you might find it useful. In the image on the left, you can see the Ti:Sapphire at the top, the pulse picker on the right and the first enclosure by Leica used to steer the beam to their in-coupling optics (bottom right).

In the middle is the laser bypass we utilize to direct the laser through the pulse picker or around it.

In the image below, you see a close-up photo of the bypass. The black box with the rectangular aluminum cover is the Leica spectral flattener used to reduce the power of the Chameleon Vision at its peak wavelength. One of the few customizations I needed here was simply a hole in a Thorlabs SM2 lens tube to accommodate this filter. This tube is screwed into a C4W-CC cube that can host a movable turning mirror with high reproducibility. The alignment of the microscope without the pulse picker is done with the pair of mirrors provided by Leica. The alignment of the pulse picker is done with the kinematic mirrors visible on the left (M1 and M2). I placed a light block behind them, just in case one should come loose, and to block the small amount of light transmitted through them. A kinematic cube hosts an ultrafast beam sampler by Newport that directs a small fraction of the light to the Thorlabs PIN diode I use to feed the electronics of the pulse picker. In front of the PIN diode I have an xy-translating cage element. An empty four-way cube allows the laser beam to pass from top to bottom (pulse picker bypassed) or from left to right (pulse picker coupled). The aluminum block tagged L1 is just a cover for the C4W-CC cube when empty.


At the output of the pulse picker, you see the mirror image of this bypass (on the right) and the two steering mirrors by Leica (the cylindrical towers). On the far right of the picture are the in-coupling optics by Leica, preceded by two diagnostic ports.


Below, you can see a close-up picture of this part of the coupling. Because of the layout, I needed to add one extra mirror (top left) and, aiming to protect users (positioned at the top of the image) from accidental damage to the in-coupling optics, I added a light barrier.

Both diagnostic ports are based on a 4-way kinematic cube from Thorlabs hosting Newport beam samplers. The first port is used to sample the pulses after the pulse picker and to feed our FLIM electronics. The second serves two purposes. The first is coarse alignment of the system: I have two irises in the system that are aligned when the laser is aligned (roughly) to the Leica in-coupling optics.

I usually remove a cover at the exit of this diagnostic port and use a fluorescent card to verify alignment, but in this picture you see the fiber coupling to a spectrograph we occasionally use to diagnose laser faults.



The alignment is simpler than it seems. First, we start with a microscope that is fully aligned without the pulse picker, as per normal operations. Then, when we need the pulse picker, we insert the two turning mirrors (L1 and R1). We do this with the laser off and with the pulse-picker crystal retracted (coarse alignment) or protected by an alignment card (fine alignment). M1 and M2 are then used to align the beam with the crystal. Then we align the PIN diode and proceed with the fine alignment of the pulse-picker cavity. Once this is done, we align the cavity with the microscope utilizing M4 and M5. For coarse alignment, the signals from the two diagnostic ports are very useful until some signal is picked up on the microscope monitor, after which the final fine tuning of all the optics can proceed.

Be aware: alignment of Class 4 lasers can be dangerous. Therefore, do your own risk assessments and think carefully about the logistics of your system. Now that I am starting to consider the redevelopment of the system, I thought I would share these notes with you, hoping they might be of some use.


Sharing is caring: an open access FLIM trial

Are you interested in cell biochemistry, but in single living cells, organoids or tissues? Is there a Western blot or IP you wished you could do on a living sample? Or did you wish to see where in a cell a protein-protein interaction occurs?

Well, if you are interested in quantifying a ligand concentration, a post-translational modification, a protein-protein interaction, chromatin states or the oligomerization of proteins, you might be interested in FLIM or FRET, but you might not be in your comfort zone setting up or executing such assays.

The specialist expertise and instrumentation required to perform fluorescence lifetime imaging microscopy (FLIM) is often a barrier to the adoption of quantitative biochemical imaging techniques. The same can be true, although to a lesser extent, for intensity-based measurements of FRET.

Well, we have the expertise and we have the instrumentation. Not just this: today, instrumentation and data analysis are becoming simpler and simpler. During 2019, we are going to trial a system by which we can support you in setting up and testing FLIM/FRET experiments. We have limited resources and, therefore, we will open only a few experimental sessions to start with, but there will be no strings attached. No fees, no authorship to include in that paper you really care about.

Although we still have to set up the “Sharing is caring” trial, feel free to let us know of your interest. Initially, projects will be selected at our discretion, with priority given (but not confined) to cancer-related work and work with the potential to impact public health in the short or long term.

NyxBits and NyxSense? What?!

NyxSense & NyxBits paper here.

I am not fond of new acronyms or ‘cool’ names, but then… guilty! You got me: I am contributing to the proliferation of four-letter acronyms and fancy names like everyone else. Lately, I have introduced a new one, HDIM, for Hyper-Dimensional Imaging Microscopy. But that is another story, and in a Supporting Note of that pre-print we explain our choice.

Earlier, we created the pHlameleons with my friend and group leader back then, Fred Wouters. Well, first there was the Cameleon, the famous calcium reporter by the great Miyawaki and Tsien, brilliantly named because it is a protein that ‘changes colour’ upon binding calcium (Ca). Then came the Clomeleon by Kuner and Augustine, which senses chloride ions (Cl) rather than calcium. With all due respect to the authors, I must admit I did not love that name at first. Indeed, as we were deriving a family of pH sensors from yet another creation of Miyawaki (the CY11.5), we started to joke that we should call this family of sensors the pHlameleons. Month after month, a joke ended up in the title of a paper and was adopted as the name of these pH-sensitive proteins. So, let’s not take ourselves too seriously too often. Sometimes we pick names for a bit of branding, other times to make our assays less heavy with technical terms, and other times, let’s just have fun with words (Clomeleon is now a great name to me, but I routinely joke about the pHlameleons!).

Now that you know the funny little story of the pHlameleons, it is the turn of NyxSense and NyxBits. NyxSense is software dedicated to the multiplexing of FRET sensors. NyxBits are the components used to create a multiplexing platform: a number of fluorescent proteins with distinct Stokes shifts that can report, through their fluorescence lifetime, biochemical reactions probed via FRET with the use of dark/darker acceptor chromoproteins. A huge effort that took us several years to bear fruit. Why Nyx?

During the revision of the drafts, colleagues found the manuscript a bit too technical and difficult to read. Thus I went back to pen and paper, Google and Wikipedia, to find a name that could help us refer to this sensing platform with a single word rather than a sentence. Greek mythology always provides great inspiration and, eventually, I discovered Nyx, the primordial goddess of the night (Nox in Roman mythology). With Erebus (the personification of darkness), Nyx gives birth to Aether (the personification of the upper air and brightness), Moros (deadly fate), the Moirai (destiny) and Thanatos (death). I felt that this short name, Nyx, is intimately connected with our work for three reasons.

First, Nyx links darkness and light, day and night: a nice analogy with our bright donor fluorophores and dark acceptors. Second, Nyx is related to death and fate. We created NyxBits and NyxSense to study cell fate, and our first application is cell-death responses to an anti-cancer drug. Third, Nyx is a goddess and, as I am really committed to gender equality at work (not just in picking names for fluorophores), it felt a little bit in tune with what I do to honour a female deity.

But do not take these reflections too seriously – I do not – after all, I just needed a simple name for a very complex sensing platform. As there was no way for me to explain the reasoning behind the names in the manuscripts, I thought I would share with you, light-heartedly, why we picked NyxSense and NyxBits.

 Now starting project Atlas… we’ll speak about this another time! 🙂

Volume rendering: is this localization-based super-resolution?

Project outcome published in Biophysical Journal in 2010.

  • Esposito A*, Choimet JB, Skepper JN, Mauritz JMA, Lew VL, Kaminski CF, Tiffert T, “Quantitative imaging of human red blood cells infected with Plasmodium falciparum”, Biophys. J., 99(3):953-960

Most papers have an untold backstory that we cannot reveal in them, so as to focus on a main message and the most relevant discoveries. This one has a little backstory I wish to share. Volumetric imaging of red blood cells is not the most difficult thing I have ever done. However, accurate morphological and volumetric imaging of red blood cells infected by Plasmodium falciparum, the causative pathogen of malaria, caused me a few headaches. Let’s forget the time spent waiting for the cultures to grow at the right speed to deliver bugs at the right stage of development, undecided whether to sleep before or after the experiment, and always getting the decision wrong. Let’s not speak, for now, about the optimization of the sample preparation that, by trial and error, led to other interesting observations. Here, we focus on the very simple concept of accurate volume rendering.

In one way or another, volume rendering and estimation will require some sort of thresholding of the data to discriminate the object from the background. As imaging conditions change, even slightly, from experiment to experiment, setting this threshold might confound the final outcomes. When you also deal with a sample that undergoes major morphological transitions, a simple problem soon becomes one that takes a long time to solve. As it happens, one perhaps does not find the best, most elegant or even simplest solution, but the solution one can find with one’s own skills and tools. Mine was a brute-force solution: isosurface volume rendering, iteratively deformed by local refitting of a random sample of vertices so as to respect a specific model for the transition from object to background. This method permitted us to preserve high-resolution morphological descriptions, with high accuracy and reproducibility for volume rendering.
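The core idea of refitting vertices against a model of the object-to-background transition can be illustrated with a toy sketch (this is not the paper's code; the error-function edge model and all numbers are illustrative assumptions): fit a smooth edge profile to the intensities sampled along a surface normal, and read off the boundary position with sub-voxel precision instead of applying a global threshold.

```python
# Minimal sketch: locate an object/background boundary by fitting an
# error-function edge model to a 1D intensity profile, rather than
# thresholding. Model and parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, x0, width, i_obj, i_bg):
    """Smooth step from object intensity to background, centred at x0."""
    return i_bg + 0.5 * (i_obj - i_bg) * (1 - erf((x - x0) / width))

# Simulated profile sampled along a local surface normal (1-voxel steps)
rng = np.random.default_rng(0)
x = np.arange(0.0, 20.0, 1.0)
truth = edge_model(x, x0=9.3, width=1.5, i_obj=100.0, i_bg=10.0)
noisy = truth + rng.normal(0.0, 2.0, x.size)

# Fit the edge model; the boundary position x0 is recovered with
# sub-voxel precision despite the voxel-sized sampling
popt, _ = curve_fit(edge_model, x, noisy, p0=[10.0, 1.0, 90.0, 5.0])
print(f"fitted edge position: {popt[0]:.2f} voxels (true 9.30)")
```

Repeating such a fit over a random sample of isosurface vertices, and deforming the surface toward the fitted boundary positions, is one way to make the rendered volume insensitive to the initial threshold choice.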

This work was carried out while many of my colleagues were focusing on super-resolution, i.e. maximizing the spatial resolution in optical microscopy. It was then simple to notice that fitting a surface onto volumetric data delivers volume estimates at higher precision than the optical resolution of a microscope should permit. Indeed, whenever you have a model for an object (in my case the boundary of a red blood cell; in single-molecule super-resolution methods, the point spread function of an emitter), it is possible to fit this model with a precision that is not (fully) constrained by diffraction but, in the right conditions, only by the signal-to-noise ratio, the analytical tools and the adequacy of the model for the object.
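The same argument in its simplest single-molecule form can be checked numerically (a toy shot-noise-limited case, with assumed numbers): the centre of a Gaussian "PSF" of width sigma can be estimated from N photons with a standard error of roughly sigma/sqrt(N), well below the diffraction-scale blur itself.

```python
# Toy demonstration: localization precision of a model fit improves
# with photon count as sigma/sqrt(N), beating the PSF width sigma.
# Numbers (100 nm PSF width, photon counts) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma = 100.0  # PSF width in nm (diffraction-scale blur)
for n_photons in (100, 10_000):
    # Each detected photon position is a draw from the PSF; the mean is
    # the maximum-likelihood estimate of the emitter position
    estimates = [rng.normal(0.0, sigma, n_photons).mean() for _ in range(500)]
    print(f"N={n_photons:>6}: localization error ~ {np.std(estimates):5.1f} nm "
          f"(theory {sigma / np.sqrt(n_photons):5.1f} nm)")
```

The precision is set by the signal (photon count) and the adequacy of the model, not by the width of the blur, which is the same logic that applied to fitting the red-blood-cell boundary.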

In this Biophysical Journal paper, we focused on the biological application and, together with other published work, on the modelling of the homeostasis of infected red blood cells. Also to avoid criticism from referees, probably legitimate, I decided not to mention the concept of super-resolution. As my research focus is on biochemical resolution and its utilization to understand cellular decisions in cancer, I will not pursue this work any further, but I thought I would write up this little story.

While writing this brief story, I recalled my friend Alberto Diaspro often citing Toraldo di Francia on resolving power and information. I believe my work was far from a breakthrough from an optical standpoint, but I wished to use it as a reminder of a fundamental issue that is often forgotten in biomedical applications. The resolution at which we can observe a phenomenon, irrespective of the tools used, depends both on the qualities of the instrument and on the quality of the prior information we can utilize to interpret the data. Once technology permitted imaging single emitters in fluorescence microscopy, the prior of point-like sources could be used to analyse images so as to reveal the full information content, carried by photons, of an image.

In an experiment, information content is the most precious thing. Irrespective of the methodologies used, our protocols are designed to maximize signal-to-noise ratios and, thus, maximize information content, precision and resolution. However, as trivial as these statements are, in the biomedical sciences we often do not follow through with the process of maximizing information content. Significant information can be provided by our a priori constraints and models. Moreover, a thorough understanding of the information theory related to a specific assay can provide levels of precision and resolution beyond what we assume, at first, possible. Yet priors and information theory are far too often neglected. This happens out of necessity, as most people do not have training in and understanding of both biological and physical processes, and even those who do have to invest their limited resources carefully. I hope that in the future there will be more collaborative work between life scientists, physicists and mathematicians, aimed at better understanding how to extract the maximum information from experiments in the biomedical areas.

So… was our volumetric imaging super-resolution? I am not sure I care to answer, really, but I wished to provoke some thought about the relevance of information theory in biomedical research.

Photon partitioning theorem and biochemical resolving power

Project outcome published in PLoS ONE in 2013.

  • Esposito A*, Popleteeva M, Venkitaraman AR, “Maximizing the biochemical resolving power in fluorescence microscopy”, PLOS ONE, 8(10):e77392

After my 2007 theoretical work on photon-economy and acquisition throughput, I occasionally worked on a more general framework attempting to falsify my hypothesis that multi-channel or multi-parametric imaging techniques can deliver better results than other simpler techniques.

My proposal to develop instrumentation to achieve spectrally and polarization resolved lifetime imaging (later defined as HDIM) was met with scepticism by many. The recurrent question was: if you struggle to do a double exponential fit with the small photon budget we have available in biological applications, how could you possibly dilute these photons over several channels and analyse them with more complex algorithms?

Here, there are a few fundamental misunderstandings. First, the analysis should not be carried out on each “detection channel” independently; the entire dataset should be used to exploit all the information at once. Second, the use of dispersive optics rather than filters makes it possible to acquire a higher number of useful photons. Third, limitations of current technologies (e.g., speed or photon-collection efficiency) should not be an obstacle to the development of these techniques, because these are not conceptual flaws but simply technological obstacles that can be removed.
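The first point, that channels should be analysed jointly, can be seen in a deliberately simple toy case (my illustration, not from the paper, with assumed split fractions and rates): when a parameter is shared across detection channels, the joint estimate pools all detected photons, whereas a single-channel analysis wastes the photons diverted to the other channel.

```python
# Toy case: a total photon rate lam is split between two detection
# channels with known fractions. The joint maximum-likelihood estimate
# pools both channels' counts; using channel 1 alone discards 30% of
# the photons and is correspondingly noisier. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
lam = 1000.0        # true total photon rate
f1, f2 = 0.7, 0.3   # known channel split fractions
trials = 2000
n1 = rng.poisson(f1 * lam, trials)
n2 = rng.poisson(f2 * lam, trials)

est_ch1_only = n1 / f1   # estimate from channel 1 alone
est_joint = n1 + n2      # joint MLE: pools the full photon budget
print("channel-1-only std:", est_ch1_only.std())  # ~ sqrt(lam/f1) ~ 37.8
print("joint std:        ", est_joint.std())      # ~ sqrt(lam)    ~ 31.6
```

Diluting photons over channels costs nothing if, and only if, the analysis recombines them; the loss appears only when channels are fitted in isolation.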

Although I have a lot of (unpublished) work describing the performance of multi-channel systems, I achieved a breakthrough only when I understood I had to focus my efforts on describing the general properties of the Fisher information content of fluorescence detection, rather than the Fisher information in a specific experiment. Fisher information is the information content that an experiment provides about an unknown we wish to estimate. Its inverse is the smallest variance ever attainable in an experiment, what is called the Cramér-Rao limit. In other words, by maximizing Fisher information, we maximize the precision of our experiments.
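As a concrete sketch of these definitions (my toy example, not the paper's calculation): for photon arrival times drawn from an ideal exponential decay with lifetime tau, the Fisher information per photon works out analytically to 1/tau^2, so the Cramér-Rao limit for N photons is tau/sqrt(N). The numerical evaluation below computes the same quantity directly from the model.

```python
# Numerical Fisher information for an ideal exponential decay p(t) =
# exp(-t/tau)/tau: I(tau) = E[(d/dtau log p)^2], evaluated on a grid
# and compared with the analytic result 1/tau^2. Illustrative toy.
import numpy as np

tau = 2.5                          # ns, true lifetime (assumed)
t = np.linspace(0, 50, 200_001)    # fine time grid, 0..20*tau (ns)
dt = t[1] - t[0]

def log_density(tau_):
    return -np.log(tau_) - t / tau_

# Score function via central finite differences
eps = 1e-5
score = (log_density(tau + eps) - log_density(tau - eps)) / (2 * eps)
p = np.exp(log_density(tau))
fisher = np.sum(score**2 * p) * dt  # expectation over p(t)
print(f"numerical I(tau) = {fisher:.5f}, analytic 1/tau^2 = {1 / tau**2:.5f}")

N = 10_000
print(f"Cramer-Rao limit for N={N} photons: {1 / np.sqrt(N * fisher):.4f} ns "
      f"(ideal tau/sqrt(N) = {tau / np.sqrt(N):.4f} ns)")
```

Any real detection scheme (gates, bins, channels) only loses information relative to this ideal, which is exactly what a photon-efficiency figure quantifies.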

Photon-partitioning theorem

The second breakthrough was the understanding that the best description of precision in biophysical imaging techniques was possible only by defining the concept of biochemical resolving power: a generalization of the resolving power of a spectrograph to any measured photophysical parameter and, in turn, its application to biochemistry. The biochemical resolving power is proportional to the square root of the photon efficiency of a microscopy technique and of the number of detected photons. Maximization of Fisher information leads to the maximization of photon efficiency and, therefore, to net improvements in biochemical resolving power. This definition complements the definition of spatial resolution in microscopy and allows us to define when two objects are spatially and/or biochemically distinct. It is worth mentioning that this is equivalent to stating that two objects are spatially and photophysically distinct, but we use the photophysics of fluorophores to do biochemistry, hence my nomenclature. I see possible implications for other techniques, including super-resolution, and perhaps this will be the subject of future work.
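As a rough sketch of the scaling (using the photon-economy figure of merit F common in the FLIM literature, not necessarily the paper's exact notation): with N detected photons and photon economy F (F = 1 for an ideal, shot-noise-limited measurement), the relative precision of a lifetime estimate and a resolving-power figure R scale as

```latex
\frac{\sigma_\tau}{\tau} = \frac{F}{\sqrt{N}},
\qquad
R \equiv \frac{\tau}{\sigma_\tau} = \frac{\sqrt{N}}{F} = \sqrt{\frac{1}{F^{2}}\,N}
```

so, identifying the photon efficiency with 1/F², the resolving power grows with the square root of the photon efficiency times the number of detected photons, as stated above.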

The third breakthrough was the use of numerical computation of Fisher information rather than analytical solutions of equations, which are not always available. This process is very common in engineering but not in our field. We can now, therefore, optimize the properties of any detection scheme in order to attain the highest performance.
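A minimal example of what numerical optimization looks like in practice (a textbook-style toy of my own, not the paper's detection scheme): choose the gate boundary of a two-gate lifetime measurement by scanning candidate designs and maximizing the numerically computed Fisher information, with no analytic optimum required.

```python
# Toy design optimization: a two-gate lifetime measurement splits each
# photon into gate [0, T) or [T, inf). Scan T and keep the value that
# maximizes the per-photon Fisher information about tau. Illustrative.
import numpy as np

tau = 1.0
T = np.linspace(0.05, 10, 2000)            # candidate gate boundaries
p1 = 1 - np.exp(-T / tau)                  # P(photon in first gate)
p2 = np.exp(-T / tau)                      # P(photon in second gate)
dp1 = -(T / tau**2) * np.exp(-T / tau)     # d p1 / d tau
dp2 = -dp1                                 # probabilities sum to 1

# Per-photon Fisher information of the two-outcome (binomial) split
info = dp1**2 / p1 + dp2**2 / p2
T_opt = T[np.argmax(info)]
print(f"optimal gate boundary: T = {T_opt:.2f} * tau")
```

The same scan-and-maximize recipe extends to any parameterized detection scheme (gate widths, spectral channels, polarization splits) once its Fisher information can be evaluated numerically.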

This is very specialist work and I assume not many people will be interested in it, although the implications of this piece of theory for everyone’s experiments are significant. I believe this is my most elegant theoretical work, but I guess that is a matter of opinion. The paper itself had to be expanded well beyond what I wished to publish during the refereeing process, and it now includes examples, software, etc. I think the theoretical introduction and the mathematical demonstrations are the best parts, and the description of the numerical optimization of Fisher information the most useful.

NOTE: there are two typographical errors in the published manuscript, within the definitions of photon economy and separability. These are described in a comment on PLOS ONE.