I’m interested to hear which instruments people would rate as highly robust - i.e. fewest failed transfers during routine use (once you’ve optimised the method for liquid classes, labware, pipetting modes etc.).
In my experience, I have always found the Hamilton STARs to have an unexplained intermittent failure rate of 1-4% at low volumes. By this I mean that 1-4 out of every 100 transfers (of <10uL) simply don’t work, even after much optimisation. They are very robust at larger volumes, but in our hands seem to struggle at the bottom of their range.
We’ve never used TADM on our Hamiltons because, as far as I understand, the (arbitrary) acceptance criteria for each aspiration and dispense curve would need to be set manually for every liquid class, with every tip type, in every aspiration mode, and at multiple volumes; even then it won’t work reliably at low volumes (less than 10uL). I assume this is a similar limitation for all pressure-based verification techniques. Threads on this forum, for example, seem critical of Dynamic Devices’ VVP technology.
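To give a feel for why that manual setup puts us off, the number of acceptance bands multiplies out quickly. A back-of-the-envelope sketch, with purely hypothetical counts for each dimension:

```python
# Back-of-the-envelope count of TADM acceptance bands to define by hand.
# Every count below is hypothetical, just to illustrate the combinatorics.
liquid_classes = 6
tip_types = 3
pipetting_modes = 2   # e.g. surface vs. jet dispense
volumes = 4           # calibration points per combination
curve_types = 2       # one band for aspiration, one for dispense

bands = liquid_classes * tip_types * pipetting_modes * volumes * curve_types
print(bands)  # 288 bands to set and maintain manually
```

Even with modest numbers per dimension, you end up maintaining hundreds of bands, which is why we never pursued it.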
I expect that positive displacement pipettes such as the SPT Labtech Mosquito or the FAST from Formulatrix would be the most robust, but have also seen impressive marketing from vision-verified instruments such as BioFluidix and the I.DOT. However I’ve learned not to take marketing materials at face value…
Any real-world experiences that people can share would be greatly appreciated.
This is an important issue on the STAR we are constantly optimizing around. Has anyone gotten a <10ul dispense to work every.single.time?
A 10μL transfer is part of the Hamilton field volume verification test that is performed at installs and PMs, so I would say it is certainly possible.
If you could provide some more context on what issues you are experiencing, that would be helpful. What liquid, dispense mode, acceptance criteria, etc.
Is this still true with the new MagPip format?
The current MagPip field volume verification tests down to 1μL. I am actually at our facility in Switzerland to get hands-on MagPip training this week! Looking forward to finally getting a chance to personally play around with it!
Take videos and post them back here if you’re allowed! The MagPip seems so darn cool I want to see it in action on an actual system.
You’re seeing at least an order of magnitude higher error rate than I see on our machines. We’re closer to 0.05-0.1% sample dropouts at volumes <10 uL (usually 2-5 uL). We’re working with primarily water-based solutions though, which may explain it.
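To put those two failure rates side by side, here’s a quick sketch (assuming failures are independent, which is itself a simplification) of the chance that a 96-well plate contains at least one dropout:

```python
# Chance of >=1 failed transfer per plate, assuming independent failures.
def p_at_least_one_failure(per_transfer_rate, n_wells=96):
    return 1 - (1 - per_transfer_rate) ** n_wells

print(round(p_at_least_one_failure(0.02), 2))    # ~2% per transfer -> 0.86
print(round(p_at_least_one_failure(0.001), 2))   # ~0.1% per transfer -> 0.09
```

At a 2% per-transfer failure rate most plates will contain at least one dropout, while at 0.1% the large majority of plates come out clean, which matches the difference in day-to-day experience.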
I’ve never had issues with volumes > 3 uL. Anything below that, expect much more variability unless you’ve properly calibrated your liquid classes.
And even then proper calibration can lead to variable results because of other factors.
Thanks Eric, this wasn’t meant to be a troubleshooting thread but if you have any insight that would be valuable.
This is for a PCR plate setup workflow. We have 5uL sample transfers of TE-like aqueous buffer containing the extracted DNA. We use a 10uL pre-primed aspiration with cLLD and a 5mm submerge depth to avoid potential issues with bubbles. Dispense is a surface empty into 20uL of PCR master mix. When we have failures, I mean that one sample replicate will fall in the middle of our standard curve, but the other replicate will be undetermined, suggesting that no liquid was transferred. You can see this reduced volume on the assay plate. As I said, this works very reproducibly ~95-99% of the time, but we just can’t get those final few %. I know it’s unrealistic to expect 100% success from any robotic system, but it would be great to close that gap a little more.
Has anyone else had experience with the other systems mentioned? Mosquito, FAST, BioFluidix or I.DOT? Trying to find any system reviews from real end-users is difficult outside of conferences. Is there any interest in adding a review section to the forum for that?
Have you tried a fixed-height aspirate? Not sure what the volume in the source plate is, what plate type it is, or how well dialed in your liquid level detection settings are, but this is where I would probably start.
For me it works best if I add a blowout, even when I dispense to the surface. You will have bubbles in your PCR plate, but they disappear in the first 95 °C cycle and have not harmed my PCR success so far!
Have you seen air pockets/bubbles towards the bottom of the well of your aspirated sample that may be causing some missed transfers? Also, with the 5mm submerge depth, are you close enough to the bottom of the well that you may be hitting the bottom and not aspirating the volume? I’ve been using 2.5-3.5mm submerge depths for aspirating aqueous buffers.
When in doubt, fixed heights.
If an ALH were failing to aspirate any volume, I would assume pinching is happening somewhere or you’re working with something that’s very tough (lots of bubbles, inconsistent, clots, etc…)
There are also plate readers that can be trained to detect the absence of liquid in a well.
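As a rough sketch of that idea (the threshold and readings here are made up; a real cutoff would be trained against known-good and known-empty wells):

```python
# Flag wells whose reading suggests no liquid was transferred.
# EMPTY_THRESHOLD and the sample readings are hypothetical.
EMPTY_THRESHOLD = 0.05

def flag_empty_wells(readings):
    """readings: {well: absorbance}; returns wells that look empty."""
    return sorted(w for w, a in readings.items() if a < EMPTY_THRESHOLD)

readings = {"A1": 0.42, "A2": 0.04, "A3": 0.38}
print(flag_empty_wells(readings))  # ['A2']
```

Flagged wells could then be re-pipetted or excluded before the plate goes on to the assay, rather than discovering the dropout in the results.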
I have predominantly worked in a super high throughput environment (millions of samples per year, 24/7 lab), and a lot of the liquid handling issues were due to other factors (maintenance, teaching irregularities, non-optimized definitions, poorly trained staff). Yes, o-rings went bad and tips had lot issues, but the hardware itself was generally robust.
We tried fixed height as one of the first fixes, but still had issues. In fairness though, we have optimised other parts of the pipetting since then, so it might be worth revisiting this. I was worried about potential sealing against the bottom of the well, so I added a pre-defined 1.5mm tolerance in the bottom definition (we always have plenty of dead volume to cover this). Do you think that the cLLD could be tripped above the real liquid height, causing aspiration of air instead of sample? We do have sensitivity set to max.
I don’t think that blowouts and bubbles are the issue.
I’d start with the fixed heights just to answer that question first, because it’s the simplest solution to implement. Then it’s worth revisiting the other components. I think this is where some Hamilton LC pros like Eric can chime in about setting sensitivity parameters to the max.
I’ve found that, depending on the software, there’s a weird interplay between the hardware movement, labware definitions, liquid tracking during asp/disp, and when events are triggered. I’ve learned the hard way to clean these up and keep it simple.
There are a lot of great suggestions in this thread, but admittedly, it can be challenging to troubleshoot liquid handling remotely.
Gareth, I reached out to my UK colleagues and they should be reaching out to see if they can assist in any further optimizations. Perhaps an update can be provided after such an assessment happens?
yes absolutely! I would be very interested!
We never have issues with liquid transfers down to 10% of the tip volume. Our most used tips are 50uL, 300uL, and 1000uL. Once we go below 5uL with the 50uL tip we need to use a custom liquid class. This class was optimized and provided by Hamilton. I’ll share the details.
As far as sub-5uL liquid transfers go, do you do these with liquid contact (surface dispense)? We routinely load 2uL into a Stunner plate without issue. We even pipette critical reagents as low as 1uL and see very little variability in UV or TIC peak area.
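That 10%-of-tip-volume rule of thumb from the post above is easy to tabulate for the tips mentioned (nothing instrument-specific here, just the arithmetic):

```python
# Comfortable minimum transfer volume per tip at the ~10% rule of thumb.
tip_volumes_uL = [50, 300, 1000]
minimums = {tip: tip * 0.10 for tip in tip_volumes_uL}
print(minimums)  # {50: 5.0, 300: 30.0, 1000: 100.0}
```

Which lines up with needing a custom liquid class once you push the 50uL tip below 5uL.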