General thoughts on standardization of liquid handlers

I’m curious what opinions the experts here have about validation and standardization of liquid handler testing for workflows. Are LHs more akin to glorified pipettors, thus requiring simple gravimetric calibrations? Are they closer to a process, assay, or workflow, requiring more elaborate testing? Or closer still to programs that have I/O from scientists and need full programmatic validation? I haven’t been able to dive into ISO 23783:2022 parts 1-3, but I’m curious if anyone has implemented it in their own labs to appease QA and to build confidence in their automation.

My standard approach is to track everything into a data lake and update my methodology as new problems and challenges arise, but it would be great to see if there’s a general consensus among folks that do validation and workflow setup for a living.


You really came to the right place to ask this! PyLabRobot is at its core a project to define abstractions over the functionality of liquid-handling robots in order to create an ecosystem of interoperable applications and tools, along with an open-source developer community to create and support these. By posting on this forum you are contributing to the project, so thank you!

For the PyLabRobot framework, we define a liquid-handling robot as a machine capable of aspirating and dispensing precise volumes of liquid within a Cartesian coordinate system. I think your question is a little deeper than that though, because you want to know about workflows.

Beyond gravimetric (i.e. volumetric) calibration, there are important physical phenomena that become relevant when dealing with sensitive biomolecules and materials (e.g. cells). These are affected by the speed and acceleration of pipetting, and they are much harder to validate than pure volumetrics because you need a more specific downstream assay to measure and optimize these parameters.

For instance, creating a stem cell passaging workflow involves optimizing aspiration and dispense speed for handling the cells. You need to measure cell viability to tell you what speed to use. This is much harder than gravimetric measurement, and will vary depending on the material being handled.

I think the two ways we can think about solving this are generalizing across robots using physical models (eg accounting for bore size and shear stress to approximately translate liquid classes across robots) and generalizing across workflows by breaking them down into modular unit operations that can be reused across different workflows, creating a sort of library of unit operations. In my opinion both of these are much easier in Python than any GUI-based interface.
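To make the "library of unit operations" idea concrete, here is a minimal Python sketch. All names (`LiquidClass`, `transfer`, `passage_cells`) and parameter values are hypothetical illustrations, not PyLabRobot API: a liquid class bundles the pipetting parameters that matter for shear-sensitive material, and a workflow is just a composition of reusable unit operations.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "liquid class" bundles the pipetting parameters
# that affect shear-sensitive materials, so the same unit operation can
# be re-parameterized per material (or, with a physical model, per robot).
@dataclass(frozen=True)
class LiquidClass:
    aspirate_speed_ul_s: float   # flow rate during aspiration
    dispense_speed_ul_s: float   # flow rate during dispense
    air_gap_ul: float = 0.0      # trailing air gap

# A gentle class for cells vs. a fast class for buffers (illustrative values).
CELLS = LiquidClass(aspirate_speed_ul_s=20, dispense_speed_ul_s=20)
BUFFER = LiquidClass(aspirate_speed_ul_s=100, dispense_speed_ul_s=150, air_gap_ul=5)

def transfer(source: str, dest: str, volume_ul: float, lc: LiquidClass) -> dict:
    """A reusable 'transfer' unit operation. This dry-run sketch returns the
    command it would send to a robot backend instead of executing it."""
    return {
        "op": "transfer", "source": source, "dest": dest,
        "volume_ul": volume_ul,
        "aspirate_speed_ul_s": lc.aspirate_speed_ul_s,
        "dispense_speed_ul_s": lc.dispense_speed_ul_s,
        "air_gap_ul": lc.air_gap_ul,
    }

def passage_cells(plate_src: str, plate_dst: str) -> list[dict]:
    """A workflow composed from unit operations (grossly simplified)."""
    return [
        transfer(plate_src, "waste", 100, BUFFER),        # remove spent media
        transfer("fresh_media", plate_src, 100, BUFFER),  # add fresh media
        transfer(plate_src, plate_dst, 50, CELLS),        # move cells gently
    ]
```

The point is that the cell-passaging workflow never hard-codes speeds; swapping `CELLS` for a class optimized on a different robot (or validated against a viability assay) changes nothing else.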


If you think about it, liquid handlers are akin to glorified pipettors the way that cars are akin to horses.

I think of standardization in two forms:

  1. Localized: Standardizing definitions, parameters, software, design methodologies within a workflow, across methods, and across systems.

    • This is great for QA appeasement and confidence building, may help increase funding, and can even be fun; done poorly, it will just lead to way more troubleshooting, etc…
  2. Global: Regulatory standardization for auditing purposes or some regulatory body.

    • This is usually legally required and essential for business partnerships, and it means lots more documentation, so get used to it… Failing to meet standards leads to a loss of accreditation, and you will lose your job.

Ultimately it depends on the scope of the project or company goals but you should ALWAYS be practicing #1. However if you’re building complex tasks for a regulatory body (FDA, CLIA, or other regulatory agencies) then you are legally required to adhere to #2. Therefore (and in my humble opinion), it makes sense to build something from the ground up with #1 and with #2 in mind.

Now how you standardize something can vary from workflow to workflow and potentially even person to person. Some folks view this as the most annoying part of what we do (LC validation, documentation, requirement building, design methodology, etc…) but it’s literally why we have jobs and it’s why some folks get paid a lot.

However, someone who can program and build something while managing all of the aforementioned is top notch. In some instances a gravimetric test is fine, and other times you want something more complex, akin to an Artel system. Perhaps for some workflows the real validation lies with the sequencing results, cell counts, or flow cytometry data. However you want to make sure that each step of the process is, at the very least, working as advertised. And so the standardization hierarchy for me has been…

  1. Liquid handling validation (the pipette step is working as intended)
  2. Method validation (whole programming script works as intended)
  3. Process validation (workflow with whole of downstream and upstream because your method may be part of a whole process with items that aren’t integrated and may impact final results)
  4. Data management validation (data analysis, LIMS, final results because as an engineer you can use this data for future optimizations or to gather enough data to eliminate results criteria if the workflow becomes super stable)
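As a concrete sketch of level 1, a gravimetric check converts replicate balance readings into volumes and reports systematic error (inaccuracy) and random error (CV). This is an illustrative Python sketch, not code from any standard; the pass/fail thresholds and replicate data are placeholders you would replace with your own acceptance criteria (e.g. per ISO 8655 / ISO 23783).

```python
import statistics

def gravimetric_check(masses_mg, target_ul, density_mg_per_ul=0.9982,
                      max_inaccuracy_pct=5.0, max_cv_pct=5.0):
    """Evaluate replicate dispense masses (mg) against a target volume (µL).
    Density defaults to water near 20 °C; thresholds are placeholders."""
    volumes = [m / density_mg_per_ul for m in masses_mg]      # mg -> µL
    mean_v = statistics.mean(volumes)
    cv_pct = 100 * statistics.stdev(volumes) / mean_v          # random error
    inaccuracy_pct = 100 * (mean_v - target_ul) / target_ul    # systematic error
    return {
        "mean_ul": mean_v,
        "inaccuracy_pct": inaccuracy_pct,
        "cv_pct": cv_pct,
        "pass": abs(inaccuracy_pct) <= max_inaccuracy_pct and cv_pct <= max_cv_pct,
    }

# Ten (made-up) replicate 2 µL dispenses of water, weighed in mg:
result = gravimetric_check(
    [1.98, 2.01, 1.95, 2.02, 1.99, 2.00, 1.97, 2.03, 1.96, 2.01],
    target_ul=2.0,
)
```

The nice part of scripting this is that every run can be logged with the method version and instrument ID, which feeds directly into levels 2-4 above.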

I second what Stefan said: if you’re using a liquid handler that allows you to break items down into modular unit operations… you should, because that standardization (or ability to standardize) will create a library of reusable components that will open programmatic floodgates of opportunity. Once you begin to adopt that sort of framework, you’ll quickly realize that you SHOULD also be applying the same methodology to everything else that CAN be standardized, because it will save you a ton of time and allow you to standardize your work so that you can appease QA, build confidence, or heck… troubleshoot a lot less.


Excellent and very thorough post


Great answers, I love this sort of feedback. In short, I suppose it does come down to your application. Are you just trying to make your routine lab work easier, or are you looking into diagnostics? Both have their own challenges and degree of adherence to complex routines/SOPs such as ANSI/SLAS/ISO standards.

I suppose my initial thought was: if your automation leads to a downstream result that is a reproducible PASS, would that cover your LH/LC/method/etc. validation just by empirically driven data? Probably not. Or at least not well enough to convince anyone outside your own lab that everything is working (and will continue to work) as intended. Roll up your sleeves and build a good, well-defined, ground-up process that doesn’t cut corners. Somehow this is starting to remind me of housework… all the extra steps, like priming before painting or refinishing surfaces, are rarely appreciated, but the end product is always better when they’re included.


It definitely depends on the application, but also on how far along the team (automation folks and scientists) may be with the process. If you’re using liquid handlers to help inform process development (for example), you may be a little more… free. However, if you’re translating a well-established manual protocol, you don’t want to be stuck in a situation where it’s your word versus skeptical scientists/PMs/directors. You want to be able to say: look, my liquid-handling system is actually transferring that 2 µL, and here’s robust data to prove it.

Skepticism towards automation in the life sciences is still high, and that becomes a problem because lab automation is also a highly collaborative process with lots of different stakeholders. It’s not uncommon to have to convince folks that your shiny new machine is actually not the source of problems. Standardization helps lead to accountability, which leads to trust. It’s just smart, no matter the size of the company or the application.


TL;DR - Really just a cheerleading post about this online community, saying we are the future of standardization!

I think @luisvillaautomata is right that we should all practice #1 as much as we can. But here’s the thing: every single lab you walk into, every one of them, will have implemented different #1’s. Really, the only thing we all agree on is the SBS plate format :face_with_peeking_eye:

That is insane.

I do think with a crew like this though rallying together and applying for real ISO standards for things like labware definitions, worklists, liquid handling parameters, etc. is a reality we can achieve!

I mean hey, look at us, we already have some grassroots initiatives for labware file data model standardization.
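To show how small the surface area of such a standard could be, here is a hypothetical sketch of a vendor-neutral labware definition plus a validator. The field names are illustrative inventions, not from any existing standard; the footprint numbers (127.76 × 85.48 mm, 9 mm well pitch) come from the ANSI/SLAS microplate standards.

```python
# Hypothetical sketch of a minimal, vendor-neutral labware definition.
# Field names are illustrative; dimensions follow ANSI/SLAS 1-2004 (footprint)
# and the common 9 mm pitch of a 96-well plate.
PLATE_96 = {
    "name": "generic_96_wellplate_360ul",
    "format": "ANSI/SLAS 1-2004",
    "rows": 8,
    "columns": 12,
    "well_volume_ul": 360.0,
    "well_spacing_mm": 9.0,   # center-to-center pitch
    "dimensions_mm": {"x": 127.76, "y": 85.48, "z": 14.35},
}

REQUIRED_FIELDS = {"name", "format", "rows", "columns",
                   "well_volume_ul", "well_spacing_mm", "dimensions_mm"}

def validate_labware(defn: dict) -> list[str]:
    """Return a list of problems; an empty list means the definition passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - defn.keys())]
    if defn.get("rows", 0) * defn.get("columns", 0) <= 0:
        problems.append("rows * columns must be positive")
    return problems
```

Once definitions like this live in plain files with a shared validator, any robot frontend can consume them, which is exactly the interoperability a standard is supposed to buy.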

There will always be room for secret sauce optimization on top of baseline standards. But we don’t even have “Hello World!” programs for our tools.

I think that ends here with this community though. We are EXACTLY the people that can put new standardizations into place.


This is the root of the issue. During validation (or really any workflow), we can definitely create the most stringent and robust plan possible to cover every possible situation and requirement, but it would be much better if there was a consensus about what was needed vs covering every base. It sounds strange at first, but I don’t think this is a field where every solution needs to be “perfect” and in fact, “good enough” is often the best strategy. In my mind, good gets the job done, great means there’s minimal backtracking to perform, and perfect is something to strive for when you’ve got nothing else better to spend your time on.

Development/version traceability, sample tracking, user management, and LC validation are likely the fundamentals of what is needed. Everything else is probably derivative to those categories, and empirical testing would cover overall performance.


It sounds strange at first, but I don’t think this is a field where every solution needs to be “perfect” and in fact, “good enough” is often the best strategy.

It’s not strange at all. Of course it’s going to be different, but consistency is key. I know companies where they’re not even consistent about their setups, naming conventions, etc… It’s a support disaster and makes it supremely difficult for the next manager or director to make their mark without first refactoring and eliminating a lot of bad habits from their engineers.

My experience has been about aligning expectations. Lab people who have never worked with robots before often have it in their minds that robots will work 100% of the time without errors. Laboratory automation is still a fairly new field, with new and emerging technologies and lots of limitations. So it’s our job to help lab personnel understand those limitations, along with the fact that automation is a process improvement. It’s always good to point out the steps with higher-than-normal failure rates and how to correctly resolve them. Frustration usually comes from users not understanding failure modes.


+1 to this.

A lot of automation’s upfront cost lies in aligning expectations, and subsequently defining them as part of that alignment. Well said.