Writing a new backend / Agnosticity

Hello everyone! Thanks for the space.

A few mates and I have developed yet another pipetting robot, more or less like the OT1.

I am now looking for a more mature “general purpose” python framework to write the protocols, and hoped that PyLabRobot’s frontend could be it.

I think we need more guidance before committing to writing a new backend, so here are a few questions to get started.

Disclaimer: I’m sorry if these initial questions do not make sense, part of this post is about getting my bearings.


What is the benefit of using PyLabRobot?

There are other options available, such as:

  1. forking the more widespread OT2 API,
  2. trying to write an implementation (backend) for LabOP, or
  3. offering a GUI to define protocols, plus headless tools only for runs/debugging (i.e. not developing an API), which we have already done. :slight_smile:


What would be the first steps in adding a new backend? What are the important things to consider?

The CONTRIBUTING.md file says: “… Implement the methods in the class” for a backend.

While I could use those source files to start writing, I’d like to know if there is a more textual definition of the methods required by the module (i.e. what methods are needed, and their inputs/outputs). Personally, I’d need more comprehensive contributing instructions for this part.

Though I’ve had a look at backend.py and opentrons_backend.py, I could not grasp the general direction for development.


Besides the source files: is the pylabrobot.liquid_handling package API page the main resource for development?

Is there a definition of the agnostic front end commands required by the module?

Thank you for the work you are doing!




Thanks for your interest! I’ll try to address Q0, and Rick will respond shortly to address the rest of your questions.

The benefit of using PLR is, as you note, its agnostic/cross-platform nature. It is possible to build software tools that benefit development across all robots in the ecosystem, rather than duplicating work for each brand. The user experience is also cross-platform, which can be very helpful.

  1. Forking the more widespread OT2 API

It really depends on what you are reusing and what you are building yourself. OT2 architecture is roughly

HTTP API <-> Python logic <-> hardware specific commands for OT2

I think you are proposing:

HTTP API <-> Python logic from OT2 <-> hardware specific commands for your robot

I think it would actually be a very substantial investment to understand and recontextualize OT2’s logic into a new hardware setting. The codebase is quite large, arguably more so than is absolutely necessary. My team built a pretty solid understanding of their codebase and concluded that we would not repeat their code design if building from scratch.

  2. trying to write an implementation (backend) for LabOP

I am not really familiar enough with LabOP to say definitively how this would be done. I would expect one caveat to be that users would have to buy into the entire LabOP way of doing things just to write their own aspirate and dispense commands. I don’t know how LabOP communicates to liquid-handlers under the hood, but unless they have something like PLR’s abstractions, there is some inherent loss of flexibility and generality to their interactions with liquid-handling robots.

  3. offer a GUI to define protocols and headless tools only for runs/debugging (i.e. not developing an API), which we already have done. :slight_smile:

The advantage of a programmatic API over a GUI is easy integration with external software and hardware resources, and all the advantages of writing software in a modern programming environment.

I’m happy to go into more detail on any of these, and would love to learn more about your and others’ perspectives on LabOP.


Hi and thanks for the question!

Fully agree with everything Stefan said.


I will answer your second question first. PyLabRobot manages everything from a layout manager to high-level protocol validation and composition. If you choose to work with PyLabRobot, you do not have to (and in fact should not) replicate this layer. If anything on this high level turns out to be incompatible with what you’re trying to do, we’d be more than happy to work those issues out (thereby creating an even more generalizable framework for everyone). This means there is a relatively small amount of work required to get a full framework.

The first step would be to write software that performs the 4 atomic liquid handling operations (tip pickup & drop, and aspiration and dispensing) at arbitrary locations (specified by coordinates wrt some origin), exposing the low-level hardware programming to Python. Depending on your current setup, this is arguably the hardest step.

The second step is easy: a PyLabRobot backend receives operations (defined in standard.py) which contain all the information necessary to perform the 4 atomic operations. For your new backend, you should integrate the code previously written for your hardware into the four backend methods corresponding to those operations. In liquid_handling/errors.py you will find standard errors that your backend can raise; I would consider these the ‘output’ of the methods. That is what “Implement the methods in the class” refers to, and I hope it answers your question about a more textual definition of the required methods and their inputs/outputs.

The Opentrons backend in PLR is a little awkward because we chose their HTTP API as our communication channel. This choice was made primarily because it does not require any installation on the Opentrons’ onboard computer. The downside is that we have to deal with an API that makes a lot of assumptions about the layout manager.

backend.py defines an abstract base class for backends, sort of like a blueprint. It really just lists the headers for the methods you’d be implementing. (Note that you will probably just want to raise a NotImplementedError for most of them.)
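To make the “blueprint” idea concrete, here is a minimal, self-contained sketch of the pattern. All class, method, and field names here are illustrative only; the authoritative signatures live in backend.py and standard.py.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Illustrative stand-in for an operation object like those in PLR's
# standard.py; the real classes carry more fields (tip type, flow rate, ...).
@dataclass
class Aspirate:
    x: float
    y: float
    z: float
    volume: float

class LiquidHandlerBackend(ABC):
    """Sketch of the abstract base class idea: it only lists the method
    headers ("blueprint") that a concrete backend must implement."""

    @abstractmethod
    def pick_up_tips(self, op): ...

    @abstractmethod
    def drop_tips(self, op): ...

    @abstractmethod
    def aspirate(self, op): ...

    @abstractmethod
    def dispense(self, op): ...

class MyRobotBackend(LiquidHandlerBackend):
    """Translates each atomic operation into hardware-specific commands."""

    def __init__(self):
        self.sent = []  # stand-in for a serial/HTTP channel to the robot

    def pick_up_tips(self, op):
        self.sent.append(f"PICKUP at ({op.x},{op.y},{op.z})")

    def drop_tips(self, op):
        self.sent.append(f"DROP at ({op.x},{op.y},{op.z})")

    def aspirate(self, op):
        self.sent.append(f"ASP {op.volume}uL at ({op.x},{op.y},{op.z})")

    def dispense(self, op):
        self.sent.append(f"DSP {op.volume}uL at ({op.x},{op.y},{op.z})")
```

The abstract base class enforces at instantiation time that every atomic operation has an implementation, which is exactly why a new backend “just” fills in these methods.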


Yes, in addition to other pages on the docs website and this forum.

I hope the above answers this question.

To be specific, in PyLabRobot we call LiquidHandler the front end (which is fully agnostic to the physical robot). Backends are the objects that convert this high level intent into concrete, robot specific commands. So in order to write a new robot integration, one would be required to write backend commands, which will then be automatically compatible with the front end and the rest of the package.
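As a toy illustration of that split (all class names and command strings below are made up, not PLR’s actual API): the protocol only ever talks to the front end, and swapping the backend changes which robot-specific commands come out.

```python
class LiquidHandler:
    """Toy stand-in for the front end: robot-agnostic methods that just
    forward the high-level intent to whichever backend is plugged in."""

    def __init__(self, backend):
        self.backend = backend

    def aspirate(self, volume):
        return self.backend.aspirate(volume)

class RobotABackend:
    def aspirate(self, volume):
        return f"robot-A firmware command: aspirate {volume}"

class RobotBBackend:
    def aspirate(self, volume):
        return f"robot-B G-code: aspirate {volume}"

def protocol(lh):
    # the protocol itself never mentions a specific robot
    return lh.aspirate(50)
```

The same `protocol` function runs unchanged against either backend; only the constructed `LiquidHandler` differs.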

Hope that is useful! Let me know if you have any further questions!

1 Like

Hello Stefan and Rick! Thank you both for the detailed responses <3


Indeed. There are many open pipetting robot projects out there, and gathering around a more agnostic protocol programming framework seems important.

From what I interpret, each PLR protocol has a few lines loading the backend modules, specific to the robot, and the rest of the “syntax” (i.e. the way of using python to program protocols) is the hardware-agnostic part.

Q0.1: Correct?

Almost. I’ve been considering something like this:

[some GUI <->] Python syntax from OT2 <-> hardware specific commands for your robot

This would totally avoid the issue of adapting OT2’s codebase to new hardware, which I agree would be hard.

And, in this way, the protocols in OT2’s database could be reused by anyone. This interoperability layer seemed great, because it bridges to the existing OT community.

Q0.2: Can PLR use OT2 protocols and their “context”?

I’ve invited some of them here, I hope they join this chat :slight_smile:


I’ll need to study PLR more to understand this exactly. Every automation project, including ours, has come up with different terms for the same stuff.

That is great to hear!

Many of my hesitations come from this point. The robot we made can tool-change, and that adds so much flexibility to lab automation, that it becomes really hard for me to think about a general framework.

My best idea so far is to define (and grow) a list of atomic actions, starting with the ones for liquid handling with micropipettes, and slowly add others as hardware modules becomes available (e.g. “pick a colony”, “take a picture”, “spin the tubes”, etc.)

Any capability not provided by a particular backend should be delegated to a human (which is what currently happens) or raise an error, but the action should still be expressible in the protocol.
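One hypothetical way to sketch that fallback in plain Python (the helper name and behavior are my own, not an existing PLR feature):

```python
def run_action(backend, action, *args):
    """Hypothetical helper: dispatch an action to the backend; if the
    capability is missing, hand the step to a human instead of aborting."""
    method = getattr(backend, action, None)
    if method is None:
        return f"HUMAN: please perform '{action}' manually"
    try:
        return method(*args)
    except NotImplementedError:
        return f"HUMAN: please perform '{action}' manually"

class PipetteOnlyBackend:
    """A backend with pipetting but no colony-picking hardware."""

    def aspirate(self, volume):
        return f"aspirating {volume} uL"

    def pick_colony(self):
        raise NotImplementedError("no colony-picking tool on this robot")
```

This way a protocol can list every step, and the same script degrades gracefully on less capable robots.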

Q1.1: Would implementing this in PLR make sense?

Eventually, if a robot gains a missing capability, less protocol re-programming would be needed.

It won’t be hard to add location parameters to the underlying functions. :slight_smile:

The “atomic” actions we have defined so far are: HOME, PICK_TIP, LOAD_LIQUID, DROP_LIQUID, DISCARD_TIP, PIPETTE, COMMENT, HUMAN, WAIT.

Tool-change is handled automatically by our module, because each action specifies which pipette or tool must be grabbed from the parking posts.

I’d need some more guidance and experimentation to learn exactly which information is stored and passed by PLR, and how. I expect this to take more time than changing our code.

Another aspect I might have missed before is about resources.

In our current setup, the GUI populates a Mongo database with every definition (which are all JSON essentially), and then passes only a protocol name to the machine controller.

Q1.2: Can PLR store or load resource definitions from a database, or files?

Q1.3: Would this final layout make sense?

[Our GUI] <-> [PLR + PLR Backend] <-> [slicer + controller specific to a robot]

I can see that there’s been great effort in documenting PLR, and congrats on that. I’ll have a better look around considering what you explained, and come back if I fail to make progress.

And finally…

Q2.1: On the other hand, if you’re interested, we could set up a brief call to better outline development and map our project’s components to PLR. I’d be glad to contribute to the contributing guide. :stuck_out_tongue:

Sorry for the long post, and thanks again for your help and the amazing effort!



PS: Here’s our project pipettin-bot / Pipetting Bot · GitLab

Great questions.

Yes. The backend and the specific layout of resources on the deck (but not the resources themselves) are defined before the protocol and are specific to the robot. After that, the protocol is shareable across all robots.

See the “Writing robot agnostic methods” page in the PyLabRobot documentation for an example.

No, but if anyone wants to build this, I imagine it would not be too hard. The approach would likely be to create a custom ProtocolContext that talks to PLR instead of an InstrumentContext. It will not be the prettiest, but for the purpose of transitioning it should work. I would like to point out that PyLabRobot is completely interactive (think Jupyter notebooks), and it does not require a context.

I’m thinking these features would not be methods of LiquidHandler (which is just the front end to liquid handling robots), but rather get their own front end class that in turn has different backends. As an example, there could be a Camera class that exposes take_picture, which then similarly forwards this command to a backend. The backend in this case could be the same liquid handler, or an external camera.
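A minimal sketch of that idea, with all names hypothetical (PLR does not necessarily ship these classes):

```python
class CameraBackend:
    """Hypothetical backend interface for imaging hardware."""

    def capture(self):
        raise NotImplementedError

class UsbCameraBackend(CameraBackend):
    """One possible backend: an external USB camera."""

    def capture(self):
        return "frame-from-usb-camera"

class Camera:
    """Hypothetical front-end class: exposes take_picture and forwards it
    to a backend, mirroring how LiquidHandler forwards liquid handling
    operations."""

    def __init__(self, backend):
        self.backend = backend

    def take_picture(self):
        return self.backend.capture()
```

The point is the shape of the design: each device family gets its own agnostic front end, and the robot (or accessory) specifics stay in interchangeable backends.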

PLR aims to be a complete and modular package for lab automation, so to answer Q1.1: yes, that does make sense.

FYI, this is set up in PLR.

What do these commands do exactly?

In PLR, I would implement the automatic tool switching by loading the tool from the parking spot at the start of each atomic function (if that tool is not already loaded, of course). It’s robot specific behavior, so it goes in the backend.
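A sketch of that pattern (illustrative names, not PLR API): each atomic method first ensures the required tool is mounted, grabbing it from its parking post only when it isn’t already loaded.

```python
class ToolChangingBackend:
    """Sketch: every atomic operation begins by ensuring the right tool is
    mounted, fetching it from the parking post if needed. This is
    robot-specific behavior, so it lives in the backend."""

    def __init__(self):
        self.current_tool = None
        self.log = []  # stand-in for commands sent to the robot

    def _ensure_tool(self, tool):
        if self.current_tool != tool:
            if self.current_tool is not None:
                self.log.append(f"park {self.current_tool}")
            self.log.append(f"grab {tool} from parking post")
            self.current_tool = tool

    def aspirate(self, volume, tool="p200"):
        self._ensure_tool(tool)
        self.log.append(f"aspirate {volume} uL with {tool}")
```

Repeated operations with the same tool incur no extra tool changes; switching tools parks the current one first.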

Keep in mind that all functionality defined by a backend can still be available to the user if you wish.

To give a quick summary: the location of all resources on deck and relevant parameters for the liquid handling operations (think volume, flow rate, etc.). See standard.py.

Yes, of course, resources in PLR are just Python objects.

They are also completely robot agnostic. For example, you can use labware originally defined by Hamilton on an Opentrons without any extra effort.

Yes, that is exactly what I designed PLR to support.

That would be great! Let me know when a good time is! (My DMs are open)

That looks amazing!

1 Like

Excellent, will do!

Good news for future interop.

Off-topic: I posted first on the OT repo, but didn’t get as far (issue 11542). I also just came across “RobotsByDerAndere”, who commented on the OT repo (issue 4078) with a similar initiative. It would be cool to contact them.

Seems like the sensible approach. I’d need to think about it a bit more, but the worry is not so important for now.

  • PIPETTE: moves the pipette’s shaft only, a lower-level action used for mixing or similar things.
  • COMMENT: no action, just a comment on the protocol, serves as documentation.
  • HUMAN: display a message on the web GUI, describing what a human is required to do (spin tubes).
  • WAIT: do nothing for the given amount of time.
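To illustrate the shape of such a protocol, here is a hypothetical JSON-style encoding of these atomic actions (field names are invented for the example, not the project’s actual schema):

```python
# Hypothetical JSON-style encoding of the atomic actions described above;
# every field name here is illustrative only.
actions = [
    {"cmd": "PICK_TIP", "tool": "p200", "item": "tiprack_1", "well": "A1"},
    {"cmd": "LOAD_LIQUID", "volume": 100, "item": "tube_1"},
    {"cmd": "DROP_LIQUID", "volume": 100, "item": "plate_1", "well": "B2"},
    {"cmd": "HUMAN", "message": "spin the tubes"},
    {"cmd": "WAIT", "seconds": 30},
    {"cmd": "DISCARD_TIP"},
]

def tools_needed(actions):
    """Because each action declares its tool, the tool changes a run will
    require can be derived by scanning the protocol up front."""
    return [a["tool"] for a in actions if "tool" in a]
```

Declaring the tool per action is what makes the automatic tool-change behavior described earlier possible.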

Cool, it’s already doing this. :slight_smile:


Q: Would there be interest in supporting a “generic” robot?

Some projects around lab automation are using Klipper, and I’ve started to migrate to it as well. Stefan and I may have already had a short chat on Jubilee’s lab-automation Discord channel a while ago.

Klipper is a 3D-printer software and controller firmware; it runs on a PC and can control multiple MCUs synchronously.

A Klipper backend for PLR seems like a nice thing to support. Since its motion planner is controller-agnostic (as long as a Klipper firmware has been written for the controller), it would add to the overall modularity of pipetting robots.

Since I am moving to Klipper, I would probably implement this anyway. What I would ask for is some advice on how to do it properly. Even though I did a lot of coding, I’m not really a programmer.

Will do ^.^

Thanks! You can also have a look at the other projects I’ve come across: Robot project list (#63) · Issues · pipettin-bot / Pipetting Bot · GitLab

PS: sorry for the lack of links to the stuff I mentioned, I can only paste 2 links as a new user.


Klipper seems like a really great system, and we’re definitely interested in supporting these types of devices.

1 Like

For sure. I think PyLabRobot already addresses a lot of the points raised in those threads, and I’d love to have a chat about your ideas.

That sounds useful.

Also somewhat off-topic: I am planning a ‘teaching’ GUI (a tool to determine the location of labware using the pipetting head) that also integrates with my game controller project. Having a ‘move’ operation as an atomic operation would allow this program to work universally.

Just to sketch out the scope of the liquid_handling package in PLR: to me, these sound too high-level for the PyLabRobot LiquidHandler abstraction. I can totally see them being part of some GUI backend, and they definitely should be, but in a Python environment these are all things that are handled at the user level (using time.sleep, logging, your own code, etc.).
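For example, rough user-level equivalents of COMMENT, HUMAN, and WAIT in a plain Python protocol script might look like this (a sketch only, not PLR code):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("protocol")

def run_protocol(sleep=time.sleep):
    # COMMENT equivalent: a plain code comment or a log line
    log.info("Prepare the master mix")

    # HUMAN equivalent: a blocking prompt for the operator
    # (commented out so this sketch runs unattended)
    # input("Spin the tubes, then press Enter")

    # WAIT equivalent: plain time.sleep
    sleep(0.01)

    return "done"
```

In other words, a general-purpose language already provides these control elements, so the library does not need to.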

Of course. I’d view the Klipper-based backend as a ‘group’ of robots. Something similar currently exists in PLR with the SerializingBackend, an abstract base class for all backends that serialize the operations they receive. Its subclasses (HTTPBackend, SerializingSavingBackend and WebSocketBackend) know how to send the serialized input over a particular channel. For the Klipper-based backend, something similar could exist: an abstract base class implements the shared operations, and subclasses (one for each robot) extend it by implementing the robot-specific operations.
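A compact sketch of that layering, with made-up class names and a toy G-code translation (this is the pattern, not PLR’s actual SerializingBackend):

```python
import json
from abc import ABC, abstractmethod

class SerializingBackendSketch(ABC):
    """The base class serializes every operation; subclasses only decide
    how (or where) the serialized payload travels."""

    def aspirate(self, volume, x, y, z):
        self.send(json.dumps(
            {"op": "aspirate", "volume": volume, "x": x, "y": y, "z": z}))

    @abstractmethod
    def send(self, payload: str): ...

class WebSocketLikeBackend(SerializingBackendSketch):
    """Forwards the serialized op over a channel (here: an in-memory list)."""

    def __init__(self):
        self.outbox = []

    def send(self, payload):
        self.outbox.append(payload)  # a real subclass would push to a socket

class KlipperLikeBackend(SerializingBackendSketch):
    """Hypothetical: translate the serialized op into G-code-style moves."""

    def __init__(self):
        self.gcode = []

    def send(self, payload):
        op = json.loads(payload)
        self.gcode.append(f"G1 X{op['x']} Y{op['y']} Z{op['z']}")
```

The shared base class keeps the serialization logic in one place, while each subclass stays small and robot- (or channel-) specific.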

I hope the above is clarifying, and I’d be happy to help along the way!

Should be fixed :slight_smile:

1 Like

Following up on Rick’s comments on these, I would choose not to handle these within PLR because there is no command being sent to a robot in this case. These are control elements that make sense in the scope of a protocol, but PLR is rather strictly a library for talking to robots and in some cases equipment integrated with robots.

The case I would make here is that there is a lot of behavior that can be encompassed by non-robot protocol control elements. It will be better to rationalize the responsibilities of each library in explicit terms to minimize confusion about what belongs where. I definitely think these control elements are very valuable and make sense alongside PLR, but not within PLR, so that we can be clear about delegation of responsibility.

1 Like

@naikymen I also changed your trust level so you can post more links now

1 Like

Notes from today’s meeting

Sharing the good stuff!

Participants: rick and nico.


Rick’s diagram.


PLR data is represented as (de)serializable Python objects, which means they can be converted to and from JSON.

Option 1: Output PLR format.

  • The JSON structure is not very well documented.
  • What is needed is that the GUI exports a suitable JSON PLR definition of the protocols and items.

Rick: You’re probably best off instantiating objects in Python and calling .serialize() :). Lmk if you need help there.
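A toy illustration of that serialize-to-JSON round trip (the Resource class below is a stand-in; real PLR resources expose .serialize() with a richer schema):

```python
import json

class Resource:
    """Toy stand-in for a PLR resource object."""

    def __init__(self, name, size_x, size_y, size_z):
        self.name = name
        self.size_x = size_x
        self.size_y = size_y
        self.size_z = size_z

    def serialize(self):
        # return a plain dict so it can be dumped straight to JSON
        return {"name": self.name, "size_x": self.size_x,
                "size_y": self.size_y, "size_z": self.size_z}

plate = Resource("plate_1", 127.0, 86.0, 14.0)
as_json = json.dumps(plate.serialize())  # ready to store in a file or DB
```

A GUI (or a Mongo-backed store) could persist exactly this kind of JSON and reconstruct the objects later.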

Option 2:

  • Have the GUI write a protocol in PLR format.
  • Sounds harder.

Rick: I think the other option is to load the current format into Python, which is the easier option of the two, but also less future proof as it would require your project and PLR to maintain separate JSON formats. I’m quite confident the PLR serialization format is nearing its final form.

PLR → Robot

Backend: a small Python object that translates actions.

PLR does not require any feedback from the backend.

Have a look at the chatterbox for a list of methods.

Rick: backend.py (the actual definition) also has a little information, so definitely use both!

Note: this means that the PLR scripts are meant to be synchronous (the robot must finish each operation before returning control).


  • Keeping execution state in the GUI would ideally require “session” management, so the GUI does not have to stay open. Actually, the GUI can be closed, and perhaps re-opening it would still display messages. Probably something good for the future :slight_smile:
  • A possible good contribution: giving the LH a “persistent” counterpart would be a nice feature; it currently lives only in memory. Doing this would enable saving states that could be restored, e.g. for sharing protocols.

Rick: Indeed. Specifically look at (de)serializing the tracker objects, VolumeTracker and TipTracker . It should be fairly easy to add something neat there.


I finished a first version of the current landscape and roadmap, the shaded stuff is not implemented.