The PyLabRobot team has just started working on our OT-2 driver. We want to know if anyone has come up with ways to run an OT-2 script outside the app that opentrons provides. We want to run py protocol.py and have this execute directly on the robot, without an intermediary upload step.
I’m sure a lot of people out there have made this happen. We have our own ideas, but maybe there are easier or more effective approaches that have already been validated.
Generally speaking, there are two ways to run protocols outside of the Opentrons App: first, directly from a Jupyter notebook; second, via the command line, using SSH to access the OT-2’s terminal.
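For the SSH route, here is a minimal controller-side sketch. The robot IP, key path, and protocol location are placeholders for your own setup; `opentrons_execute` is the command-line runner shipped on the robot.

```python
# Sketch of the SSH route: run a protocol on the robot from your machine.
# ROBOT_IP and KEY_PATH are placeholders, not real values.
import subprocess

ROBOT_IP = "192.168.1.42"          # your OT-2's address (assumption)
KEY_PATH = "~/.ssh/ot2_ssh_key"    # SSH key registered with the robot

def ssh_argv(ip: str, key: str, remote_cmd: str) -> list[str]:
    """Build the argv for running a command on the robot as root."""
    return ["ssh", "-i", key, f"root@{ip}", remote_cmd]

cmd = ssh_argv(ROBOT_IP, KEY_PATH,
               "opentrons_execute /data/user_storage/protocol.py")
# subprocess.run(cmd, check=True)  # uncomment on a machine that can reach the robot
```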
This is admittedly a bit outside of my wheelhouse as I don’t get to spend much time engineering these days, but this article can at least give you a few ideas:
Reposting what I wrote on Twitter so it’s here as well:
I went into the weeds on this last spring and basically the read-only OS on the RasPi means you need to re-image the entire OS if you want complete unfettered control. Which is, like, a lot.
Alternatively, Keoni Gandall and I wrote an API service that does a version of this a year or two ago, but the latest OT2 update broke it, so it’s more of a starting point than a useful tool right now: https://github.com/Koeng101/opentronsfastapi…
OT also put together an HTTP API of their own that maybe meets your needs.
If I were starting this fresh, I would take a look at the repo Keoni and I wrote above to get a basic understanding of the protocol context object that runs OT-2 protocols, then begin the hard work of understanding how the recent changes to focus on async methods change things. Honestly, I personally just got lost in the sauce on the async part, but maybe you will fare better.
My team and I have started to use Jupyter notebooks instead of the app, as this gives us more control and enables us to give more visual aid to the operator. I am hoping that this will also make a future full automation implementation easier.
So glad there’s movement toward HTTP, because Python on the robot is very limited. I’ve been using Python’s Fabric library to first drop a protocol.py and JSON data onto the OT-2 with scp, then execute it over SSH. That way I can still have just one Python codebase to do all the integration, ML, whatever I want, and can manage dependencies and execution off the robot.
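To make that concrete, here is a stdlib-only sketch of the same flow (Fabric’s `put`/`run` map onto scp/ssh under the hood). The worklist schema, host, and paths are assumptions for illustration, not anyone’s actual format.

```python
# Sketch: keep the logic off-robot by compiling a worklist JSON, then
# shipping it (plus a fixed protocol.py) to the OT-2 and executing.
import json

def compile_worklist(transfers: list[dict]) -> str:
    """Turn high-level transfer specs into the JSON the protocol consumes.
    The schema here is an assumption for illustration."""
    return json.dumps({"version": 1, "transfers": transfers}, indent=2)

worklist = compile_worklist([
    {"source": "A1", "dest": "B1", "volume_ul": 50.0},
    {"source": "A2", "dest": "B2", "volume_ul": 75.0},
])

# Write worklist.json, then push and run (requires a reachable robot):
# subprocess.run(["scp", "worklist.json", "protocol.py",
#                 "root@ROBOT_IP:/data/user_storage/"], check=True)
# subprocess.run(["ssh", "root@ROBOT_IP",
#                 "opentrons_execute /data/user_storage/protocol.py"], check=True)
```

The point is that `compile_worklist` can be as smart as you like (ML, plate math, external data) while the on-robot protocol.py stays dumb and stable.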
It would be way less clunky to just talk to it via the requests package.
Interesting approach, so you are still executing a whole script on the OT-2 Pi? We are thinking of hosting an API on the OT-2 that we post individual commands to over HTTP. That way robot scripting + analytics + equipment integration can all be performed on the controller side. The OT-2 Pi becomes responsible for as little as possible (e.g. one command at a time) because it is relatively inaccessible.
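From the controller side, that would look roughly like this. The endpoint and payload schema are hypothetical, they assume a small command API hosted on the OT-2, not Opentrons’ own HTTP API.

```python
# Sketch of controller-side atomic commands. The /command endpoint and
# payload schema are hypothetical stand-ins for a robot-side API.
import json
import urllib.request

def build_command(name: str, params: dict) -> bytes:
    """Serialize one atomic command for the robot-side API."""
    return json.dumps({"command": name, "params": params}).encode()

def post_command(base_url: str, name: str, params: dict) -> None:
    req = urllib.request.Request(
        f"{base_url}/command",
        data=build_command(name, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # blocks until the robot acks
        resp.read()

# post_command("http://ROBOT_IP:8080", "aspirate",
#              {"well": "A1", "volume_ul": 50.0})  # needs the robot-side server
```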
I tried out the HTTP API beta last year. It was OK for uploading and running a protocol, but it lacked several endpoints that were fundamental for us, like calibration checks. We shelved it and may consider it again if it evolves.
The HTTP API was interesting because it would use the same session manager as the opentrons app does. (Plus our orchestrator would send REST API calls directly)
If all you need is to start full protocols and don’t care about restarting a session every time, then an ssh bridge is very straightforward.
But if you want to send atomic commands, it may be a bit trickier: you’d need a persistent Python program holding the session and listening for events from a second program, essentially replicating the server the robot already runs to serve the REST API.
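A rough shape for that persistent program, with a stub standing in for the live protocol context (on the robot, the real context object would take its place; the command schema is an assumption):

```python
# Sketch: a long-lived dispatcher holding one session and applying atomic
# commands as they arrive. StubContext stands in for the real Opentrons
# protocol context; the command dict format is an assumption.
import queue

class StubContext:
    """Records calls; on the robot this would be the live protocol context."""
    def __init__(self):
        self.log = []
    def aspirate(self, volume, well):
        self.log.append(("aspirate", volume, well))
    def dispense(self, volume, well):
        self.log.append(("dispense", volume, well))

def serve(ctx, commands: "queue.Queue") -> None:
    """Apply commands from the queue until a 'stop' sentinel arrives."""
    while True:
        cmd = commands.get()
        if cmd["name"] == "stop":
            break
        getattr(ctx, cmd["name"])(**cmd["params"])

ctx = StubContext()
q = queue.Queue()
q.put({"name": "aspirate", "params": {"volume": 50, "well": "A1"}})
q.put({"name": "dispense", "params": {"volume": 50, "well": "B1"}})
q.put({"name": "stop"})
serve(ctx, q)
# ctx.log now holds the two executed commands
```

In practice the queue would be fed by an HTTP handler rather than filled up front, but the session-holding loop is the core of it.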
I’d start by…
Beg the OT team to better document their REST API endpoints. No need for a specific HTTP API library.
If they don’t … I’d try to reverse engineer it (yes, my React level is so bad that at this point I’d say reading the code is reverse engineering it)
I’d avoid building a dedicated server at all costs, unless I want to repurpose the OT2 for something completely different and don’t mind losing the stock HTTP server.
Yup, I execute the whole script on the OT-2 but the protocol.py is stripped down as simple as possible. All high-level logic happens off the robot and compiles a rich worklist JSON so that the protocol script rarely changes. This is pretty similar to how @smohler and others have been approaching liquid handler integration.
As others in the thread mentioned, the current HTTP API seems only able to transfer and run full Python protocols, which doesn’t give much benefit over SSH. So I think I’ll stick with SSH for now.
Atomic commands by HTTP is the way to go, good luck!
I currently have a Flask server sitting on the Opentrons that takes in REST commands and runs a generic worklist. I also send a deck layout to the OT2 and the worklist will use tips based on volume and availability from the current deck layout. I also added Start/Stop/Pause commands so users can run everything right from a web page (We have a Django server with all our apps)
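For anyone curious what the skeleton of such a server looks like, here is a minimal stand-in using only the standard library (the setup described above uses Flask; the endpoints and payload shapes here are assumptions for illustration):

```python
# Minimal stand-in for a robot-side worklist server (stdlib only).
# Endpoint names and payload shapes are assumptions for illustration.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"status": "idle", "worklist": None}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        if self.path == "/worklist":
            STATE["worklist"] = json.loads(body)
            STATE["status"] = "running"  # a real server would start executing here
        elif self.path == "/stop":
            STATE["status"] = "idle"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(STATE["status"].encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Post a worklist to our own server, the way an off-robot orchestrator would.
req = urllib.request.Request(
    "http://127.0.0.1:%d/worklist" % server.server_address[1],
    data=json.dumps({"steps": [{"op": "transfer"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    ack = resp.read().decode()  # server replies with its current status
server.shutdown()
```

Start/Stop/Pause are just more branches on `self.path`, and the web-page side only needs plain HTTP calls.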
The SiLA driver provides the traditional implementation of uploading and running whole OT-2 scripts in one go. This classic approach hinders integration with data inputs and external software tools, because you are relying only on what is provided on the OT-2 Raspberry Pi.
Our approach uses the amazingly convenient but seemingly little used HTTP API for OT-2, which lets us post individual commands from a Python script running on a local computer. We provide the first set of Python wrappers over this API, letting you (let’s say) extract data from a local CSV in one line and run an OT-2 command referencing that data on the next line.
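To illustrate the shape of that flow (the client class and method names below are stand-ins, not the actual PyLabRobot API; see the library docs for the real interface):

```python
# Illustrative sketch of the "CSV in one line, command the next" flow.
# FakeOT2Client is a stand-in for an HTTP-backed client, not real API.
import csv
import io

def read_volumes(csv_text: str) -> dict[str, float]:
    """Map well -> volume from a two-column CSV (well, volume_ul)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["well"]: float(row["volume_ul"]) for row in reader}

class FakeOT2Client:
    """Records the commands it would post to the robot over HTTP."""
    def __init__(self):
        self.sent = []
    def aspirate(self, well: str, volume_ul: float) -> None:
        self.sent.append(("aspirate", well, volume_ul))

volumes = read_volumes("well,volume_ul\nA1,50\nA2,75\n")
client = FakeOT2Client()
for well, vol in volumes.items():
    client.aspirate(well, vol)  # one robot command per line of data
```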
This is all available in the PyLabRobot library, which you can install with pip install pylabrobot[opentrons]
Yes, I can see the advantage! pyLabRobot needs a more “atomic” API than the current SiLA2 driver provides. Although, I wonder if that HTTP API could be adapted to SiLA2 (which is basically just a wrapper around gRPC, HTTP/2, and Zero-conf), but that is a separate discussion.