My name’s Ryan Lewis, and I’m a software developer at Argonne National Laboratory’s Rapid Prototyping Lab. We’ve been developing an open source laboratory automation solution called the Workflow Execution Interface (WEI, for short). You can find all of our code at https://github.com/AD-SDL.
PyLabRobot has been on our radar for a bit, so I wanted to reach out to start making some connections/introductions, and see what interest there might be in a collaboration between the RPL and the PLR team/community. We seem to have shared goals around making automated laboratories more open, accessible, and interoperable, so I’m optimistic that there’s a strong foundation for a mutually beneficial collaboration.
(My thanks to @Alex for reaching out to us via this github issue, and my apologies for not following up sooner)
I’ve been lurking in your WEI repo ever since I saw @Alex’s issue - we are building a scheduler and experiment state management layer on top of PyLabRobot, and I noticed interesting parallels between our designs.
It would be great to explore whether we can work together on something open source that generalizes across many labs and applications.
Seems like an awesome project - I’ll check out the code and get back to you on how we can work together.
Awesome project! There’s definitely a great opportunity to collaborate here. Should I organize a quick call to get started?
(didn’t get any notification of that GH issue…)
Thank you all for the warm welcome!
I think that’s a great idea! My email is firstname.lastname@example.org if you’d like to reach out directly, and I know a few of my coworkers would be interested to join as well.
@Stefan @rickwierenga @ryandlewis
You may be interested in checking out this project from the University of Greifswald as well.
According to the diagram they have a bidirectional SiLA interface to humans, that sounds very useful.
Jokes aside, I can’t judge how relevant it is, but they focus on SiLA interfaces and use the pythonlab orchestrator.
Interesting! Seems like there’s a lot of convergent evolution across all these projects - I’m seeing a lot of familiar design patterns and approaches. And I’d be lying if I said we hadn’t thought about adding a “human” module to some of our setups.
This is the first I’m seeing of pythonlab, though - will have to look deeper into that.
We’ve been cognizant of SiLA for a bit, but have mostly been put off by the closed-source integrations for the devices we work with, and the lack of adoption in our sphere of influence. Would be curious if anyone here has a different perspective though.
I work at a company whose flagship product is a custom platform for true end-to-end manufacturing. The controls engineers decided to use OPC-UA to standardize their machine-to-machine communication protocols. Since it’s custom, we can create endpoints for every action, or expose only the subset of functions we want: the necessities. Those controls engineers then hand that public interface over to the software engineers, who work their magic to integrate it into the larger ecosystem. We can make it as open or as closed as we want.
What makes it magical is that the same subset of code can be used to “discover” OPC-UA servers and open up execution paths for all of the various pieces of integrated hardware - effectively a common front end. The implication is that you can treat your hardware like “microservices,” which famously improve security and bring cross-platform support. There are additional benefits, especially for those managing multiple sites with multiple configurations: integration gets easier not just across hardware but across software too. And you can absolutely use the same communication protocols from a Jupyter Notebook, if you really hate yourself, or from a .NET Web API.
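The “discover, then call” pattern described above can be sketched library-agnostically. This is a toy model, not real OPC-UA calls (a real client such as one built on the asyncua package would browse a server’s address space instead); the server names and actions below are invented for illustration:

```python
# Toy model of the pattern: each "server" publishes only a curated subset
# of actions, a common front end discovers them, and calls route by name.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class InstrumentServer:
    """Stands in for one OPC-UA server: a name plus its exposed methods."""
    name: str
    methods: Dict[str, Callable[..., object]] = field(default_factory=dict)

    def expose(self, action: str, fn: Callable[..., object]) -> None:
        # Publish only the necessities, as described above.
        self.methods[action] = fn


def discover(servers: List[InstrumentServer]) -> Dict[str, List[str]]:
    """Common front end: enumerate every server and the actions it exposes."""
    return {s.name: sorted(s.methods) for s in servers}


def call(servers: List[InstrumentServer], server_name: str, action: str, *args):
    """Route a call to a named action on a named server, microservice-style."""
    server = next(s for s in servers if s.name == server_name)
    return server.methods[action](*args)


# Two invented "instruments", each exposing a single action.
pump = InstrumentServer("pump-1")
pump.expose("set_flow_rate", lambda ml_per_min: f"flow set to {ml_per_min}")
arm = InstrumentServer("arm-1")
arm.expose("move_to", lambda deck_pos: f"moved to {deck_pos}")

fleet = [pump, arm]
print(discover(fleet))  # {'pump-1': ['set_flow_rate'], 'arm-1': ['move_to']}
print(call(fleet, "pump-1", "set_flow_rate", 2.5))  # flow set to 2.5
```

The point of the sketch is that the client code never hard-codes the hardware; it only ever talks to whatever the discovery step reports, which is what makes the same front end work across sites and configurations.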
It’s interesting that OPC Foundation recently introduced LADS and that some instrument vendors have started to adopt it.
SiLA 2 is similar to OPC-UA in that it also attempts to standardize communication protocols, but with some key differences: SiLA 2 has broader support from instrument vendors, it’s easier to retrofit onto existing hardware, and the learning curve is gentler.
Adoption takes time, but it seems inevitable in the long run. It’s also, IMO, the correct move for vendors who want to maximize usage while minimizing risk.