Lyra (“lee-ruh”) is an open source workflow engine for provisioning and managing cloud native infrastructure. Using infrastructure as code, Lyra enables you to declaratively provision and manage public cloud, private cloud, and other API-backed resources as well as orchestrate imperative actions. For more information, see the README.md in the main project repository.
The fastest way to get started is with the official Lyra Docker container. You can browse all the available tags and builds under the lyraproj organization on Docker Hub, or just grab the latest image by following the steps below.
Create a lyra-local directory to save your work locally:
mkdir lyra-local
cd lyra-local
Pull the Lyra container:
docker pull lyraproj/lyra:latest
Run the container in interactive mode, mounting your lyra-local directory at /src/lyra/local inside the container:
docker run -it \
  --mount type=bind,src=$HOME/lyra-local,dst=/src/lyra/local \
  lyraproj/lyra:latest /bin/ash
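Once the container starts you get an ash shell inside it. As a quick sanity check, you can ask the CLI for its help output (this assumes the lyra binary is on the container's PATH; adjust the path if your image differs):
lyra --help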
Homebrew support is available with:
brew install lyraproj/lyra/lyra
On other platforms, or if you’re interested in hacking on the codebase directly, follow the build instructions in the README.
Check out the example workflows to get an idea of what Lyra can do. In particular, “foobernetes.yaml” is a heavily annotated workflow that describes deploying infrastructure to a simple Kubernetes-like service.
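If your build matches the README examples, something like the following runs that annotated workflow; the subcommand and the embedded workflow name are assumptions here, so check lyra --help for what your version actually supports:
lyra apply foobernetes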
There are a number of similar projects and products in this space. The following comparisons are not meant to judge the relative merits of Lyra against them, but rather to provide a frame of reference for users who might be familiar with one or more of them.
Language: Terraform is tied to HCL (the HashiCorp Configuration Language), whereas Lyra has a polyglot (multi-language) design. Currently supported language frontends for Lyra are YAML, TypeScript, and a variant of the Puppet Language.
Imperative actions: Lyra lets you mix imperative actions (like sending a Slack notification or triggering GitHub Actions) with declarative resource management. While it’s possible to perform such actions in Terraform, doing so works against the desired-state model that is Terraform’s core principle.
Providers: Terraform has a rich ecosystem of Providers that enable management of different cloud resources. Lyra has a bridge that allows it to make use of that ecosystem, but it can also use other content ecosystems, including native Kubernetes interfaces.
Kubernetes: Speaking of Kubernetes, one of Lyra’s primary operating modes is as a k8s Controller, allowing it to take part in cluster events and persist beyond point-in-time execution. There is a similar project in rancher/terraform-operator.
Application Programming: There are a number of similarities between Lyra and Pulumi: the bridge to Terraform providers, polyglot interfaces, and describing infrastructure that spans cloud services and providers. However, Pulumi’s primary users are application developers who want to define the infrastructure configuration their app requires inside the app itself. Lyra’s primary users are the people responsible for getting infrastructure working alongside the application; it lets those who lean more toward the “ops” side of the “devops” continuum blueprint application architectures that can then be instantiated for deployments.
State: Pulumi’s business relies on users starting off at the free tier of its web service, which stores application state for each “stack” you configure, and upgrading to paid tiers for team and enterprise features. You can opt out of the service, but it’s central to the way Pulumi operates. Lyra manages an identity service that maps the resources described in the workflow to the instantiation (the “identity”) of those resources in the real world. From Lyra’s perspective, the source of truth for state is the remote services themselves. This identity data is currently stored locally, but we plan to move it to a service.
Non-Kubernetes deployments: While Helm is a flexible deployment tool for Kubernetes applications, its ability to deploy to non-Kubernetes APIs is limited. It’s great if you’re all-in on k8s, but many people have one foot in the Kubernetes world and the other in traditional apps. If you’re looking for one tool that can be used across deployment scenarios, Helm on its own won’t be sufficient.
Partial-kube deployments: Relatedly, if you have an application that is composed of some k8s services but depends on, say, an RDS database in AWS for its backing store, you’d need to deploy and use the AWS Service Broker, with its attendant complexity, or use your own scripting inside a Job or init container to create the instance. Lyra spans both Kubernetes and other cloud providers natively, allowing a single workflow to describe all components of the application, no matter what underlying service provisions them.