DevOps at scale requires predictability and consistency. Your application code is versioned. Infrastructure-as-code defines your environments. But what about the process?
On Tuesday I participated in an online panel on the subject of Process-as-Code, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps.
Watch a recording of the panel:
Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.
Below are a few insights from my contribution to the panel:
Let me use Jenkins and a software delivery pipeline as an example:
- At the beginning, only the source code was version controlled. Jenkins contained all the information that was needed to build, test and deploy the application.
- Then we moved the build, test, and deploy scripts into version control, and Jenkins contained only the information about which scripts to execute in which order.
- Now we put the whole pipeline under version control, and Jenkins only knows where to get the process configuration from to execute it.
So process as code is two things: a description of the process and a runtime environment that can execute that description.
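Those two things can be sketched in a few lines. This is a minimal, hypothetical illustration, not the configuration format of Jenkins or any real tool: the process lives in a plain declarative description (here a Python dict, though in practice it would be a file such as a Jenkinsfile in version control), and a small engine executes it.

```python
# The description: ordered stages, each with named steps. In a real
# setup this would live in version control next to the source code.
PIPELINE = {
    "stages": [
        {"name": "build",  "steps": ["compile"]},
        {"name": "test",   "steps": ["unit-tests", "integration-tests"]},
        {"name": "deploy", "steps": ["deploy-to-staging"]},
    ]
}

# The runtime environment: walks the description and executes each step.
# Here "executing" just records the step name; a real engine would run
# the scripts the description points to.
def run_pipeline(pipeline):
    executed = []
    for stage in pipeline["stages"]:
        for step in stage["steps"]:
            executed.append(f'{stage["name"]}:{step}')
    return executed

print(run_pipeline(PIPELINE))
```

The point of the split is that the description stays declarative and diffable, while all execution logic is concentrated in the engine.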
- Idempotent process steps that can run asynchronously and can be retried.
- Have a descriptive language for your process, and let an execution engine figure out how to execute the process description.
- We use a lot of AWS tooling: CloudFormation, CodePipeline, and Lambda.
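The idempotency point deserves a sketch. The pattern, under hypothetical names (`ensure_bucket` and the `state` set stand in for real infrastructure): a step first checks whether its desired outcome already exists and does nothing if so, which makes re-running it after a timeout or crash always safe.

```python
import time

def ensure_bucket(state, name):
    """Idempotent step: creating a bucket that already exists is a no-op."""
    if name in state:   # desired outcome already reached -> safe to skip
        return False    # nothing changed
    state.add(name)
    return True         # resource was created

def retry(step, attempts=3, delay=0.0):
    """Retry a step; because steps are idempotent, re-running is safe."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

state = set()
changed_first = retry(lambda: ensure_bucket(state, "artifacts"))
changed_again = retry(lambda: ensure_bucket(state, "artifacts"))
print(changed_first, changed_again)  # prints: True False
```

Because each step converges on a desired state instead of performing a one-shot action, the engine can retry freely and even run independent steps asynchronously without corrupting anything.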