Pipelines
pipelines:
-
when:
from:
do:
to:
then:
error:
finally:
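Filled in with the hypothetical service, module, and topic names used in the examples below, a minimal pipeline could look like this sketch:

```yaml
pipelines:
  -
    when:
      - { service: queue, topic: something }                  # trigger: a message on this topic
    from:
      - { task: enumKeys, service: myfiles }                  # fetch the input data
    do:
      - { task: node, module: mymodule, function: somefunc }  # process the data
    to:
      - { task: store, service: myfiles }                     # store the result
```

All section names here are real; the service, module, and function names are placeholders.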
when
Defines what triggers this pipeline, i.e. which services the pipeline subscribes to.
when:
- { service: queue, topic: something }
from
This is typically where the input data is fetched for processing in the do section. If the from section is missing or empty, the input data is whatever triggered the pipeline in the when section.
from:
- { task: enumKeys, service: myfiles }
Note: Although possible, there is currently no point in specifying more than one from task. Future versions of the product may support joins from different sources.
do
Here you process the data. All output from each do task is accumulated and passed on to the next do task.
do:
- { task: node, module: mymodule, function: somefunc }
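Because output accumulates, each do task in a chain receives everything produced so far. A sketch with hypothetical function names:

```yaml
do:
  - { task: node, module: mymodule, function: parse }     # output of parse...
  - { task: node, module: mymodule, function: enrich }    # ...is passed to enrich, and both outputs...
  - { task: node, module: mymodule, function: validate }  # ...accumulate and reach validate
```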
to
This is where you typically store the outcome. All to tasks are invoked in parallel with the same data: the accumulated output from the last do task.
to:
- { task: store, service: myfiles }
- { task: publish, service: queue, topic: processthis }
then
Then tasks are executed once all tasks have completed successfully. Aborted tasks are still considered successful.
You can constrain the tasks in this section with empty: true/false or aborted: true/false on the task.
then:
- { task: publish, service: queue, topic: gotempty, empty: true, aborted: false }
- { task: publish, service: queue, topic: something, empty: false, aborted: false }
- { task: publish, service: queue, topic: regardless }
error
These tasks are invoked if any task threw an error. Aborted tasks are not considered errors.
error:
- { task: node, module: mymodule, function: reportError }
finally
Finally tasks are always executed, regardless of errors, aborts, or success.
finally:
- { task: publish, service: queue, topic: proceed_to_next_step }
id
All pipelines get a generated id unless one is explicitly specified.
pipelines:
-
id: mypipeline
when:
...
semaphore
The number of simultaneous executions of a pipeline can be limited with the semaphore option.
pipelines:
-
id: mypipeline
semaphore: 1
when:
...