Configuration can be written in JSON or YAML format.

YAML basics

YAML is a human-readable data serialization language. It is commonly used for configuration files, but could be used in many applications where data is being stored [...] YAML 1.2 is a superset of JSON, another minimalist data serialization format where braces and brackets are used instead of indentation.

  • Whitespace indentation is used to denote structure; however tab characters are never allowed as indentation.
  • Comments begin with the number sign (#).
  • List members are denoted by a leading hyphen (-) with one member per line.
  • Associative arrays are represented using the colon space (: ) in the form key: value, either one per line or enclosed in curly braces ({ }) and separated by comma space (, ).
  • Strings (scalars) are ordinarily unquoted, but may be enclosed in double-quotes ("), or single-quotes (').

(Source: Wikipedia)

For example, the same list of invoice items can be written first in block style (one key per line) and then in flow style (enclosed in braces):


  - sku         : BL394D
    quantity    : 4
    description : Basketball
    price       : 450.00
  - sku         : BL4438H
    quantity    : 1
    description : Super Hoop
    price       : 2392.00     # Oh my!
  - { sku: BL394D, quantity: 4, description: Basketball, price: 450.00 }
  - { sku: BL4438H, quantity: 1, description: Super Hoop, price: 2392.00 }
mailbody: >
   Wrapped text
   will be folded
   into a single
   paragraph

   Blank lines denote
   paragraph breaks

A useful tool for verifying that your YAML is valid is YAML Lint.
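Since YAML 1.2 is a superset of JSON, a quick way to get a feel for the notation is to compare the same data written both ways (the data below is illustrative):

```yaml
# Block-style YAML:
server:
  host: localhost
  ports:
    - 8080
    - 8081

# The equivalent JSON, which is itself valid YAML 1.2:
# { "server": { "host": "localhost", "ports": [8080, 8081] } }
```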

Foopipes configuration file structure

version: 2
plugins: <list of plugin specifications>
services: <associative array of services>
pipelines: <list of pipelines>
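Putting the three sections together, a minimal configuration might look like the sketch below. The service and task names are borrowed from the examples later in this document; the exact nesting of the steps inside a pipeline is an assumption here:

```yaml
version: 2

plugins:
  - Elasticsearch        # convention-based loading, see below

services:
  myWebhook:
    type: httplistener
    path: myWebhook

pipelines:
  - - { service: myWebhook }                                   # when this event fires...
    - { task: file, filename: "post_#{id}.json", path: "." }   # ...store the payload
```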


Convention-based plugin loading. Foopipes will try its best to resolve and load the plugin:

  - Elasticsearch 

Explicit plugin loading:

  - { path: <filepath>, filename: <assembly filename>, assemblyName: <assembly name> }
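For example (all values here are purely illustrative):

```yaml
plugins:
  - { path: ./plugins, filename: MyCustomPlugin.dll, assemblyName: MyCustomPlugin }
```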

Writing custom .NET plugins is a way to extend the functionality of Foopipes. Often the functionality that is needed can be implemented with node.js modules, but plugins are also a way to package functionality for a specific use case. See Writing Plugins for more information.


The Services section contains the definition of all event sources and services. You can then reference a service later in the pipeline section. A service may also support variable data binding, see Variable Binding.

    services:
      scheduler:
        type: scheduler
      myWebhook:
        type: httplistener
        path: myWebhook
      elasticsearch:
        url: "http://${elasticsearch|localhost}:9200"
        index: entries
        typename: entry
        field: fields.url

In this example we have a scheduler event source, an HTTP listener on the path /myWebhook, and an Elasticsearch configuration.

We also define a not_analyzed term mapping for Elasticsearch, which is created on startup. Term mappings are not needed for the pipelines in this example, but they are useful when querying for data using exact values.

The URL to Elasticsearch is obtained from the environment variable elasticsearch, with a fallback to localhost.
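In the ${elasticsearch|localhost} syntax, the name before the pipe is the environment variable and the value after it is the fallback. As a sketch (the host name is illustrative):

```yaml
# With the environment variable set, e.g. elasticsearch=es01.internal,
# the url resolves to:
url: "http://es01.internal:9200"

# With the variable unset, the fallback after the | is used:
url: "http://localhost:9200"
```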

Automatically created services

There are a couple of services that are automatically created when the application starts unless overridden in this config section. Most important is the queue service as Foopipes is built around message passing.

They are:

  • queue - an in-memory message queue.
  • file - a default file storage.
  • http - for sending HTTP requests with the http task.
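To replace one of these defaults, define a service with the same name in the services section. A hypothetical sketch, assuming the file service accepts the same path argument as the file task shown later in this document:

```yaml
services:
  file:
    path: ./output    # assumption: replaces the default file storage location
```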


Pipelines

Pipelines define the processing that starts after a service fires an event.

      - { service: myWebhook }
      - { task: http, url: "http://myurl/entries" }
      - { task: publish, service: queue, topic: entry }

      - { service: queue, topic: entry }
      - { task: node, module: mymodule }
      - { task: store, service: elasticsearch, index: myindex, dataType: entry, key: "#{entryId}" }

What happens here is that when an event is fired from the service myWebhook, a message with the contents of the request body is:

  1. Matched to the first pipeline, as it subscribes to events from myWebhook.
JSON is loaded from a URL.
  3. All entries from the previous task are passed on to a message queue with the topic entry.
A cascade of new events is fired from the message queue, and as the topic entry matches the second pipeline's when criteria, the second pipeline starts for each entry.
  5. The node.js module mymodule's default export is invoked to process the entry.
  6. The outputs from the previous step are stored to the elasticsearch service with the key specified in the entryId field of each entry.

Config shorthand format

There's an option to write pipeline configuration in a shorthand format where the first key specifies the task's name: taskName: defaultarg, instead of having to specify task: taskName for each step in the pipeline. The defaultarg is specific to each task type. For instance, the http task's default argument is url.

See the Tasks reference for the built-in tasks and their default arguments.

  - { service: [defaultarg], [arg: value] }
  - service: [defaultarg]
  - <service>
  - { <task>: [<defaultarg>], [arg: value] }
  - <task>: [<defaultarg>]
  - <task>

For example, the following config is valid:

      - queue: started
      - scheduler
      - { http: "", method: get }
      - { file: "post_#{id}.json", path: "." }
      - exit

Which is the same as:

      - { service: queue, topic: started }
      - { service: scheduler }
      - { task: http, url: "", method: get }
      - { task: file, filename: "post_#{id}.json", path: "." }
      - { task: exit }

Here are four ways to write the same thing:

      - { task: file, filename: "post_#{id}.json", path: "." }
      - { file: "post_#{id}.json", path: "." }
      - file: "post_#{id}.json"
        path: "."
      - task: file
        filename: "post_#{id}.json"
        path: "." 


And three ways to write the exit task:

      - { task: exit }
      - { exit }
      - exit