
Pipeline Creation and Common Functions



User flow for creating a pipeline and common functions 


Log into your cluster

$ ssh ubuntu@10.1.xx.xx

Create a table for the pipeline to work on 

$ /opt/interana/backend/import_server/tools/ -t 'Table Name' /path/to/dataset/data_set_file.json
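The import tool above reads a JSON dataset file. A common layout for event data is one JSON object per line; the sample below is purely illustrative (the field names and file path are made up, not a required schema), with a quick per-line validity check:

```shell
# Hypothetical sample dataset: one JSON event per line. Field names here
# are illustrative only -- use whatever matches your own data.
cat > /tmp/data_set_file.json <<'EOF'
{"user_id": 1, "event": "login", "ts": "2016-09-13T10:00:00Z"}
{"user_id": 2, "event": "purchase", "ts": "2016-09-13T10:05:00Z"}
EOF

# Sanity check: flag any line that is not valid JSON before importing.
while IFS= read -r line; do
  printf '%s\n' "$line" | python3 -m json.tool > /dev/null || echo "Bad line: $line"
done < /tmp/data_set_file.json
```

If the check prints nothing, every line parsed cleanly.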

Create your Pipeline

Navigate to `/opt/interana/backend/import_server`. This is where the pipeline tool scripts live.

$ cd /opt/interana/backend/import_server

Use the 'Table Name' you created previously to create a new Pipeline:

$ ./ -t 'Table Name' -p 'Pipeline Name' -d file_system

- Set a pattern: ~/path/to/datasets/*.json

- Follow the prompts if you answer YES to advanced settings

- Note: if you later set up a Forever job, it will run your pipeline at the "Wait Seconds" interval you specify here
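The pattern uses ordinary shell-style globbing, so `*.json` matches every `.json` file in the directory. A quick way to preview what a pattern would pick up before committing it (the directory and filenames below are illustrative only):

```shell
# Build a throwaway dataset directory to demonstrate glob matching.
mkdir -p /tmp/datasets
touch /tmp/datasets/events_2016-09-13.json \
      /tmp/datasets/events_2016-09-14.json \
      /tmp/datasets/notes.txt

# Only the .json files match the pattern; notes.txt is skipped.
ls /tmp/datasets/*.json
```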

Start your Pipeline by creating a Job

Create Job (one-time job)

$ ./ -c <pipeline_id> 2016-09-13 2016-09-14

$ ./ -c <pipeline_id> yesterday tomorrow

Forever Job (runs indefinitely at the "Wait Seconds" interval set during pipeline creation)

$ ./ -f <pipeline_id> yesterday today
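Conceptually, a forever job keeps re-running the pipeline, pausing for the configured "Wait Seconds" between passes. The sketch below is a simplification to show the semantics, not the actual implementation; `run_import` and `WAIT_SECONDS` are placeholders:

```shell
# Conceptual sketch of a forever job: run, wait, repeat.
WAIT_SECONDS=1                               # stand-in for the configured value
run_import() { echo "importing new files..."; }  # stand-in for the real pipeline run

for pass in 1 2 3; do   # a real forever job loops indefinitely
  run_import
  sleep "$WAIT_SECONDS"
done
```

A shorter wait means new files are picked up sooner, at the cost of more frequent pipeline runs.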


Check that your import was successful by tailing the log:

$ sudo tail -f /var/log/interana/import-pipeline.log
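Beyond tailing, `grep` can surface failures in a long log. The log lines below are made-up placeholders (not the actual log format) so the example is runnable anywhere; on the cluster you would point `grep` at `/var/log/interana/import-pipeline.log` instead:

```shell
# Simulated log file with hypothetical, illustrative lines.
cat > /tmp/import-pipeline.log <<'EOF'
2016-09-13 10:00:01 INFO  starting import pass
2016-09-13 10:00:02 ERROR failed to parse /data/broken.json
2016-09-13 10:00:03 INFO  import pass complete
EOF

# Count lines that look like failures (case-insensitive, generic patterns).
grep -ciE 'error|fail|exception' /tmp/import-pipeline.log   # prints 1
```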

Refresh your server and check that the data appears

Edit your Pipeline or Job

Add / Edit / Remove Transformers 

$ ./ -t 'Table Name' -p 'Pipeline Name'

View all your Jobs and Pipelines

$ ./ -s all

Pause / Resume / Delete Jobs

$ ./ --pause job_id

$ ./ --resume job_id

$ ./ --delete job_id

$ ./ --help


What's Next

Transformer Library
