
mParticle integration cookbook

This applies to v2.24

mParticle allows you to manage integrations between your data sources and your analytics platform (Interana). You can use mParticle's API and libraries to track events and actors, then send that data to Interana.

mParticle uses a data format that aggregates events into sessions: each line is a session containing many events, rather than a single event per line.
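
For illustration, a raw mParticle session record looks roughly like the sketch below. The field names and values are simplified assumptions; see mParticle's JSON reference for the authoritative schema. Note that data.event_id, which is used later for the unique token, lives inside each nested event:

# Simplified sketch of one mParticle session record (one line of the raw log).
# Field names and values are illustrative assumptions, not the exact schema.
session_record = {
    "session_id": 123456789,
    "device_info": {"platform": "iOS"},
    "events": [
        {"event_type": "screen_view", "data": {"event_id": 111, "screen_name": "Home"}},
        {"event_type": "custom_event", "data": {"event_id": 112, "event_name": "add_to_cart"}},
    ],
}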

In order to import this data structure into Interana, you need to "flatten" the mParticle data logs using the Interana Transformer Library during ingest. To accomplish this, use a code_snippet step to flatten the data and a unique_token step to generate a unique token for each line from a value or group of values:

["code_snippet", {"code": '''
try:
    events = line.pop('events')
except:
    line = None
else:
    top_level = line
    line = (dict(e, **top_level) for e in events)
'''}],


["unique_token", {"unique_columns": ["data.event_id"]}],

After setting up your transformer configuration, associate it with an import pipeline (which contains details such as the location of your mParticle data and the table you want to import it into), then start an import job to import your data. See mParticle's JSON reference and Amazon S3 configuration documentation for more information.

Backing up your data

Because this is not a high-availability data integration, use another method as your main event data storage solution. We recommend backing up your data to an Amazon S3 or Microsoft Azure archive.
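
As a minimal sketch of the S3 option, raw mParticle log files can be copied to an archive bucket with boto3 before (or alongside) ingest; the file, bucket, and key names below are hypothetical placeholders:

# Minimal sketch: archive a raw mParticle log file to Amazon S3 with boto3.
# The file name, bucket, and key prefix are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="mparticle-2016-01-01.json",          # local raw log file
    Bucket="my-mparticle-archive",                 # hypothetical archive bucket
    Key="mparticle/raw/mparticle-2016-01-01.json",
)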
