
Interana 2.24 Release Notes

This document introduces new features included in Interana release 2.24 and lists newly resolved and known issues.

See Release 2.24.1 and 2.24.2 for information about the latest maintenance releases.

New features

Interana release 2.24 includes the following new features:

New installation and configuration tools

You can now install and configure a single- or multi-node Interana cluster on a cloud provider (AWS, Azure, GCP) or on-premises hardware (with VMware) to run and manage yourself. See the Admin Guide for more information, or see the Sandbox Deployment Guide if you want to quickly set up a single-node cluster on AWS.

There's also an improved command-line interface (CLI) that you can use to install, configure, and maintain your Interana cluster. See the Interana CLI reference for more information.

Streaming ingest API

To support the new ingest methods, we've built a streaming ingest API. This allows Interana to accept events via an add_events HTTP API (rather than only accepting events in the form of files). See Configuring the Interana SDK for ingest for more information. 
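As a rough illustration, sending an event batch to an HTTP ingest endpoint might look like the sketch below. The URL path, authentication header, and payload field names here are assumptions for illustration only; consult Configuring the Interana SDK for ingest for the actual endpoint, auth scheme, and event schema.

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the add_events URL for your cluster.
INGEST_URL = "https://your-cluster.example.com/api/add_events"

def build_add_events_request(events, api_token):
    """Package a batch of event dicts as an HTTP POST to the ingest API.

    The {"events": [...]} wrapper and Bearer auth header are assumptions,
    not the documented Interana payload format.
    """
    body = json.dumps({"events": events}).encode("utf-8")
    return urllib.request.Request(
        INGEST_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_token,
        },
        method="POST",
    )

req = build_add_events_request(
    [{"__time__": 1498867200000, "username": "alice", "action": "login"}],
    api_token="YOUR_TOKEN",
)
# urllib.request.urlopen(req)  # uncomment to actually send the batch
```

Batching events into a single POST, as sketched here, generally reduces per-request overhead compared with sending one event at a time.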

JavaScript SDK

Interana's JavaScript SDK can be used for tracking events and sending them to Interana via the streaming ingest API. You can use the CLI to create a pipeline and job to ingest these events into an Interana table. See the SDK guide for more information.

New integrations with Segment, mParticle, and Kafka

As of version 2.24, you can use webhooks to connect your Segment data source to Interana for data ingestion. See the Segment integration cookbook for more information.

You can also use the Interana Transformer Library to transform your mParticle data logs and import data into Interana. See the mParticle integration cookbook for more information. 

We also support ingesting data directly from a Kafka system. An Interana admin can set up a pipeline to consume directly from an external Kafka topic. See Create a pipeline for an existing Kafka cluster for more information. 

Easier exploration from dashboards

When exploring from our living dashboards, you can now open a query in a new tab by right-clicking the Explore link. Alternatively, Ctrl-click (on Windows) or Command-click (on Mac) the link to open it in a new tab.

Support diagnostics file

Users of our Growth Edition can export a diagnostics file to share with the Interana support team. The diagnostics file includes information about the OS, memory and disk usage, log files, and general system stats. Create the file by running:

sudo /opt/interana/backend/deploy/

See Create a support diagnostic file for more information about the contents of this file. 

Cluster usage loopback

Interana Growth Edition customers may choose to analyze the query usage of their own cluster. You create tables to "loop back" query logs and ingest logs into Interana itself. To enable the feature in 2.24, first set up the query log table, then restart the Interana service on the API node:

ia table create QUERY_TABLE "__time__" milliseconds username
ia settings update usage_loopback query_table QUERY_TABLE --force
sudo service interana restart

Then set up the ingest log table, and restart the Interana service on every import node:

ia table create INGEST_TABLE "__time__" milliseconds pipeline_id
ia settings update usage_loopback ingest_table INGEST_TABLE --force
sudo service interana restart

See Analyzing Interana logs for more information. 

Log-based monitoring

We currently provide monitors and alerts for our Managed Edition customers using Datadog. For Growth Edition customers, we have now made a collection of Interana log messages available via syslog, so you can monitor with your own system. See How to analyze Interana logs with Datadog to learn how to use this feature.

Improved session metrics

Based on feedback from our users, we have changed the way we calculate session metrics. As of 2.24, only events that satisfy the session filter are included in session metric calculations.
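The behavior change can be illustrated with a small sketch (this is an illustration of the rule, not Interana's implementation): events that fail the session filter no longer contribute to per-session metric values.

```python
# Illustrative sketch: as of 2.24, only events that satisfy the session
# filter are included in session metric calculations.

def session_event_count(events, session_filter):
    """Count events per session, including only events that pass the filter."""
    counts = {}
    for event in events:
        # Pre-2.24 behavior would have counted every event in the session.
        if session_filter(event):
            counts[event["session_id"]] = counts.get(event["session_id"], 0) + 1
    return counts

events = [
    {"session_id": "s1", "action": "purchase"},
    {"session_id": "s1", "action": "heartbeat"},
    {"session_id": "s2", "action": "purchase"},
]
# With a session filter matching only "purchase" events, the heartbeat
# event in s1 no longer inflates that session's count.
print(session_event_count(events, lambda e: e["action"] == "purchase"))
```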

Kafka Ingest

Users can now configure Interana to consume from an external Kafka topic.

Resolved issues

HIG-9113 Headers in export files from Bar and Pie views include JavaScript code
HIG-9375 Clone job and Edit job operations do not work for datasets with continuous import
HIG-9531 First/last aggregators with a string derived column return the string ID instead of the string value
HIG-10161 During import, auto-detect misinterprets time columns as hex columns
HIG-10187 Per-session metric values change when user adjusts the time frame
HIG-10625 String queries are blocked during rebalance operations
HIG-10820 Monthly resolution snaps end time to 18 days after written end time
HIG-10862 Cohort end time "today" is interpreted as "now." With this fix, "today" is interpreted properly (12:00 AM of the current day). This applies only to newly created or edited cohorts. For existing cohorts, you must change the start/end time to "today" to apply the change.
HIG-11013 Session Restart Event disappears after saving a new session
HIG-11018 External API rate limit settings are not working
HIG-11024 Sending precacher missing cache key events to syslog makes syslog very large
HIG-11049 Samples export includes usage log data in the bottom row
HIG-11060 Query returns "write() send(): Broken pipe" error
HIG-11141 Application settings added with the command line interface are not prepended with dashes

Known issues

HIG-9505 Cannot share a query for Distribution view that uses unpublished metrics.
HIG-9809 Must refresh to apply a column name change made from data tooltips in the Explorer.
HIG-10159 In the Explorer, scrolling doesn't work when the mouse pointer is on a table view.
HIG-10168 A dashboard that points to a non-existent table results in a confusing error message.
HIG-10170 Copying an invalid cohort is allowed, which then causes the UI to hang.
HIG-10180 Filter this out selects the contents of the entire table row instead of the column that was selected.
HIG-10194 Queries run while rebalancing return inaccurate results.
HIG-10268 If you re-use the name of a deleted table, the old derived columns appear in the new table as deleted.
HIG-10285 Admins are unable to edit the last column in datasets.
HIG-10289 Unable to make decimal columns aggregable.
HIG-10383 An edited metric does not update with new information until you perform a hard refresh.
HIG-10462 Newlines in advanced filter text break permalinks.
HIG-10485 Text no longer wraps for small dashboard charts, and cuts off the very bottom of the text for the first line.
HIG-10716 Emailed stacked bar dashboard charts lose sort order.
HIG-10735 No visual indicator when pasting an API token in the CLI.
HIG-10891 The enabled parameter of the update auth password_auth command does not correctly enable password authorization requirements.
HIG-11026 In Explorer filters, if you exact match the name of a named expression, the named expression is added to the filter.
HIG-11028 In some cases, creating a Time view query using monthly resolution shows an extra month of data before the specified start time.
HIG-11034 Recreating a table with the same name as a deleted table inherits schema from the deleted table.
HIG-11936 The unsampled checkbox remains checked even when aggregation sampling is in process.

Release 2.24.1

This maintenance release fixes the following issues:

HIG-11295 Interana failed to start after an upgrade, if the system had more than one network interface.
HIG-11339 Long chart names did not display completely in Number View.
HIG-11343 In some cases, some types of ingest pipelines exhibited inconsistent performance.
HIG-11487 You were unable to create new email reports, but could still use existing reports.
HIG-11517 A loopback import job stopped when a process failed, instead of continuing in spite of the failure.
HIG-11518 Charts on the left side of the dashboard did not align correctly when a medium size chart was placed in the row above.

Release 2.24.2

This maintenance release fixes the following issues:

HIG-11554 When Global filters were updated, the dashboard charts that referenced the filters were not updated.
HIG-11604 Unable to do a daily count of users who were active over a time window (such as 7 or 28 days). A user who was active on a particular day was included in the metric; however, a user who was not active in the current time slice was excluded, even if they met the definition of being active over the time window.