
Interana 2.25 Release Notes

This document introduces the new features included in Interana release 2.25 and lists newly resolved and known issues.

New features

This section provides a high-level description of the new features included in release 2.25:

Selective data deletion

This feature allows you to selectively delete data based on specified filter criteria, such as time range, actor, or event type. There are a number of use cases for selectively deleting events:

  • When there are garbage records that can be identified with a Boolean expression.
  • When you need to delete records for a particular set of actors.
  • When you need to comply with legal requirements, such as deleting all activity logs for users who request a privacy purge.
  • When there is a long retention period for specific high-value events and a short retention period for all other events. You can use selective delete to periodically delete events of a particular type that are older than a specified date.

You can preview the results of a selective delete job without removing any events and review a list of deleted events after the job is run. Logs for selective delete jobs are available for audit and troubleshooting purposes as necessary. For more information, see Selective data deletion.
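
For illustration only, the following is a minimal Python sketch of the preview-then-delete pattern described above. It is not the Interana interface; it assumes events are plain dictionaries and uses placeholder field names. A Boolean filter identifies matching events, the preview lists them without removing anything, and the delete step keeps only the events that do not match.

```python
from datetime import datetime, timezone

# Illustrative in-memory event records; the real feature operates on data in
# the Interana cluster, not on Python lists.
events = [
    {"actor": "user_1", "type": "debug_ping", "ts": datetime(2016, 1, 3, tzinfo=timezone.utc)},
    {"actor": "user_2", "type": "purchase", "ts": datetime(2018, 5, 9, tzinfo=timezone.utc)},
]

def matches(event, event_type, older_than):
    """Boolean filter: True for events of the given type older than the cutoff."""
    return event["type"] == event_type and event["ts"] < older_than

cutoff = datetime(2017, 1, 1, tzinfo=timezone.utc)

# Preview: list what would be deleted without removing anything.
would_delete = [e for e in events if matches(e, "debug_ping", cutoff)]
print("would delete:", would_delete)

# Delete: keep only the events that do not match the filter.
events = [e for e in events if not matches(e, "debug_ping", cutoff)]
```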

Privacy purge

Interana Privacy Purge helps you comply with GDPR and other privacy regulations and policies to which your company adheres. It protects the privacy of Interana users, as well as the users of services whose data resides in Interana.

A privacy purge enables you to delete all the events for a specified actor across the entire cluster, as well as the strings associated with that actor. In addition, references to the specified actor are deleted from filters, named expressions, dashboards, and any other metadata in the ConfigDB. For more information, see the related privacy purge articles.

Parquet ingest support

Users can now configure Interana to consume data from an external Parquet topic. For more information, see parquet_load in the Transformer library reference.
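
The parquet_load function itself is documented in the Transformer library reference. As a general, Interana-agnostic illustration of what reading Parquet data looks like, the following sketch uses the open source pyarrow library; the file path is a placeholder.

```python
import pyarrow.parquet as pq

# "events.parquet" is a placeholder path; inspect the schema and row count
# before setting up ingest.
table = pq.read_table("events.parquet")
print(table.schema)
print(table.num_rows)
```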

Automated whale handling

Data whales (actors with a disproportionately large number of events) create unbalanced data that can cause problems in a data tier. The automated whale handling feature provides a solution for detecting and removing data whales.

The script is included with an Interana installation. Executing the script runs an unsampled event count that is grouped by shard. You can specify the time period on which to run the script, how far back in time the periods should go, and an initial delay to allow an ingest to complete. If a time period does not have at least 500 actors or events per actor, it is skipped (with a notification). Events belonging to a null actor are ignored.

Actors with an event count that is higher than the specified outlier threshold are flagged as whale candidates. You can update your whale analysis with different thresholds without having to rerun the unsampled query. You can also run the script with tuned parameters in a cron job at regular intervals to prevent future whales.
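
The shipped script is described in Balancing data for efficient sampling. As a minimal Python sketch of the flagging logic outlined above (with illustrative function names and sample data, not the actual script), the expensive per-actor count is computed once and can then be re-flagged against different thresholds:

```python
from collections import Counter

# Illustrative event stream; in practice the counts come from an unsampled
# Interana query grouped by actor.
events = [
    {"actor": "user_1"}, {"actor": "user_1"}, {"actor": "user_1"},
    {"actor": "user_2"}, {"actor": None},
]

def count_events_per_actor(events):
    """The expensive pass: count events per actor, ignoring null actors."""
    return Counter(e["actor"] for e in events if e["actor"] is not None)

def flag_whales(counts, outlier_threshold):
    """The cheap pass: flag actors whose event count exceeds the threshold."""
    return {actor: n for actor, n in counts.items() if n > outlier_threshold}

counts = count_events_per_actor(events)

# Different thresholds reuse the same counts, mirroring how the whale analysis
# can be updated without rerunning the unsampled query.
print(flag_whales(counts, outlier_threshold=2))  # {'user_1': 3}
print(flag_whales(counts, outlier_threshold=0))  # {'user_1': 3, 'user_2': 1}
```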

For more information on automated whale handling, see Balancing data for efficient sampling.

Time handling enhancements

This release introduces ways to use the Interana CLI to change the query time zone and dashboard time zone offsets to account for daylight saving time. For more information, see the related time handling articles.
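
The CLI settings themselves are covered in those articles. As a general illustration (not the Interana CLI) of why a single fixed offset is not enough across daylight saving time, the following Python snippet shows the UTC offset of one time zone changing between winter and summer:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/Los_Angeles")

# The same zone has different UTC offsets in winter (standard time) and
# summer (daylight saving time), so one fixed offset cannot be correct
# for the whole year.
winter = datetime(2019, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2019, 7, 15, 12, 0, tzinfo=tz)
print(winter.utcoffset())  # -1 day, 16:00:00 (UTC-08:00)
print(summer.utcoffset())  # -1 day, 17:00:00 (UTC-07:00)
```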

String tier optimization

Enhancements were made to significantly improve string tier performance and backup recovery time.

Specify API order

You can now specify the order in which results are returned when querying via the API.

Resolved issues

The following issues were resolved in Release 2.25.

HIG-7728 Error messages for string server synchronization did not clearly state what had failed or the origin of the failure.
HIG-11801 Deleting a global filter broke pinned queries that used that filter.
HIG-12112 Support for Google authentication was not available.
HIG-12285 The CLI ia settings update command didn't include double quotes with the resultTimezone.offset parameter, causing an error.
HIG-12446 Query API structured logs rapidly consumed too much space, which resulted in decreased performance.
HIG-12463 The "months" text label was used for a 30-day month value, which caused confusion. This label has been changed to "30-day months".
HIG-12465 Using a funnel metric in query builder resulted in an error.
HIG-12488 String servers took too long to recover after an upgrade.
HIG-12739 Sampled event counts sometimes returned a decimal value. This issue has been resolved so that values are rounded to the nearest whole number.
HIG-13070 Large identifier values stored in integer columns were rounded by filters before being displayed in the UI.

Known issues

The following are known issues in Release 2.25.

HIG-8483 After authentication, a user has to click a link twice to view the desired page.
HIG-10876 The ia table delete command successfully deletes the specified table, but then returns an error that no such table exists.

During a privacy purge, if two or more tables have columns with the same name that are different column types, an error results showing each table, column name, and type.

Workaround: Change the friendly_name of one or more of the columns shown in the error so that the names are unique.

HIG-12905 Values that have spaces in the name are not supported by Interana privacy purge.
HIG-12917 When a named expression, such as a cohort or session, has the same name as a value, filter results show only the cohort or session, not the value.
HIG-13086 Selective data deletion and privacy purge do not currently support decimal values.
HIG-13090 User IDs represented as integers that exceed 53 bits are stored correctly upon ingest, but when displayed in the UI they are rounded with the last few digits shown as zeros.
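
This is the precision limit of double-precision floating point, which carries a 53-bit significand (browser JavaScript, for example, represents numbers this way). The following Python snippet, included only as an illustration, shows an identifier just above 2^53 losing precision when treated as a float:

```python
big_id = 2**53 + 1               # 9007199254740993, stored exactly as an int

as_float = float(big_id)         # doubles carry only a 53-bit significand
print(big_id)                    # 9007199254740993
print(int(as_float))             # 9007199254740992 -- precision is lost
print(as_float == float(2**53))  # True
```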

Deleted ratio metrics reappear in the UI.

Workaround: Refresh the page and the deleted metrics disappear.


The dashboard cache is not included in a privacy purge. However, the cache refreshes every week, clearing out old data. For this reason, there may be a short time when dashboards still display information that has been purged for privacy.

Workaround: Wait for the dashboard cache to refresh.


Columns that do not have a specified type significantly degrade performance during import.

Workaround: Stop the import, specify a type for any columns that do not have a type specified, then restart the import.
