- Parallel connections
- Indexing horizon
- Index resolution
- Index granularity
- Simultaneous index operations
- Data sources
TrendMiner uses certain connection settings to optimize the performance of the historian connection. By default, 2 parallel connections are used. In most cases, the value for the “Historian Parallelism” setting should equal the number of cores on your Historian server.
This is a global setting, but parallel connections can also be set per data source (in the Data > Data sources menu) as an override. Changing the global default will not update existing overrides; they will keep the value set as an override.
Note: Consult TrendMiner support at firstname.lastname@example.org if you are experiencing performance issues or believe it would be appropriate to change this setting.
The indexing horizon is the earliest date from which tags are indexed. Tag data before this date will not be available for TrendMiner analysis. The default indexing horizon is January 1st, 2015 (reference 1). When extending the indexing horizon (e.g. from 2015 to 2010), already indexed tags will automatically resume indexing back to the new horizon whenever these tags are used in charting or by monitors.
The index resolution defines the level of detail of the index. For a resolution of 1 minute (the default), the index contains up to 4 points per minute.
Valid index resolutions are:
- Minimum: 1 (one second)
- Maximum: 86400 (one day)
- 86400 has to be divisible by the index resolution
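The validity rules above can be expressed as a small check (a hypothetical helper for illustration, not part of the product):

```python
def is_valid_index_resolution(seconds: int) -> bool:
    """A resolution is valid when it is between 1 second and 1 day,
    and 86400 (the number of seconds in a day) is divisible by it."""
    return 1 <= seconds <= 86400 and 86400 % seconds == 0

# 60 s (1 minute, the default) is valid; 7 s is not, since 86400 % 7 != 0.
```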
Note: Changing the index resolution will delete all existing tag indexes.
The index granularity defines the size of the time periods which are fetched from the data source to build the index during the backward indexing process. The smaller the time periods, the more calls have to be made to the data source, causing greater overhead and slower indexing. The larger the granularity, the greater the risk of connection time-outs and the higher the memory consumption.
The default granularity is 1 month ("1M").
Valid index granularities include, for example, "1D" (1 day), "5D" (5 days) or "2M" (2 months). Changing the granularity will not impact tags which are already indexed.
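As a rough illustration of the granularity trade-off (a sketch that treats one month as 30 days, which is a simplification), the number of backfill calls per tag can be estimated as follows:

```python
from datetime import date

def estimated_calls_per_tag(horizon: date, today: date, granularity_days: int) -> int:
    """Rough number of data source calls needed to backfill one tag's index
    when the backward indexer fetches granularity_days per call."""
    total_days = (today - horizon).days
    # Ceiling division: a partial final period still costs one call.
    return -(-total_days // granularity_days)

# Indexing back to 2015-01-01 from 2023-01-01 takes roughly 98 calls per tag
# with ~1-month (30-day) granularity, versus 2922 calls with "1D".
```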
Simultaneous index operations
This value sets the number of simultaneous index operations. The default value is 2, which means only 2 indexing tasks can run at any given time. Periods of different tags are interleaved, giving higher priority to the most recent periods, so that all tags index at a similar pace.
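The interleaving behaviour can be sketched as follows; this is an illustrative model of the scheduling idea, not TrendMiner's actual scheduler:

```python
from itertools import zip_longest

def interleave_periods(tag_periods: dict) -> list:
    """Order pending index periods so that tags progress at a similar pace,
    with the most recent periods first (illustrative sketch only)."""
    # Sort each tag's periods newest-first, then take one period per tag in turn.
    columns = [sorted(periods, reverse=True) for periods in tag_periods.values()]
    order = []
    for round_ in zip_longest(*columns):
        order.extend(period for period in round_ if period is not None)
    return order

# interleave_periods({"tagA": [2021, 2022, 2023], "tagB": [2022, 2023]})
# yields [2023, 2023, 2022, 2022, 2021]
```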
Please contact TrendMiner support before changing this setting - email@example.com
Add a new connector by clicking the (+ Add connector) label next to the title and fill in the fields for connector details:
- Name: a name of your choice. The only restriction is that each connector name must be unique.
- Host: hostname of the connector.
- Username and Password: should only be filled in when they are configured on the connector installation.
As soon as a new connector is successfully added it will start syncing all the data sources which are configured to the connector.
If a connector cannot be synced it will show a red exclamation mark in the connector list. To learn more about the cause of failure, open the connector details by clicking on the connector name. The error feedback will be shown under 'Last sync'.
For connectors which are successfully connected the 'Last sync' field in the connector details will show the last sync date and time. The last sync date and time will be updated when a manual sync is triggered or when TrendMiner synchronises the tag cache for that connector, typically during a nightly tag cache refresh.
Important note: A green checkmark in the connector list indicates that TrendMiner is connected successfully to the connector, but it does not indicate if there are syncing issues between the connector and the data source. The health status for connected data sources should be consulted in the 'Data sources' menu. Also note that the status will not automatically update when the page is loaded. Refresh the page or choose the 'Test connection' option to update the connection status of a connector.
To edit the details of a connector, click on its name to open the details and then choose 'Options' -> 'Edit'.
In the connector overview, the version of each connector is listed. This information is important in case a newer connector version supports new features or improvements for a specific data source.
Changing the name of the connector will not affect the tags from the connected data sources, but changing the host, username or password to an incorrect value will render tags from the connected data sources inaccessible.
Other options available are:
- Sync connector: this will trigger a manual (re-)sync of all data sources which are configured on this connector. Choose this option to update all tags from all connected data sources at once.
- Test connection: this option will test the connection to the connector without triggering a sync and update the health status of the connector.
- Delete: this option will remove the connector from the configuration and remove all data sources which are using this connector, until the data sources are reconnected via a correctly configured connector.
When clicking on the Data sources option, the data source menu appears:
Add a new data source by clicking the (+ Add data source) label next to the title. An "Add data source" side panel will appear from the right of the screen:
Data source details
Populate the fields in the "Add data source" side panel:
- Name: a name of your choice that identifies the data source. Names are mandatory, case insensitive and unique.
- Provider: TrendMiner provides out-of-the-box connectivity to data sources via specific vendor implementations (e.g. OSIsoft PI, Honeywell PHD, ...) and via more generic alternatives (e.g. ODBC, OleDB, ...). The provider 'TM connector' enables the connection of data sources via a connector-to-connector setup. To connect a data source via multiple connectors, extra configuration is needed via the TrendMiner Connector API.
- Connect via: this mandatory field is used to select the connector which is used to connect to the data source. All data sources need to be connected via a connector. To add a data source at least 1 connector needs to be added first.
Important note: Duplicate tag names are not supported. If 2 tags with exactly the same name are synced to TrendMiner, analytics, calculations and indexing on/for these tags might fail. Use data source prefixes to avoid possible duplicate tag name issues.
Note: Depending on the provider you select, the required connection details may differ. The data managed can be one of two types (or both):
- Time series
- Asset data
Time series data
When you click on the Time series check box, further fields display for completion.
- Host: the hostname of the data source, e.g. myhistorian.mycompany.com
- Username and password: username and password of the account configured in the data source.
- Prefix: prefixes are free to choose. They are case-insensitive, unique strings with a maximum length of 5 characters. When synchronising a data source, all tag names of that data source will be prepended with the prefix to ensure tag name uniqueness in TrendMiner. Prefixes are optional, but we highly recommend providing one when connecting a data source to avoid duplicate tag names.
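The effect of a prefix on tag names can be illustrated with a small helper (hypothetical; the bracketed "[prefix]" form is inferred from the tag filter examples in this section):

```python
def apply_prefix(prefix: str, tag_name: str) -> str:
    """Prepend a data source prefix to a tag name to keep it unique
    across data sources (illustrative helper, not TrendMiner code)."""
    if prefix and len(prefix) > 5:
        raise ValueError("Prefixes have a maximum length of 5 characters")
    return f"[{prefix}]{tag_name}" if prefix else tag_name

# apply_prefix("pref", "PI100") returns "[pref]PI100"
```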
Time series configuration
- Tag filter: this optional field allows the addition of a regular expression. Only tags matching this regular expression will be synced to TrendMiner.
|Tag filter (regular expression)|Effect|
|---|---|
|LINE.+|Makes tags with 'LINE.1' in the name available but excludes tags with 'LINE.3' in the tag name.|
|^(?:(?!BA:TEMP).).*$|Excludes only tags starting with 'BA:TEMP' (a tag such as 'test_BA:TEMP.1' is still synced).|
|^\[pref\]PI.*$|Syncs only tags from a data source with prefix 'pref' whose names start with 'PI'.|
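As an illustration, the 'BA:TEMP' filter above can be verified with a few lines of Python (the tag names are made up for the example):

```python
import re

# Only tags whose names start with 'BA:TEMP' are excluded from syncing.
tag_filter = re.compile(r"^(?:(?!BA:TEMP).).*$")

tags = ["BA:TEMP.1", "test_BA:TEMP.1", "PI100"]
synced = [tag for tag in tags if tag_filter.match(tag)]
# synced == ["test_BA:TEMP.1", "PI100"]
```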
Click "Save data source".
Important note: This configuration depends on the selected provider.
The process for asset data sources is similar to that for time series, but far fewer fields are required for completion.
Important Note: It is not permitted to add the same connection multiple times with asset capabilities enabled on both instances! Asset tree permissions need to be managed in the asset permission section (ContextHub).
Context data sources are managed the same way as asset data sources. When a data source is context capable, the context capability checkbox can be checked, after which the correct database for context data needs to be specified.
Note: Context data synchronized from a data source in OSIsoft PI will be related to asset data in TrendMiner based on the "referenced elements" on the PI event frames. The system will always attempt to relate the context item to the asset corresponding to the primary referenced element in PI (if it exists). Otherwise it will default to the first referenced element for which a corresponding asset is known in TrendMiner.
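The matching rule can be sketched as follows (a hypothetical helper; the names and structures are illustrative, not TrendMiner's internal model):

```python
def resolve_asset(primary_ref, referenced_elements, known_assets):
    """Prefer the asset for the primary referenced element; otherwise fall
    back to the first referenced element with a known TrendMiner asset."""
    if primary_ref in known_assets:
        return primary_ref
    return next((ref for ref in referenced_elements if ref in known_assets), None)

# With known assets {"P-103"}, a frame with primary reference "P-101" and
# referenced elements ["P-102", "P-103"] resolves to "P-103".
```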
Important Note: It is not permitted to add the same connection multiple times with context capabilities enabled on both instances! This will result in the creation of duplicate context items.
Data source menu
As soon as a new data source is successfully added it will start syncing all the tags from the data source, and can be found in the Data source menu.
To manually synchronise the data source, simply:
- Click on the data source of choice within the data source menu. A side panel will appear from the right.
- Click on the sync button.
If a data source cannot be synced, this will be shown in the data source list. To learn more about the cause of the failure, open the data source details by clicking on the data source name. The error feedback will be shown under 'Last synced'.
For data sources which are successfully connected the 'Last synced' field in the data source details will show the last synced date and time. The last sync date and time will be updated when a manual sync is triggered or when TrendMiner synchronises the tag cache for that data source, typically during a nightly tag cache refresh.
Note: The status will not automatically update when the page is loaded. Refresh the page or choose the 'Test connection' option (for time series) to update the connection status of a data source.
To edit the details of a data source, click on its name to open the details and then choose 'Options' -> 'Edit'.
The prefix of an existing data source cannot be edited because this would break existing views, formulas, etc. The 'Connect via' field cannot be edited either. All other fields can be updated, after which the data source is synced again.
Other options available are:
- Test connection (only for time series data sources): this option will test the connection to the data source without triggering a sync and will update the health status of the data source.
- Delete: this option will remove the data source and all tags from this data source until it is connected again via a correctly configured connector.
When a data source is deleted, all tags from that data source become unavailable immediately, and views and calculations which depend on these tags will break. These tags and dependent views and formulas can be restored by adding the data source again, using the exact same name and prefix, via the same or another connector.
Event Frame Sync
The “Event frame sync” section of the diagnostics page in the data section of ConfigHub allows administrators to effectively monitor the synchronization status of context capable data sources. Synchronizations are split into multiple types, each with a separate section on the diagnostics page:
- The live synchronization is focused on keeping the context items in TrendMiner as close as possible to the state of the event frames in the data source. It processes incoming event frames sequentially, following a “first in, first out” principle.
- The excessive interval sync is automatically triggered from the live synchronization when a large number of event frames is received in a short time. The bulk update is then isolated from the live sync queue and processed in parallel.
- The historical synchronization can be triggered on-demand for context capable data sources, and will synchronize all event frames from that data source for a specific time interval in the past. It is processed in parallel to the live sync and excessive interval sync.
Event frame synchronization is driven by the event frames' last modified date. This ensures that changes will always be picked up, even if the event itself occurred outside of the synchronization interval.
For every context capable data source with live synchronization enabled, TrendMiner will periodically check the data source for new/updated event frames. All event frames created or updated between the last check and the present moment will be picked up for synchronization. The application keeps track of synchronization progress, so even in case the synchronization is interrupted (e.g. due to connection issues), it will continue where it left off after the issue has been resolved.
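The checkpointing behaviour can be sketched as follows (an illustrative model under assumed names; the real implementation is internal to TrendMiner):

```python
from datetime import datetime

def live_sync_step(fetch_modified_since, process_frame, checkpoint):
    """Fetch event frames created or updated since the last checkpoint.
    The checkpoint only advances when the fetch succeeds, so an
    interrupted sync resumes where it left off (illustrative sketch)."""
    now = datetime.utcnow()
    try:
        for frame in fetch_modified_since(checkpoint, now):
            process_frame(frame)
    except ConnectionError:
        # Keep the old checkpoint so the next run retries the same window.
        return checkpoint
    return now
```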
In case the live sync runs into a hard exception (e.g. the data source cannot be reached, and no event frames can be retrieved), it will receive the “Failed” status and an exception message can be made visible by clicking the arrow in the left-most column.
Errors when processing single event frames will not result in a failed status; these event frames will simply be skipped and tracked as part of the failed context items section.
Excessive Interval Sync
The excessive interval sync is triggered automatically when the live sync detects a large number of event frames in the same synchronization interval. The threshold for the excessive interval sync is 800 event frames per interval; once this is reached, the entire interval is isolated from the live sync and processed in parallel. This avoids delaying the next interval of the live sync due to the processing time required for the large number of event frames in the current interval.
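The routing decision can be summarised in a few lines (illustrative only; the threshold of 800 comes from the text above):

```python
EXCESSIVE_THRESHOLD = 800  # event frames per synchronization interval

def route_interval(frame_count: int) -> str:
    """Intervals that reach the threshold are isolated from the live sync
    queue and processed in parallel as an excessive interval sync."""
    return "excessive" if frame_count >= EXCESSIVE_THRESHOLD else "live"

# route_interval(800) returns "excessive"; route_interval(799) returns "live"
```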
The table keeps track of a full historical overview of all excessive interval synchronizations that have occurred. For each one, the following information is made available:
- Data source: The data source for which the need for an excessive interval sync was detected.
- Interval: The time interval during which the need for an excessive interval sync arose.
- Progress: The progress column combines the status of the synchronization with a progress bar. The following statuses are possible:
- Queued: If too many excessive interval synchronizations are already running, additional ones will be queued until resources are available to pick them up.
- Failed: The excessive interval sync encountered a hard exception, resulting in some or all event frames not being processed. The interval synchronization can be retried by clicking the retry icon in the right-most column of the table (only visible for failed rows). The exception encountered can be shown by clicking the arrow in the left-most column.
- Done: The excessive interval sync has completed successfully. All event frames were either processed successfully or have been added to the list of failed context items.
A historical sync can be requested on-demand by administrators, for context capable data sources, by selecting the desired data source in the table and starting the historical sync for a specified interval in the past. The result is a full resynchronisation of all event frames modified during this period.
The table keeps track of a full historical overview of all historical synchronizations that have occurred. The information available in the table is the same as for the excessive interval sync, including the possibility to retry failed synchronizations.
There are two triggers for cleanup processes: the nightly cleanup process, triggered each night at a pre-set time, and an automatic resync of items without a component.
Note: The automatic resyncs are monitored and managed exclusively by ConfigHub administrators, via the data source settings under the "Settings" option in ConfigHub. See the "Data source" section above.
The automatic nightly resync of open items will appear in the diagnostics screen as an active job, once it starts. This sync is scheduled in Settings under the Data Section of ConfigHub.
Every night the system will check if there are context items open for more than 24 hours. This means that the context items have a start date present but no end time/date indicated. This implies the context items are still open.
Items open longer than 24 hours may indicate that the start event was good but something went wrong during the closing event, preventing the closure of the context item.
Context items due to be closed are properly updated as part of the nightly cleanup; otherwise, the context items remain open until an end event appears from the source.
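The nightly check can be sketched as follows (the item structure is an assumption for illustration, not TrendMiner's actual data model):

```python
from datetime import datetime, timedelta

def items_open_too_long(items, now, limit=timedelta(hours=24)):
    """Select context items with a start date but no end date that have
    been open longer than the limit (illustrative sketch)."""
    return [item for item in items
            if item["end"] is None and now - item["start"] > limit]
```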
Note: A manual cleanup can be triggered from the data source, under settings in ConfigHub. See the "Data source" section above.
AF sync (Asset Framework)
Another cleanup process on context items is triggered by the successful completion of either an AF sync (which can be started from asset capable data sources in ConfigHub) or an AF import (which can be initiated by TrendMiner admins in ContextHub).
If assets have been added to or removed from the source AF, an AF sync will identify the differences and update the TrendMiner AF accordingly. This will then trigger an automatic resync of context items found to have no associated component, which will appear in the diagnostics screen as a scheduled job.
Note: To add an AF sync, see the "Asset data" section above.
Failed Context Items
In case an event frame fails to process correctly, and the corresponding context item cannot be created or updated, it will be added to the table of failed context items.
The table keeps track of a full historical overview of all such failures, and for each one the following information is made available:
- External ID: The ID of the corresponding event frame, in the source system.
- Data source: The data source in which the event frame resides.
- Sync date: The time at which the synchronization last occurred (and failed).
- Error message: The error message encountered at the time of failure.
Additionally, the table offers two features:
- View data source response: By clicking on the arrow in the left-most column, the administrator can view the payload that was received from the data source.
- Re-process event frame: By clicking the retry icon in the right-most column, the application will try to reprocess the event frame, potentially resolving the failure.
Asset Framework Sync
The “Asset framework sync” section of the diagnostics page in the data section of ConfigHub allows administrators to effectively monitor the synchronization status of asset capable data sources.
The history table keeps track of the full history of asset framework synchronizations.
The following information is available to the administrator:
- Data source: The data source from which the asset structure was synchronized.
- Start date: The date and time on which the synchronization was started.
- End date: The date and time on which the synchronization concluded.
- Status: The final status of the synchronization.
For failed synchronizations, an error message can be viewed by clicking on the arrow in the left-most column of the table.
- (reference 1) For clean installs of 2019.R2 or later. Upgrades from a previous TrendMiner installation will retain the previous default indexing horizon (January 1st, 2010) or the indexing horizon that was set manually.