PutElasticsearchRecord
Description
A record-aware Elasticsearch put processor that uses the official Elastic REST client libraries. Each Record within the FlowFile is converted into a document to be sent to the Elasticsearch _bulk API. Multiple documents can be batched into each request sent to Elasticsearch. Each document's Bulk operation can be configured using Record Path expressions.
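To illustrate the Record-to-document conversion, the following is a minimal sketch (not the processor's actual implementation) of how a batch of records could be serialized into an Elasticsearch _bulk request body; the `build_bulk_body` helper and its parameters are hypothetical:

```python
import json

def build_bulk_body(records, index, operation="index", id_field=None):
    """Build an Elasticsearch _bulk NDJSON body: one action line per record,
    followed by the document source (except for delete operations)."""
    lines = []
    for record in records:
        action = {operation: {"_index": index}}
        if id_field and id_field in record:
            action[operation]["_id"] = record[id_field]
        lines.append(json.dumps(action))
        if operation != "delete":  # delete actions carry no document source
            lines.append(json.dumps(record))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

body = build_bulk_body(
    [{"id": "1", "msg": "hello"}, {"id": "2", "msg": "world"}],
    index="logs", operation="index", id_field="id",
)
```

Each pair of lines in the resulting body corresponds to one Record: an action metadata line, then the document itself.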
Tags
elasticsearch, elasticsearch5, elasticsearch6, elasticsearch7, elasticsearch8, index, json, put, record
Properties
In the list below, required properties are shown with an asterisk (*); other properties are considered optional. The table also indicates any default values and whether a property supports the NiFi Expression Language.
Display Name | API Name | Default Value | Allowable Values | Description |
---|---|---|---|---|
Index Operation * | put-es-record-index-op | index | create, delete, index, update, upsert | The type of the operation used to index. Supports Expression Language, using FlowFile attributes and Environment variables. |
Index * | el-rest-fetch-index | | | The name of the index to use. Supports Expression Language, using FlowFile attributes and Environment variables. |
Type | el-rest-type | | | The type of this document (used by Elasticsearch for indexing and searching). Supports Expression Language, using FlowFile attributes and Environment variables. |
@timestamp Value | put-es-record-at-timestamp | | | The value to use as the @timestamp field (required for Elasticsearch Data Streams). Supports Expression Language, using FlowFile attributes and Environment variables. |
Max JSON Field String Length * | Max JSON Field String Length | 20 MB | | The maximum allowed length of a string value when parsing a JSON document or attribute. |
Client Service * | el-rest-client-service | | Controller Service: ElasticSearchClientService. Implementations: ElasticSearchClientServiceImpl | An Elasticsearch client service to use for running queries. |
Record Reader * | put-es-record-reader | | Controller Service: RecordReaderFactory. Implementations: AvroReader, CEFReader, CSVReader, ExcelReader, GrokReader, JsonPathReader, JsonTreeReader, ReaderLookup, ScriptedReader, Syslog5424Reader, SyslogReader, WindowsEventLogReader, XMLReader, YamlTreeReader | The record reader to use for reading incoming records from FlowFiles. |
Batch Size * | put-es-record-batch-size | 100 | | The number of records to send over in a single batch. Supports Expression Language, using FlowFile attributes and Environment variables. |
ID Record Path | put-es-record-id-path | | | A record path expression to retrieve the ID field for use with Elasticsearch. If left blank, the ID will be automatically generated by Elasticsearch. Supports Expression Language, using FlowFile attributes and Environment variables. |
Retain ID (Record Path) | put-es-record-retain-id-field | false | | Whether to retain the existing field used as the ID Record Path. Supports Expression Language, using FlowFile attributes and Environment variables. This property is only considered if the "ID Record Path" property has a value specified. |
Index Operation Record Path | put-es-record-index-op-path | | | A record path expression to retrieve the Index Operation field for use with Elasticsearch. If left blank, the Index Operation will be determined using the main Index Operation property. Supports Expression Language, using FlowFile attributes and Environment variables. |
Index Record Path | put-es-record-index-record-path | | | A record path expression to retrieve the index field for use with Elasticsearch. If left blank, the index will be determined using the main Index property. Supports Expression Language, using FlowFile attributes and Environment variables. |
Type Record Path | put-es-record-type-record-path | | | A record path expression to retrieve the type field for use with Elasticsearch. If left blank, the type will be determined using the main Type property. Supports Expression Language, using FlowFile attributes and Environment variables. |
@timestamp Record Path | put-es-record-at-timestamp-path | | | A RecordPath pointing to a field in the record(s) that contains the @timestamp for the document. If left blank, the @timestamp will be determined using the main @timestamp Value property. Supports Expression Language, using FlowFile attributes and Environment variables. |
Retain @timestamp (Record Path) | put-es-record-retain-at-timestamp-field | false | | Whether to retain the existing field used as the @timestamp Record Path. Supports Expression Language, using FlowFile attributes and Environment variables. This property is only considered if the "@timestamp Record Path" property has a value specified. |
Script Record Path | put-es-record-script-path | | | A RecordPath pointing to a field in the record(s) that contains the script for the document update/upsert. Only applies to Update/Upsert operations. The field must be Map-type compatible (e.g. a Map or a Record) or a String parsable into a JSON Object. Supports Expression Language, using FlowFile attributes and Environment variables. |
Scripted Upsert Record Path | put-es-record-scripted-upsert-path | | | A RecordPath pointing to a field in the record(s) that contains the scripted_upsert boolean flag, which controls whether the scripted_upsert flag is added to the Upsert operation. It forces Elasticsearch to execute the script whether or not the document exists; defaults to false. If the Upsert document provided (from FlowFile content) will be empty, be sure to set the Client Service controller service's Suppress Null/Empty Values to Never Suppress; otherwise no "upsert" doc will be included in the request to Elasticsearch, the operation will not create a new document for the script to execute against, and the result will be a "not_found" error. Supports Expression Language, using FlowFile attributes and Environment variables. |
Dynamic Templates Record Path | put-es-record-dynamic-templates-path | | | A RecordPath pointing to a field in the record(s) that contains the dynamic_templates for the document. The field must be Map-type compatible (e.g. a Map or Record) or a String parsable into a JSON Object. Requires Elasticsearch 7+. Supports Expression Language, using FlowFile attributes and Environment variables. |
Date Format | put-es-record-at-timestamp-date-format | | | Specifies the format to use when writing Date fields. If not specified, the default format 'yyyy-MM-dd' is used. If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/25/2017). Supports Expression Language, using Environment variables. |
Time Format | put-es-record-at-timestamp-time-format | | | Specifies the format to use when writing Time fields. If not specified, the default format 'HH:mm:ss' is used. If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15). Supports Expression Language, using Environment variables. |
Timestamp Format | put-es-record-at-timestamp-timestamp-format | | | Specifies the format to use when writing Timestamp fields. If not specified, the default format 'yyyy-MM-dd HH:mm:ss' is used. If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, day, and four-digit year separated by '/' characters, followed by a two-digit hour in 24-hour format, minute, and second separated by ':' characters, as in 01/25/2017 18:04:15). Supports Expression Language, using Environment variables. |
Log Error Responses | put-es-record-log-error-responses | false | | If this is enabled, errors will be logged to the NiFi logs at the error log level. Otherwise, they will only be logged if debug logging is enabled on NiFi as a whole. The purpose of this option is to give the user the ability to debug failed operations without having to turn on debug logging. |
Output Error Responses | put-es-output-error-responses | false | | If this is enabled, response messages from Elasticsearch marked as "error" will be output to the "error_responses" relationship. This does not impact the output of FlowFiles to the "successful" or "errors" relationships. |
Result Record Writer * | put-es-record-error-writer | | Controller Service: RecordSetWriterFactory. Implementations: AvroRecordSetWriter, CSVRecordSetWriter, FreeFormTextRecordSetWriter, JsonRecordSetWriter, RecordSetWriterLookup, ScriptedRecordSetWriter, XMLRecordSetWriter | The response from Elasticsearch will be examined for failed records; failed records will be written to a record set with this record writer service and sent to the "errors" relationship. Successful records will be written to a record set with this record writer service and sent to the "successful" relationship. |
Treat "Not Found" as Success | put-es-not_found-is-error | true | | If true, Records associated with "not_found" Elasticsearch document responses will be routed to the "successful" relationship; otherwise they are routed to the "errors" relationship. If Output Error Responses is "true", "not_found" responses from Elasticsearch will also be sent to the "error_responses" relationship. |
Group Results by Bulk Error Type | put-es-record-bulk-error-groups | false | | The errored records written to the "errors" relationship will be grouped by error type, and the error related to the first record within the FlowFile will be added to the FlowFile as "elasticsearch.bulk.error". If "Treat "Not Found" as Success" is "false", records associated with "not_found" Elasticsearch document responses will also be sent to the "errors" relationship. This property is only considered if: |
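The interaction between the "ID Record Path" and "Retain ID (Record Path)" properties can be sketched as follows. This is a hypothetical helper, not the processor's code, and it resolves only simple top-level paths such as `/id` rather than full RecordPath syntax:

```python
def extract_id(record, record_path, retain=False):
    """Resolve a simple top-level RecordPath (e.g. "/id") against a record
    dict. When retain is False, the ID field is dropped from the document
    body, mirroring the Retain ID (Record Path) property's behavior."""
    field = record_path.lstrip("/")
    doc_id = record.get(field)
    if doc_id is not None and not retain:
        # Remove the ID field so it is not duplicated inside the document.
        record = {k: v for k, v in record.items() if k != field}
    return doc_id, record

# With the default retain=False, the document loses its "id" field.
doc_id, doc = extract_id({"id": "42", "msg": "hello"}, "/id")
```

The same extract-then-optionally-remove pattern applies to the @timestamp Record Path and Retain @timestamp (Record Path) pair.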
Dynamic Properties
Name | Value | Description |
---|---|---|
The name of the Bulk request header | A Record Path expression to retrieve the Bulk request header value | Prefix: BULK: - adds the specified property name/value as a Bulk request header in the Elasticsearch Bulk API body used for processing. If the Record Path expression results in a null or blank value, the Bulk header will be omitted for the document operation. These parameters will override any matching parameters in the _bulk request body. Supports Expression Language: Yes, evaluated using FlowFile Attributes and Environment variables. |
The name of a URL query parameter to add | The value of the URL query parameter | Adds the specified property name/value as a query parameter in the Elasticsearch URL used for processing. These parameters will override any matching parameters in the _bulk request body Supports Expression Language: Yes, evaluated using FlowFile Attributes and Environment variables. |
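The two kinds of dynamic properties affect the request differently, which can be sketched as follows (an assumed helper, not the processor's code): properties without the `BULK:` prefix become URL query parameters on the _bulk endpoint, while `BULK:`-prefixed properties are merged into each document's bulk action line:

```python
import json
from urllib.parse import urlencode

def build_request(base_url, index, dynamic_props):
    """Split dynamic properties into URL query parameters and per-document
    Bulk request headers, following the BULK: prefix convention."""
    query = {k: v for k, v in dynamic_props.items()
             if not k.startswith("BULK:")}
    headers = {k[len("BULK:"):]: v for k, v in dynamic_props.items()
               if k.startswith("BULK:")}
    url = f"{base_url}/_bulk"
    if query:
        url += "?" + urlencode(query)
    # Bulk headers are added to the action metadata line of each document.
    action = {"index": {"_index": index, **headers}}
    return url, json.dumps(action)

url, action_line = build_request(
    "http://localhost:9200", "logs",
    {"refresh": "true", "BULK:routing": "user-1"},
)
```

Here `refresh=true` ends up in the request URL, while `routing` is written into the action line for every document in the batch.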
Relationships
Name | Description |
---|---|
errors | Record(s)/FlowFile(s) corresponding to Elasticsearch document(s) that resulted in an "error" (within Elasticsearch) will be routed here. |
failure | All FlowFiles that fail for reasons unrelated to server availability go to this relationship. |
original | All FlowFiles that are sent to Elasticsearch without request failures go to this relationship. |
retry | All FlowFiles that fail due to server/cluster availability go to this relationship. |
successful | Record(s)/FlowFile(s) corresponding to Elasticsearch document(s) that did not result in an "error" (within Elasticsearch) will be routed here. |
Reads Attributes
This processor does not read attributes.
Writes Attributes
Name | Description |
---|---|
elasticsearch.bulk.error | The _bulk response if there was an error during processing the record within Elasticsearch. |
elasticsearch.put.error | The error message if there is an issue parsing the FlowFile records, sending the parsed documents to Elasticsearch or parsing the Elasticsearch response. |
elasticsearch.put.error.count | The number of records that generated errors in the Elasticsearch _bulk API. |
elasticsearch.put.success.count | The number of records that were successfully processed by the Elasticsearch _bulk API. |
State Management
This component does not store state.
Restricted
This component is not restricted.
Input Requirement
This component requires an incoming relationship.
System Resource Considerations
Scope | Description |
---|---|
MEMORY | The Batch of Records will be stored in memory until the bulk operation is performed. |