Elasticsearch data migration and version upgrades: how can you speed up the migration?

Below is the relevant portion of the `elasticdump --help` output:

--concurrencyInterval
    The length of time in milliseconds before the interval count resets. Must be finite.
    (default: 5000)

--intervalCap
    The max number of transport requests in the given interval of time.
    (default: 5)

--carryoverConcurrencyCount
    Whether the task must finish in the given concurrencyInterval
    (intervalCap will reset to the default whether the request is completed or not)
    or will be carried over into the next interval count,
    which will effectively reduce the number of new requests created in the next interval,
    i.e. intervalCap -= <number of carried-over requests>
    (default: true)

--throttleInterval
    The length of time in milliseconds to delay between getting data from an inputTransport and sending it to an outputTransport
    (default: 1)
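For instance, a minimal sketch combining these rate-limiting flags (the hosts and values are illustrative, not tuning recommendations):

# Cap at 10 requests per 5-second interval, with a 100 ms pause between
# reading from the input and writing to the output:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data \
  --concurrencyInterval=5000 \
  --intervalCap=10 \
  --throttleInterval=100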

--debug
    Display the elasticsearch commands being used
    (default: false)

--quiet
    Suppress all messages except for errors
    (default: false)

--type
    What are we exporting?
    (default: data, options: [settings, analyzer, data, mapping, policy, alias, template, component_template, index_template])

--filterSystemTemplates
    Whether to remove metrics-*-* and logs-*-* system templates
    (default: true)

--templateRegex
    Regex used to filter templates before passing to the output transport
    (default: (metrics|logs|\\…+)(-.+)?)

--delete
    Delete documents one-by-one from the input as they are
    moved. Will not delete the source index
    (default: false)

--headers
    Add custom headers to Elasticsearch requests (helpful when
    your Elasticsearch instance sits behind a proxy)
    (default: '{"User-Agent": "elasticdump"}')

--params
    Add custom parameters to the Elasticsearch request URI. Helpful when,
    for example, you want to use the elasticsearch preference
    (default: null)

--searchBody
    Perform a partial extract based on search results.
    When ES is the input, the default value is:
    if ES > 5:
      `'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`
    else:
      `'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`
    [As of 6.68.0] If the searchBody is preceded by a @ symbol, elasticdump will perform a file lookup
    in the location specified. NB: the file must contain valid JSON
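A sketch of the @ file lookup described above (the query, field names, and paths are illustrative):

# /data/query.json must contain valid JSON,
# e.g. {"query": {"term": {"username": "admin"}}}
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/filtered.json \
  --searchBody=@/data/query.json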

--searchWithTemplate
    Enable the use of a Search Template when using --searchBody
    If using a Search Template, then searchBody has to consist of an "id" field and "params" objects
    If a "size" field is defined within the Search Template, it will be overridden by the --size parameter
    See https://www.elastic.co/guide/en/elasticsearch/reference/current/search-template.html for
    further information
    (default: false)

--sourceOnly
    Output only the json contained within the document _source
    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
    sourceOnly: {SOURCE}
    (default: false)
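For example, a sketch that dumps bare _source documents to a file (the output path is illustrative, and the explicit =true form is used for clarity):

elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_source.json \
  --type=data \
  --sourceOnly=true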

--ignore-errors
    Will continue the read/write loop on write error
    (default: false)

--scrollId
    The last scroll Id returned from elasticsearch.
    This allows a dump to be resumed using the last scroll Id, provided
    `scrollTime` has not expired.

--scrollTime
    How long the nodes will keep the requested search context alive between scroll requests.
    (default: 10m)

--scroll-with-post
    Use an HTTP POST method to perform scrolling instead of the default GET
    (default: false)

--maxSockets
    How many simultaneous HTTP requests can we make?
    (default:
      5 [node <= v0.10.x] /
      Infinity [node >= v0.11.x])

--timeout
    Integer containing the number of milliseconds to wait for
    a request to respond before aborting the request. Passed
    directly to the request library. Mostly used when you don't
    care too much about losing some data when importing
    but would rather have speed.

--offset
    Integer containing the number of rows you wish to skip
    ahead from the input transport. When importing a large
    index, things can go wrong, be it connectivity, crashes,
    someone forgetting to `screen`, etc. This allows you
    to start the dump again from the last known line written
    (as logged by the `offset` in the output). Please be
    advised that since no sorting is specified when the
    dump is initially created, there's no real way to
    guarantee that the skipped rows have already been
    written/parsed. This is more of an option for when
    you want to get as much data as possible into the index
    without concern for losing some rows in the process,
    similar to the `timeout` option.
    (default: 0)
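A sketch of resuming an interrupted import (the offset value is illustrative; use the last offset logged by the failed run):

# Skip the 150000 rows that the earlier, interrupted run already wrote:
elasticdump \
  --input=/data/my_index.json \
  --output=http://staging.es.com:9200/my_index \
  --type=data \
  --offset=150000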

--noRefresh
    Disable input index refresh.
    Positive:
      1. Greatly increased indexing speed
      2. Much lower hardware requirements
    Negative:
      1. Recently added data may not be indexed
    Recommended for big-data indexing, where speed and system health
    are a higher priority than immediate visibility of recently added data.

--inputTransport
    Provide a custom js file to use as the input transport

--outputTransport
    Provide a custom js file to use as the output transport

--toLog
    When using a custom outputTransport, should log lines
    be appended to the output stream?
    (default: true, except for `$`)

--awsChain
    Use the standard location and ordering for resolving credentials,
    including environment variables, config files, and EC2 and ECS metadata locations.
    _Recommended option for use with AWS_

--awsAccessKeyId
--awsSecretAccessKey
    When using Amazon Elasticsearch Service protected by
    AWS Identity and Access Management (IAM), provide
    your Access Key ID and Secret Access Key

--awsIniFileProfile
    Alternative to --awsAccessKeyId and --awsSecretAccessKey;
    loads credentials from a specified profile in the aws ini file.
    For greater flexibility, consider using --awsChain
    and setting the AWS_PROFILE and AWS_CONFIG_FILE
    environment variables to override defaults if needed

--awsService
    Sets the AWS service that the signature will be generated for
    (default: calculated from hostname or host)

--awsRegion
    Sets the AWS region that the signature will be generated for
    (default: calculated from hostname or host)

--awsUrlRegex
    Regular expression that defines valid AWS URLs that should be signed
    (default: ^https?:\\.*.amazonaws.com.*$)

--transform
    A javascript that will be called to modify documents
    before writing them to the destination. The global variable 'doc'
    is available.
    Example script for computing a new field 'f2' as the doubled
    value of field 'f1':
      doc._source["f2"] = doc._source.f1 * 2;
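A sketch applying that script while copying an index, assuming the script can be passed inline as the flag's value (otherwise place it in a file):

elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data \
  --transform="doc._source['f2'] = doc._source.f1 * 2"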

--httpAuthFile
    When using http auth, provide credentials in an ini file of the form:
      `user=<username>
      password=<password>`
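A sketch of using such a file (the path and credentials are placeholders):

# /data/auth.ini contains two lines:
#   user=myuser
#   password=mypassword
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --httpAuthFile=/data/auth.ini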

--support-big-int
    Support big integer numbers

--retryAttempts
    Integer indicating the number of times a request should be automatically re-attempted before failing
    when a connection fails with one of the following errors: `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,
    `ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`
    (default: 0)

--retryDelay
    Integer indicating the back-off/break period between retry attempts (milliseconds)
    (default: 5000)

--parseExtraFields
    Comma-separated list of meta-fields to be parsed

--maxRows
    Supports file splitting. Files are split by the number of rows specified

--fileSize
    Supports file splitting. This value must be a string supported by the **bytes** module.
    The following abbreviations must be used to signify size in terms of units:
      b for bytes
      kb for kilobytes
      mb for megabytes
      gb for gigabytes
      tb for terabytes
    e.g. 10mb / 1gb / 1tb
    Partitioning helps to alleviate overflow/out-of-memory exceptions by efficiently segmenting files
    into smaller chunks that can then be merged if need be.
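For instance, a sketch splitting a large export into roughly 10 MB files (the paths are illustrative):

elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data \
  --fileSize=10mb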

--fsCompress
    gzip data before sending output to file.
    On import, this flag is used to inflate a gzipped file

--s3AccessKeyId
    AWS access key ID

--s3SecretAccessKey
    AWS secret access key

--s3Region
    AWS region

--s3Endpoint
    AWS endpoint, can be used for AWS compatible backends such as
    OpenStack Swift and OpenStack Ceph

--s3SSLEnabled
    Use SSL to connect to AWS [default true]

--s3ForcePathStyle
    Force path style URLs for S3 objects [default false]

--s3Compress
    gzip data before sending to s3

--s3ServerSideEncryption
    Enables encrypted uploads

--s3SSEKMSKeyId
    KMS Id to be used with aws:kms uploads

--s3ACL
    S3 ACL: private | public-read | public-read-write | authenticated-read | aws-exec-read |
    bucket-owner-read | bucket-owner-full-control [default private]

--retryDelayBase
    The base number of milliseconds to use in the exponential backoff for operation retries. (s3)

--customBackoff
    Activate a custom backoff function. (s3)
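A sketch of dumping straight to S3, assuming an s3:// output URL is accepted (the bucket name, region, and credential variables are placeholders):

elasticdump \
  --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
  --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
  --s3Region=us-east-1 \
  --input=http://production.es.com:9200/my_index \
  --output "s3://my-bucket/my_index.json"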

--tlsAuth
    Enable TLS X509 client authentication

--cert, --input-cert, --output-cert
    Client certificate file. Use --cert if source and destination are identical.
    Otherwise, use the one prefixed with --input or --output as needed.

--key, --input-key, --output-key
    Private key file. Use --key if source and destination are identical.
    Otherwise, use the one prefixed with --input or --output as needed.

--pass, --input-pass, --output-pass
    Pass phrase for the private key. Use --pass if source and destination are identical.
    Otherwise, use the one prefixed with --input or --output as needed.

--ca, --input-ca, --output-ca
    CA certificate. Use --ca if source and destination are identical.
    Otherwise, use the one prefixed with --input or --output as needed.
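A sketch of client-certificate authentication against an https input (the certificate paths are placeholders):

elasticdump \
  --input=https://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data \
  --tlsAuth \
  --input-cert=/certs/client.crt \
  --input-key=/certs/client.key \
  --input-ca=/certs/ca.crt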

--inputSocksProxy, --outputSocksProxy
    Socks5 host address

--inputSocksPort, --outputSocksPort
    Socks5 host port

--handleVersion
    Tells the elasticsearch transport to handle the `_version` field if present in the dataset
    (default: false)

--versionType
    Elasticsearch versioning types. Should be `internal`, `external`, `external_gte`, or `force`.
    NB: Type validation is handled by the bulk endpoint and not by elasticsearch-dump

--csvDelimiter
    The delimiter that will separate columns.
    (default: ',')

--csvFirstRowAsHeaders
    If set to true, the first row will be treated as the headers.
    (default: true)

--csvRenameHeaders
    If you want the first line of the file to be removed and replaced by the one provided in the `csvCustomHeaders` option
    (default: true)

--csvCustomHeaders
    A comma-separated list of values that will be used as headers for your data. This param must
    be used in conjunction with `csvRenameHeaders`
    (default: null)

--csvWriteHeaders
    Determines if headers should be written to the csv file.
    (default: true)

--csvIgnoreEmpty
    Set to true to ignore empty rows.
    (default: false)

--csvSkipLines
    If number is > 0, the specified number of lines will be skipped.
    (default: 0)

--csvSkipRows
    If number is > 0, the specified number of parsed rows will be skipped
    (default: 0)

--csvTrim
    Set to true to trim all white space from columns.
    (default: false)

--csvRTrim
    Set to true to right trim all columns.
    (default: false)

--csvLTrim
    Set to true to left trim all columns.
    (default: false)

--csvHandleNestedData
    Set to true to handle nested JSON/CSV data.
    NB: This is a very opinionated implementation!
    (default: false)

--csvIdColumn
    Name of the column to extract the record identifier (id) from.
    When exporting to CSV, this column can be used to override the default id (@id) column name
    (default: null)

--csvIndexColumn
    Name of the column to extract the record index from.
    When exporting to CSV, this column can be used to override the default index (@index) column name
    (default: null)

--csvTypeColumn
    Name of the column to extract the record type from.
    When exporting to CSV, this column can be used to override the default type (@type) column name
    (default: null)
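A sketch of exporting an index to CSV, assuming the csv:// output URL form (the path and delimiter are illustrative):

elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output="csv:///data/my_index.csv" \
  --csvDelimiter=";" \
  --csvWriteHeaders=true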

--help
    This page

Examples:

# Copy an index from production to staging with mappings:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# Backup index data to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data

# Backup an index to a gzip using stdout:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz

# Backup the results of a query to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody "{\"query\":{\"term\":{\"username\": \"admin\"}}}"

------------------------------------------------------------------------------

Learn more @ https://github.com/taskrabbit/elasticsearch-dump

[root@root bin]#

1.3 Using the migration tool

For day-to-day use, the examples from the help output above can be used directly, e.g.:

# Copy an index from production to staging with mappings:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
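For a full index migration, a common pattern (a sketch built from the --type options listed above; the hosts are the same placeholders as in the examples) is to copy the analyzer and mapping before the data, so the destination index is created with the right settings:

# 1. Copy the analyzer/settings first
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=analyzer
# 2. Then the mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
# 3. Finally, the documents themselves
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data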
