Neo4j: show all configuration settings

SHOW SETTINGS
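The full list below is long. In Neo4j 5 Cypher, `SHOW SETTINGS` can be narrowed with a `YIELD ... WHERE` clause, for example to inspect only the checkpoint-related settings (a sketch; run it in Browser or cypher-shell against a live server):

```cypher
// List only checkpoint-related settings, keeping a few columns
SHOW SETTINGS
YIELD name, value, isDynamic, defaultValue
WHERE name STARTS WITH 'db.checkpoint'
RETURN name, value, isDynamic, defaultValue;
```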
Columns returned: name, value, isDynamic, defaultValue, description. Each entry below is listed as `name = value (dynamic: yes/no)`, followed by its description; the default equals the current value unless noted otherwise.

1. browser.allow_outgoing_connections = "true" (dynamic: no)
   Configure the policy for outgoing Neo4j Browser connections.

2. browser.credential_timeout = "0s" (dynamic: no)
   Configure the Neo4j Browser to time out logged-in users after this idle period. Setting this to 0 indicates no limit.

3. browser.post_connect_cmd = "" (dynamic: no)
   Commands to be run when Neo4j Browser successfully connects to this server. Separate multiple commands with semicolons.

4. browser.remote_content_hostname_whitelist = "guides.neo4j.com,localhost" (dynamic: no)
   Whitelist of hosts that the Neo4j Browser is allowed to fetch content from.

5. browser.retain_connection_credentials = "true" (dynamic: no)
   Configure whether the Neo4j Browser stores user credentials.

6. browser.retain_editor_history = "true" (dynamic: no)
   Configure whether the Neo4j Browser stores user editor history.

7. client.allow_telemetry = "true" (dynamic: no)
   Configure client applications such as Browser and Bloom to send Product Analytics data.

8. db.checkpoint = "PERIODIC" (dynamic: no)
   Configures the general policy for when checkpoints should occur. Possible values are:
   * `PERIODIC` (default): runs a checkpoint at the interval specified by `db.checkpoint.interval.tx` and `db.checkpoint.interval.time`.
   * `VOLUME`: runs a checkpoint when the size of the transaction logs reaches the value specified by the `db.checkpoint.interval.volume` setting. By default, it is set to `250.00MiB`.
   * `CONTINUOUS` (Enterprise Edition): ignores the `db.checkpoint.interval.tx` and `db.checkpoint.interval.time` settings and runs the checkpoint process all the time.
   * `VOLUMETRIC`: makes a best effort to checkpoint often enough that the database does not get too far behind on deleting old transaction logs, as specified in the `db.tx_log.rotation.retention_policy` setting.

9. db.checkpoint.interval.time = "15m" (dynamic: no)
   Configures the time interval between checkpoints. The database does not checkpoint more often than the specified interval (unless checkpointing is triggered by a different event), but might checkpoint less often if performing a checkpoint takes longer than the configured interval. A checkpoint is a point in the transaction logs from which recovery starts. Longer checkpoint intervals typically mean that recovery takes longer to complete in case of a crash. On the other hand, a longer checkpoint interval can also reduce the I/O load that the database places on the system, as each checkpoint implies flushing and forcing all the store files.

10. db.checkpoint.interval.tx = "100000" (dynamic: no)
    Configures the transaction interval between checkpoints. The same caveats as for `db.checkpoint.interval.time` apply. The default is `100000`, for a checkpoint every 100000 transactions.

11. db.checkpoint.interval.volume = "250.00MiB" (dynamic: no)
    Configures the volume of transaction logs between checkpoints. The same caveats as for `db.checkpoint.interval.time` apply.

12. db.checkpoint.iops.limit = "600" (dynamic: yes)
    Limits the number of IOs the background checkpoint process consumes per second. This setting is advisory: it is ignored in Neo4j Community Edition and followed on a best-effort basis in Enterprise Edition. An IO is, in this case, an 8 KiB (mostly sequential) write. Limiting the write IO in this way leaves more bandwidth in the IO subsystem to service random-read IOs, which is important for the response time of queries when the database cannot fit entirely in memory. The only drawback is that longer checkpoint times may lead to slightly longer recovery times in case of a database or system crash. A lower number means lower IO pressure and, consequently, longer checkpoint times. Set this to -1 to disable the IOPS limit entirely; this lets the checkpointer flush data as fast as the hardware goes. Removing or commenting out the setting restores the default value of 600.
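As a concrete illustration of the checkpoint settings above, a `neo4j.conf` fragment switching to volume-based checkpointing might look like this (the values are illustrative examples, not recommendations):

```properties
# Checkpoint once 500 MiB of transaction logs have accumulated
db.checkpoint=VOLUME
db.checkpoint.interval.volume=500m
# Cap background checkpoint IO; set to -1 to remove the limit entirely
db.checkpoint.iops.limit=300
```

Since `db.checkpoint.iops.limit` is dynamic, it can also be adjusted at runtime without a restart.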

13. db.cluster.catchup.pull_interval = "1s" (dynamic: no)
    Interval for pulling updates from cores.

14. db.cluster.raft.apply.buffer.max_bytes = "1.00GiB" (dynamic: no)
    The maximum number of bytes in the apply buffer. This parameter limits the amount of memory that can be consumed by the apply buffer. If the byte limit is reached, the buffer size is limited even if max_entries is not exceeded.

15. db.cluster.raft.apply.buffer.max_entries = "1024" (dynamic: no)
    The maximum number of entries in the RAFT log entry prefetch buffer.

16. db.cluster.raft.in_queue.batch.max_bytes = "8.00MiB" (dynamic: no)
    Largest batch processed by RAFT, in bytes.

17. db.cluster.raft.in_queue.max_bytes = "2.00GiB" (dynamic: no)
    Maximum number of bytes in the RAFT in-queue.

18. db.cluster.raft.leader_transfer.priority_group = "" (dynamic: no)
    The name of a server_group whose members should be prioritized as leaders. This does not guarantee that members of this group will be leader at all times, but the cluster will attempt to transfer leadership to such a member when possible. If a database is specified using db.cluster.raft.leader_transfer.priority_group.<database>, the specified priority group applies to that database only. If no database is specified, that group is the default and applies to all databases that have no priority group explicitly set. Using this setting disables leadership balancing.

19. db.cluster.raft.leader_transfer.priority_tag = "" (dynamic: no)
    The name of a server tag whose members should be prioritized as leaders. This does not guarantee that members with this tag will be leader at all times, but the cluster will attempt to transfer leadership to such a member when possible. If a database is specified using db.cluster.raft.leader_transfer.priority_tag.<database>, the specified priority tag applies to that database only. If no database is specified, that tag is the default and applies to all databases that have no priority tag explicitly set. Using this setting disables leadership balancing.

20. db.cluster.raft.log.prune_strategy = "1g size" (dynamic: no)
    RAFT log pruning strategy that determines which logs are to be pruned. Neo4j only prunes log entries up to the last applied index, which guarantees that logs are only marked for pruning once the transactions within are safely copied over to the local transaction logs and safely committed by a majority of cluster members. Possible values are a byte size or a number of transactions (e.g., 200K txs).

21. db.cluster.raft.log_shipping.buffer.max_bytes = "1.00GiB" (dynamic: no)
    The maximum number of bytes in the in-flight cache. This parameter limits the amount of memory that can be consumed by the cache. If the byte limit is reached, the cache size is limited even if max_entries is not exceeded.

22. db.cluster.raft.log_shipping.buffer.max_entries = "1024" (dynamic: no)
    The maximum number of entries in the in-flight cache. Increasing the size requires more memory but might improve performance in high-load situations.

23. db.filewatcher.enabled = "true" (dynamic: no)
    Enables or disables the file watcher service. This is an auxiliary service but should be left enabled in almost all cases.

24. db.format = "aligned" (dynamic: yes)
    Database format; this is the format that will be used for new databases. Valid values are `standard`, `aligned`, `high_limit`, or `block`. The `aligned` format is essentially the `standard` format with some minimal padding at the end of pages, so that a single record never crosses a page boundary. The `high_limit` and `block` formats are available in Enterprise Edition only. Either `high_limit` or `block` is required if you have a graph larger than 34 billion nodes, 34 billion relationships, or 68 billion properties.

25. db.import.csv.buffer_size = "2097152" (dynamic: no)
    The size, in bytes, of the internal buffer used by `LOAD CSV`. If the CSV file contains huge fields, this value may have to be increased.

26. db.import.csv.legacy_quote_escaping = "true" (dynamic: no)
    Selects whether to conform to the standard RFC 4180 (Common Format and MIME Type for Comma-Separated Values (CSV) Files) when interpreting escaped quotation characters in CSV files loaded using `LOAD CSV`. Setting this to `false` uses the standard, interpreting repeated quotes '""' as a single in-lined quote, while `true` uses the legacy convention originally supported in Neo4j 3.0 and 3.1, allowing a backslash to include quotes in-lined in fields.

27. db.index.fulltext.default_analyzer = "standard-no-stop-words" (dynamic: no)
    The name of the analyzer that fulltext indexes use by default.

28. db.index.fulltext.eventually_consistent = "false" (dynamic: no)
    Whether fulltext indexes should be eventually consistent by default.

29. db.index.fulltext.eventually_consistent_index_update_queue_max_length = "10000" (dynamic: no)
    The eventually_consistent mode of fulltext indexes works by queueing up index updates to be applied later in a background thread. This setting puts an upper bound on how many index updates may be in the queue at any one point in time. When the bound is reached, the commit process slows down and waits for the index update applier thread to make room in the queue.

30. db.index_sampling.background_enabled = "true" (dynamic: no)
    Enable or disable background index sampling.

31. db.index_sampling.sample_size_limit = "8388608" (dynamic: no)
    Index sampling chunk size limit.

32. db.index_sampling.update_percentage = "5" (dynamic: no)
    Percentage of index updates, relative to total index size, required before sampling of a given index is triggered.
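If `LOAD CSV` fails on rows with very large fields, the buffer described above can be enlarged in `neo4j.conf` (the 8 MiB value is an illustrative example):

```properties
# Raise the LOAD CSV field buffer from the 2 MiB default to 8 MiB
db.import.csv.buffer_size=8388608
# Interpret repeated quotes per RFC 4180 instead of legacy backslash escaping
db.import.csv.legacy_quote_escaping=false
```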

33. db.lock.acquisition.timeout = "0s" (dynamic: yes)
    The maximum time interval within which a lock should be acquired. Zero (the default) means the timeout is disabled.

34. db.logs.query.annotation_data_as_json_enabled = "false" (dynamic: yes)
    Log the annotation data as JSON strings instead of a Cypher map. This only has an effect when the query log is in JSON format.

35. db.logs.query.annotation_data_format = "CYPHER" (dynamic: yes)
    The format to use for the JSON annotation data. `CYPHER`: formatted as a Cypher map, e.g. `{foo: 'bar', baz: {k: 1}}`. `JSON`: formatted as a JSON map, e.g. `{"foo": "bar", "baz": {"k": 1}}`. `FLAT_JSON`: formatted as a flattened JSON map, e.g. `{"foo": "bar", "baz.k": 1}`. This only has an effect when the query log is in JSON format.

36. db.logs.query.early_raw_logging_enabled = "false" (dynamic: yes)
    Log query text and parameters without obfuscating passwords. This allows queries to be logged earlier, before parsing starts.

37. db.logs.query.enabled = "VERBOSE" (dynamic: yes)
    Log executed queries. Valid values are `OFF`, `INFO`, or `VERBOSE`. `OFF`: no logging. `INFO`: log queries that take longer than the configured threshold, `db.logs.query.threshold`, at the end of execution. `VERBOSE`: log queries at the start and end of execution, regardless of `db.logs.query.threshold`. Log entries are written to the query log. This feature is available in Neo4j Enterprise Edition.

38. db.logs.query.max_parameter_length = "2147483647" (dynamic: yes)
    Sets the maximum character length used for each parameter in the log. This only takes effect if `db.logs.query.parameter_logging_enabled = true`.

39. db.logs.query.obfuscate_literals = "false" (dynamic: yes)
    Obfuscates all literals of the query before writing it to the log. Note that node labels, relationship types, and map property keys are still shown. Changing the setting does not affect queries that are cached, so if you want the switch to have immediate effect, you must also call `CALL db.clearQueryCaches()`.

40. db.logs.query.parameter_logging_enabled = "true" (dynamic: yes)
    Log parameters for the executed queries being logged.

41. db.logs.query.plan_description_enabled = "false" (dynamic: yes)
    Log the query plan description table, useful for debugging purposes.

42. db.logs.query.threshold = "0s" (dynamic: yes)
    If the execution of a query takes longer than this threshold, the query is logged once completed, provided query logging is set to INFO. Defaults to 0 seconds, meaning all queries are logged.

43. db.logs.query.transaction.enabled = "OFF" (dynamic: yes)
    Log the start and end of a transaction. Valid values are `OFF`, `INFO`, or `VERBOSE`. `OFF`: no logging. `INFO`: log the start and end of transactions that take longer than the configured threshold, `db.logs.query.transaction.threshold`. `VERBOSE`: log the start and end of all transactions. Log entries are written to the query log. This feature is available in Neo4j Enterprise Edition.

44. db.logs.query.transaction.threshold = "0s" (dynamic: yes)
    If a transaction is open for longer than this threshold, the transaction is logged once completed, provided transaction logging (db.logs.query.transaction.enabled) is set to `INFO`. Defaults to 0 seconds (all transactions are logged).
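Pulling the query-log settings above together, a typical slow-query logging setup in `neo4j.conf` could look like this (the threshold is an illustrative example):

```properties
# Log only queries slower than 500 ms, at end of execution
db.logs.query.enabled=INFO
db.logs.query.threshold=500ms
# Include parameters, but hide literal values in the logged query text
db.logs.query.parameter_logging_enabled=true
db.logs.query.obfuscate_literals=true
```

All of these settings are dynamic, so they can also be changed at runtime.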

45. db.memory.pagecache.warmup.enable = "true" (dynamic: no)
    The page cache can be configured to perform usage sampling of loaded pages, which is used to construct an active load profile. According to that profile, pages can be reloaded on restart, replication, etc. This setting allows disabling that behavior. This feature is available in Neo4j Enterprise Edition.

46. db.memory.pagecache.warmup.preload = "false" (dynamic: no)
    Page cache warmup can be configured to prefetch files, preferably when the cache size is bigger than the store size. Files to be prefetched can be filtered with 'db.memory.pagecache.warmup.preload.allowlist'. Enabling this disables warmup by profile.

47. db.memory.pagecache.warmup.preload.allowlist = ".*" (dynamic: no)
    Page cache warmup prefetch file allowlist regex. By default, matches all files.

48. db.memory.pagecache.warmup.profile.interval = "1m" (dynamic: no)
    The profiling frequency for the page cache. Accurate profiles allow the page cache to do active warmup after a restart, reducing the mean time to performance. This feature is available in Neo4j Enterprise Edition.

49. db.memory.transaction.max = "0B" (dynamic: yes)
    Limits the amount of memory that a single transaction can consume, in bytes (or kibibytes with the 'k' suffix, mebibytes with 'm', and gibibytes with 'g'). Zero means 'largest possible value'.

50. db.memory.transaction.total.max = "0B" (dynamic: yes)
    Limits the amount of memory that all transactions in one database can consume, in bytes (or kibibytes with the 'k' suffix, mebibytes with 'm', and gibibytes with 'g'). Zero means 'unlimited'.
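The two transaction-memory limits above can be combined in `neo4j.conf` to guard against runaway queries (the sizes are illustrative examples, not recommendations):

```properties
# Fail any single transaction that tries to use more than 1 GiB
db.memory.transaction.max=1g
# Cap the combined transaction memory for this database at 4 GiB
db.memory.transaction.total.max=4g
```

Both settings are dynamic, so they can be tightened or relaxed without restarting the server.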

51. db.recovery.fail_on_missing_files = "true" (dynamic: no)
    If `true`, Neo4j aborts recovery if transaction log files are missing. Setting this to `false` allows Neo4j to create new empty missing files for an already existing database, but the integrity of the database might be compromised.

52. db.relationship_grouping_threshold = "50" (dynamic: no)
    Relationship count threshold for considering a node to be dense.

53. db.shutdown_transaction_end_timeout = "10s" (dynamic: no)
    The maximum amount of time to wait for running transactions to complete before allowing an initiated database shutdown to continue.

54. db.store.files.preallocate = "true" (dynamic: no)
    Specifies whether Neo4j should try to preallocate store files as they grow.

55. db.temporal.timezone = "Z" (dynamic: no)
    Database timezone for temporal functions. All Time and DateTime values created without an explicit timezone use this configured default timezone.

56. db.track_query_cpu_time = "false" (dynamic: yes)
    Enables or disables tracking of how much time a query spends actively executing on the CPU. Calling `SHOW TRANSACTIONS` displays the time.

57. db.transaction.bookmark_ready_timeout = "30s" (dynamic: yes)
    The maximum amount of time to wait for the database state represented by the bookmark.

58. db.transaction.concurrent.maximum = "1000" (dynamic: yes)
    The maximum number of concurrently running transactions. If set to 0, the limit is disabled.

59. db.transaction.monitor.check.interval = "2s" (dynamic: no)
    Configures the time interval between transaction monitor checks. Determines how often the monitor thread checks transactions for timeout.

60. db.transaction.sampling.percentage = "5" (dynamic: yes)
    Transaction sampling percentage.

61. db.transaction.timeout = "0s" (dynamic: yes)
    The maximum time interval of a transaction, within which it should be completed.

62. db.transaction.tracing.level = "DISABLED" (dynamic: yes)
    Transaction creation tracing level.

63. db.tx_log.buffer.size = "2097152" (dynamic: no)
    On serialization, transaction logs are temporarily stored in a byte buffer that is flushed at the end of the transaction, or at any moment when the buffer becomes full. By default, the size of the byte buffer is based on the number of available CPUs, with a minimum of 512KB: every additional 4 CPUs add another 512KB to the buffer size. The maximum buffer size in this default scheme is 4MB, taking into account that there can be one transaction log writer per database in a multi-database environment. For example, runtimes with 4 CPUs have a buffer size of 1MB, runtimes with 8 CPUs have 1.5MB, and runtimes with 12 CPUs have 2MB.

64. db.tx_log.preallocate = "true" (dynamic: yes)
    Specifies whether Neo4j should try to preallocate the logical log file in advance.

65. db.tx_log.rotation.retention_policy = "2 days 2G" (default "2 days", dynamic: yes)
    Tells Neo4j how long logical transaction logs should be kept to back up the database. For example, "10 days" prunes logical logs that only contain transactions older than 10 days. Alternatively, "100k txs" keeps the 100k latest transactions from each database and prunes any older transactions.

66. db.tx_log.rotation.size = "256.00MiB" (dynamic: yes)
    Specifies the file size at which the logical log auto-rotates. The minimum accepted value is 128 KiB.

67. db.tx_state.memory_allocation = "ON_HEAP" (dynamic: no)
    Defines whether memory for transaction state should be allocated on-heap or off-heap. Note that for small transactions you can gain up to 25% write speed by setting it to `ON_HEAP`.
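The retention and rotation settings above combine naturally in `neo4j.conf`; the current value "2 days 2G" in the table shows that a time bound and a size bound can be given together. A fragment with illustrative values:

```properties
# Keep transaction logs for at most 3 days or 1 GiB, whichever is hit first
db.tx_log.rotation.retention_policy=3 days 1G
# Rotate the logical log at 128 MiB instead of the 256 MiB default
db.tx_log.rotation.size=128m
```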

68. dbms.cluster.catchup.client_inactivity_timeout = "10m" (dynamic: no)
    The catch-up protocol times out if the given duration elapses with no network activity. Every message received by the client from the server extends the timeout duration.

69. dbms.cluster.discovery.endpoints = null (dynamic: no)
    A comma-separated list of endpoints that a server should contact in order to discover other cluster members.

70. dbms.cluster.discovery.log_level = "WARN" (dynamic: no)
    The level of middleware logging.

71. dbms.cluster.discovery.resolver_type = "LIST" (dynamic: no)
    Configures the resolver type that the discovery service uses to determine who should be part of the cluster. Valid values are `LIST`, `SRV`, `DNS`, and `K8S`. `LIST`: a static configuration where `dbms.cluster.discovery.endpoints` must contain a list of the addresses of the cluster members. `SRV` and `DNS`: a dynamic configuration where `dbms.cluster.discovery.endpoints` must point to a DNS entry containing the cluster members' addresses. `K8S`: at least `dbms.kubernetes.service_port_name` must be set; the addresses of the cluster members are queried dynamically from Kubernetes.

72. dbms.cluster.discovery.type = "LIST" (dynamic: no)
    This setting has been replaced by 'dbms.cluster.discovery.resolver_type'.

73. dbms.cluster.minimum_initial_system_primaries_count = "3" (dynamic: no)
    Minimum number of machines initially required to form a clustered DBMS. The cluster is considered formed when at least this many members have discovered each other, bound together, and bootstrapped a highly available system database. As a result, at least this many of the cluster's initial machines must have 'server.cluster.system_database_mode' set to 'PRIMARY'. NOTE: if 'dbms.cluster.discovery.resolver_type' is set to 'LIST' and 'dbms.cluster.discovery.endpoints' is empty, the user is assumed to be deploying a standalone DBMS, and the value of this setting is ignored.

74. dbms.cluster.network.connect_timeout = "30s" (dynamic: yes)
    The maximum amount of time to wait for a network connection to be established.

75. dbms.cluster.network.handshake_timeout = "20s" (dynamic: no)
    Timeout for the protocol negotiation handshake.

76. dbms.cluster.network.max_chunk_size = "32768" (dynamic: no)
    Maximum chunk size allowed across the network by the clustering machinery.

77. dbms.cluster.network.supported_compression_algos = "" (dynamic: no)
    Network compression algorithms that this instance allows in negotiation, as a comma-separated list, in descending order of preference for incoming connections. An empty list implies no compression. For outgoing connections this merely specifies the allowed set of algorithms, and the preference of the remote peer is used for making the decision. Allowable values: [Gzip, Snappy, Snappy_validating, LZ4, LZ4_high_compression, LZ_validating, LZ4_high_compression_validating]
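A minimal static (`LIST`) discovery configuration in `neo4j.conf`, tying together the discovery settings above, might look like this (the hostnames and port are hypothetical placeholders):

```properties
# Static discovery: every server lists the same discovery endpoints
dbms.cluster.discovery.resolver_type=LIST
dbms.cluster.discovery.endpoints=server1:5000,server2:5000,server3:5000
# Require three primaries before the cluster is considered formed
dbms.cluster.minimum_initial_system_primaries_count=3
```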

78. dbms.cluster.raft.binding_timeout = "1d" (dynamic: no)
    The time allowed for a database on a Neo4j server to either join a cluster or form a new cluster with at least a quorum of the members available. The members are provided by `dbms.cluster.discovery.endpoints` for the system database and by the topology graph for user databases.

79. dbms.cluster.raft.client.max_channels = "8" (dynamic: no)
    The maximum number of TCP channels between two nodes operating the Raft protocol. Each database gets allocated one channel, but a single channel can be used by more than one database.

80. dbms.cluster.raft.election_failure_detection_window = "3s-6s" (dynamic: no)
    The rate at which leader elections happen. Note that due to election conflicts it might take several attempts to find a leader. The window should be significantly larger than typical communication delays to make conflicts unlikely.

81. dbms.cluster.raft.leader_failure_detection_window = "20s-23s" (dynamic: no)
    The time window within which the loss of the leader is detected and the first re-election attempt is held. The window should be significantly larger than typical communication delays to make conflicts unlikely.

82. dbms.cluster.raft.leader_transfer.balancing_strategy = "EQUAL_BALANCING" (dynamic: no)
    Which strategy to use when transferring database leaderships around a cluster. This can be one of `equal_balancing` or `no_balancing`. `equal_balancing` automatically ensures that each Core server holds the leader role for an equal number of databases. `no_balancing` prevents any automatic balancing of the leader role. Note that if a `leadership_priority_group` is specified for a given database, the value of this setting is ignored for that database.

83. dbms.cluster.raft.log.pruning_frequency = "10m" (dynamic: no)
    RAFT log pruning frequency.

84. dbms.cluster.raft.log.reader_pool_size = "8" (dynamic: no)
    RAFT log reader pool size.

85. dbms.cluster.raft.log.rotation_size = "250.00MiB" (dynamic: no)
    RAFT log rotation size.

86. dbms.cluster.raft.membership.join_max_lag = "10s" (dynamic: no)
    Maximum amount of lag accepted for a new follower to join the Raft group.

87. dbms.cluster.raft.membership.join_timeout = "10m" (dynamic: no)
    Timeout for a new member to catch up.

88. dbms.cluster.store_copy.max_retry_time_per_request = "20m" (dynamic: no)
    Maximum retry time per request during store copy. Regular store files and indexes are downloaded in separate requests during store copy; this configures the maximum time failed requests are allowed to resend.

89. dbms.cypher.forbid_exhaustive_shortestpath = "false" (dynamic: no)
    This setting is associated with performance optimization. Set this to `true` in situations where it is preferable to have any queries using the 'shortestPath' function terminate as soon as possible with no answer, rather than potentially running for a long time attempting to find an answer (even if there is no path to be found). For most queries, the 'shortestPath' algorithm returns the correct answer very quickly. However, there are some cases where it is possible that the fast bidirectional breadth-first search algorithm finds no results even though results exist. This can happen when the predicates in the `WHERE` clause applied to 'shortestPath' cannot be applied to each step of the traversal, but only to the entire path. When the query planner detects these special cases, it plans to perform an exhaustive depth-first search if the fast algorithm finds no paths. However, the exhaustive search may be orders of magnitude slower than the fast algorithm. If it is critical that queries terminate as soon as possible, it is recommended to set this option to `true`, which means that Neo4j never considers using the exhaustive search for shortestPath queries. Note, however, that if no paths are found, an error is thrown at run time, which will need to be handled by the application.

90. dbms.cypher.forbid_shortestpath_common_nodes = "true" (dynamic: no)
    This setting is associated with performance optimization. The shortest path algorithm does not work when the start and end nodes are the same. With this setting set to `false`, no path is returned when that happens; the default value of `true` instead throws an exception. This can happen if you perform a shortestPath search after a cartesian product that might have the same start and end nodes for some of the rows passed to shortestPath. If it is preferable not to experience this exception, and acceptable for results to be missing for those rows, set this to `false`. If you cannot accept missing results and really want the shortestPath between two common nodes, re-write the query using a standard Cypher variable-length pattern expression, followed by ordering by path length and limiting to one result.
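The rewrite suggested in the description above (variable-length pattern, order by path length, limit one result) can be sketched in Cypher as follows; the `Person` label, the `name` properties, and the hop limit are hypothetical examples:

```cypher
// Instead of shortestPath((a)-[*]-(b)) when a and b might be the same node:
MATCH p = (a:Person {name: 'Ann'})-[*..15]-(b:Person {name: 'Bob'})
RETURN p
ORDER BY length(p)
LIMIT 1;
```

The explicit upper bound on the pattern length keeps the search space manageable; without it, a variable-length match can be very expensive.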

91. dbms.cypher.hints_error = "false" (dynamic: no)
    Specifies the behavior when Cypher planner or runtime hints cannot be fulfilled. If true, non-conformance results in an error; otherwise, only a warning is generated.

92. dbms.cypher.lenient_create_relationship = "false" (dynamic: no)
    Changes the behavior of Cypher create relationship when the start or end node is missing. By default this fails the query and stops execution, but with this flag set, the create operation is simply not performed and execution continues.

93. dbms.cypher.min_replan_interval = "10s" (dynamic: no)
    The minimum time between possible Cypher query replanning events. After this time, the graph statistics are evaluated, and if they have changed by more than the value set by dbms.cypher.statistics_divergence_threshold, the query is replanned. If the statistics have not changed sufficiently, the same interval must pass before the statistics are evaluated again. Each time they are evaluated, the divergence threshold is reduced slightly until it reaches 10% after 7h, so that even moderately changing databases see query replanning after a sufficiently long time interval.

94. dbms.cypher.planner = "DEFAULT" (dynamic: no)
    Specifies the default planner for the default language version.

95. dbms.cypher.render_plan_description = "true" (dynamic: yes)
    If set to `true`, a textual representation of the plan description is rendered on the server for all queries running with `EXPLAIN` or `PROFILE`. This allows clients such as Neo4j Browser and Cypher shell to show a more detailed plan description.

96. dbms.cypher.statistics_divergence_threshold = "0.75" (dynamic: no)
    The threshold for statistics above which a plan is considered stale. If any of the underlying statistics used to create the plan have changed by more than this value, the plan is considered stale and is replanned. Change is calculated as `abs(a-b)/max(a,b)`. This means that a value of `0.75` requires the database to quadruple in size before query replanning. A value of `0` means that the query is replanned as soon as there is any change in statistics and the replan interval has elapsed. This interval is defined by `dbms.cypher.min_replan_interval` and defaults to 10s. After this interval, the divergence threshold slowly starts to decline, reaching 10% after about 7h. This ensures that long-running databases still get query replanning on even modest changes, while not replanning frequently unless the changes are very large.

97. dbms.databases.seed_from_uri_providers = "S3SeedProvider" (dynamic: no)
    Databases may be created from an existing 'seed' (a database backup or dump) stored at some source URI. Different types of seed source are supported by different implementations of `com.neo4j.dbms.seeding.SeedProvider`. For example, seeds stored at 's3://' and 'https://' URIs are supported by the built-in `S3SeedProvider` and `URLConnectionSeedProvider`, respectively. This list specifies the enabled seed providers. If a seed source (URI scheme) is supported by multiple providers in the list, the first matching provider is used. If the list is set to empty, the seed-from-URI functionality is effectively disabled.

98. dbms.db.timezone = "UTC" (dynamic: no)
    Database timezone. Among other things, this setting influences the monitoring procedures.

99. dbms.kubernetes.address = "kubernetes.default.svc:443" (dynamic: no)
    Address for the Kubernetes API.

100. dbms.kubernetes.ca_crt = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" (dynamic: no)
     File location of the CA certificate for the Kubernetes API.

101. dbms.kubernetes.cluster_domain = "cluster.local" (dynamic: no)
     Kubernetes cluster domain.

102. dbms.kubernetes.label_selector = null (dynamic: no)
     LabelSelector for the Kubernetes API.

103. dbms.kubernetes.namespace = "/var/run/secrets/kubernetes.io/serviceaccount/namespace" (dynamic: no)
     File location of the namespace for the Kubernetes API.

104. dbms.kubernetes.service_port_name = null (dynamic: no)
     Service port name for discovery via the Kubernetes API.

105. dbms.kubernetes.token = "/var/run/secrets/kubernetes.io/serviceaccount/token" (dynamic: no)
     File location of the token for the Kubernetes API.
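Per the `dbms.cluster.discovery.resolver_type` description, Kubernetes-based discovery requires at least a service port name. A sketch of such a `neo4j.conf` fragment, in which the port name and label selector values are hypothetical and must match your Kubernetes service definition:

```properties
# Discover cluster members through the Kubernetes API
dbms.cluster.discovery.resolver_type=K8S
# Hypothetical port name and label selector; adjust to your deployment
dbms.kubernetes.service_port_name=tcp-discovery
dbms.kubernetes.label_selector=app=neo4j
```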

106. dbms.logs.http.enabled = "false" (dynamic: no)
     Enable HTTP request logging.

107. dbms.max_databases = "100" (dynamic: no)
     The maximum number of databases.

108. dbms.memory.tracking.enable = "true" (dynamic: no)
     Enable off-heap and on-heap memory tracking. Should not be set to `false` for clusters.

109. dbms.memory.transaction.total.max = "716.80MiB" (dynamic: yes)
     Limits the amount of memory that all running transactions can consume, in bytes (or kibibytes with the 'k' suffix, mebibytes with 'm', and gibibytes with 'g'). Zero means 'unlimited'. Defaults to 70% of the heap size limit.

110. dbms.netty.ssl.provider = "JDK" (dynamic: no)
     Netty SSL provider.

111

"dbms.routing.client_side.enforce_for_domains"""true"""Always use client side routing (regardless of the default router) for neo4j:// protocol connections to these domains. A comma separated list of domains. Wildcards (*) are supported."

112

"dbms.routing.default_router""CLIENT"false"CLIENT""Routing strategy for neo4j:// protocol connections. Default is `CLIENT`, using client-side routing, with server-side routing as a fallback (if enabled). When set to `SERVER`, client-side routing is short-circuited, and requests will rely on server-side routing (which must be enabled for proper operation, i.e. `dbms.routing.enabled=true`). Can be overridden by `dbms.routing.client_side.enforce_for_domains`."

113

"dbms.routing.driver.connection.connect_timeout""5s"false"5s""Socket connection timeout. A timeout of zero is treated as an infinite timeout and will be bound by the timeout configured on the operating system level."

114

"dbms.routing.driver.connection.max_lifetime""1h"false"1h""Pooled connections older than this threshold will be closed and removed from the pool. Setting this option to a low value will cause a high connection churn and might result in a performance hit. It is recommended to set maximum lifetime to a slightly smaller value than the one configured in network equipment (load balancer, proxy, firewall, etc. can also limit maximum connection lifetime). Zero and negative values result in lifetime not being checked."

115

"dbms.routing.driver.connection.pool.acquisition_timeout""1m"false"1m""Maximum amount of time spent attempting to acquire a connection from the connection pool. This timeout only kicks in when all existing connections are being used and no new connections can be created because maximum connection pool size has been reached. Error is raised when connection can't be acquired within configured time. Negative values are allowed and result in unlimited acquisition timeout. Value of 0 is allowed and results in no timeout and immediate failure when connection is unavailable"

116

"dbms.routing.driver.connection.pool.idle_test"nullfalsenull"Pooled connections that have been idle in the pool for longer than this timeout will be tested before they are used again, to ensure they are still alive. If this option is set too low, an additional network call will be incurred when acquiring a connection, which causes a performance hit. If this is set high, no longer live connections might be used which might lead to errors. Hence, this parameter tunes a balance between the likelihood of experiencing connection problems and performance. Normally, this parameter should not need tuning. Value 0 means connections will always be tested for validity. No connection liveliness check is done by default."

117

"dbms.routing.driver.connection.pool.max_size""-1"false"-1""Maximum total number of connections to be managed by a connection pool. The limit is enforced for a combination of a host and user. Negative values are allowed and result in unlimited pool. Value of `0`is not allowed. Defaults to `-1` (unlimited)."

118

"dbms.routing.driver.logging.level""INFO"false"INFO""Sets level for driver internal logging."

119

"dbms.routing.enabled""true"false"true""Enable server-side routing in clusters using an additional bolt connector. When configured, this allows requests to be forwarded from one cluster member to another, if the requests can't be satisfied by the first member (e.g. write requests received by a non-leader)."

120

"dbms.routing.load_balancing.plugin""server_policies"false"server_policies""The load balancing plugin to use."

121

"dbms.routing.load_balancing.shuffle_enabled""true"false"true""Vary the order of the entries in routing tables each time one is produced. This means that different clients should select a range of servers as their first contact, reducing the chance of all clients contacting the same server if alternatives are available. This makes the load across the servers more even."

122

"dbms.routing.reads_on_primaries_enabled""true"false"true""Configure if the `dbms.routing.getRoutingTable()` procedure should include non-writer primaries as read endpoints or return only secondaries. Note: if there are no secondaries for the given database primaries are returned as read end points regardless the value of this setting. Defaults to true so that non-writer primaries are available for read-only queries in a typical heterogeneous setup."

123

"dbms.routing.reads_on_writers_enabled""false"true"false""Configure if the `dbms.routing.getRoutingTable()` procedure should include the writer as read endpoint or return only non-writers (non writer primaries and secondaries) Note: writer is returned as read endpoint if no other member is present all."

124

"dbms.routing_ttl""5m"false"5m""How long callers should cache the response of the routing procedure `dbms.routing.getRoutingTable()`"

125

"dbms.security.allow_csv_import_from_file_urls""true"false"true""Determines if Cypher will allow using file URLs when loading data using `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV` clauses that load data from the file system."

126

"dbms.security.auth_cache_max_capacity""10000"false"10000""The maximum capacity for authentication and authorization caches (respectively)."

127

"dbms.security.auth_cache_ttl""10m"false"10m""The time to live (TTL) for cached authentication and authorization info when using external auth providers (LDAP or plugin). Setting the TTL to 0 will disable auth caching. Disabling caching while using the LDAP auth provider requires the use of an LDAP system account for resolving authorization information."

128

"dbms.security.auth_cache_use_ttl""true"false"true""Enable time-based eviction of the authentication and authorization info cache for external auth providers (LDAP or plugin). Disabling this setting will make the cache live forever and only be evicted when `dbms.security.auth_cache_max_capacity` is exceeded."

129

"dbms.security.auth_enabled""true"false"true""Enable auth requirement to access Neo4j. Defaults to `true`."

130

"dbms.security.auth_lock_time""5s"false"5s""The amount of time user account should be locked after a configured number of unsuccessful authentication attempts. The locked out user will not be able to log in until the lock period expires, even if correct credentials are provided. Setting this configuration option to a low value is not recommended because it might make it easier for an attacker to brute force the password."

131

"dbms.security.auth_max_failed_attempts""3"false"3""The maximum number of unsuccessful authentication attempts before imposing a user lock for the configured amount of time, as defined by `dbms.security.auth_lock_time`.The locked out user will not be able to log in until the lock period expires, even if correct credentials are provided. Setting this configuration option to values less than 3 is not recommended because it might make it easier for an attacker to brute force the password."

132

"dbms.security.auth_minimum_password_length""8"false"8""The minimum number of characters required in a password."

133

"dbms.security.authentication_providers""native,plugin-com.neo4j.plugin.jwt.auth.JwtAuthPlugin"false"native""A list of security authentication providers containing the users and roles. This can be any of the built-in `native` or `ldap` providers, or it can be an externally provided plugin, with a custom name prefixed by `plugin-`, i.e. `plugin-<AUTH_PROVIDER_NAME>`. They will be queried in the given order when login is attempted."

134

"dbms.security.authorization_providers""native,plugin-com.neo4j.plugin.jwt.auth.JwtAuthPlugin"false"native""A list of security authorization providers containing the users and roles. This can be any of the built-in `native` or `ldap` providers, or it can be an externally provided plugin, with a custom name prefixed by `plugin-`, i.e. `plugin-<AUTH_PROVIDER_NAME>`. They will be queried in the given order when login is attempted."

135

"dbms.security.cluster_status_auth_enabled""true"false"true""Require authorization for access to the Causal Clustering status endpoints."

136

"dbms.security.http_access_control_allow_origin""*"false"*""Value of the Access-Control-Allow-Origin header sent over any HTTP or HTTPS connector. This defaults to '*', which allows broadest compatibility. Note that any URI provided here limits HTTP/HTTPS access to that URI only."

137

"dbms.security.http_auth_allowlist""/,/browser.*"false"/,/browser.*""Defines an allowlist of http paths where Neo4j authentication is not required."

138

"dbms.security.http_static_content_security_policy_header""default-src 'self'; script-src 'self' cdn.segment.com canny.io; img-src 'self' data:; style-src 'self' fonts.googleapis.com 'unsafe-inline'; font-src 'self' fonts.gstatic.com; base-uri 'none'; object-src 'none'; frame-ancestors 'none'; connect-src 'self' api.canny.io api.segment.io ws: wss: http: https:"false"default-src 'self'; script-src 'self' cdn.segment.com canny.io; img-src 'self' data:; style-src 'self' fonts.googleapis.com 'unsafe-inline'; font-src 'self' fonts.gstatic.com; base-uri 'none'; object-src 'none'; frame-ancestors 'none'; connect-src 'self' api.canny.io api.segment.io ws: wss: http: https:""Defines the Content-Security-Policy header to return to content returned on static endpoints."

139

"dbms.security.http_strict_transport_security"nullfalsenull"Value of the HTTP Strict-Transport-Security (HSTS) response header. This header tells browsers that a webpage should only be accessed using HTTPS instead of HTTP. It is attached to every HTTPS response. Setting is not set by default so 'Strict-Transport-Security' header is not sent. Value is expected to contain directives like 'max-age', 'includeSubDomains' and 'preload'."

140

"dbms.security.key.name""aesKey"true"aesKey""Name of the 256 length AES encryption key, which is used for the symmetric encryption."

141

"dbms.security.keystore.password"nulltruenull"Password for accessing the keystore holding a 256 length AES encryption key, which is used for the symmetric encryption."

142

"dbms.security.keystore.path"nulltruenull"Location of the keystore holding a 256 length AES encryption key, which is used for the symmetric encryption of secrets held in system database."

143

"dbms.security.ldap.authentication.attribute""samaccountname"true"samaccountname""The attribute to use when looking up users. Using this setting requires `dbms.security.ldap.authentication.search_for_attribute` to be true and thus `dbms.security.ldap.authorization.system_username` and `dbms.security.ldap.authorization.system_password` to be configured."

144

"dbms.security.ldap.authentication.cache_enabled""true"false"true""Determines if the result of authentication via the LDAP server should be cached or not. Caching is used to limit the number of LDAP requests that have to be made over the network for users that have already been authenticated successfully. A user can be authenticated against an existing cache entry (instead of via an LDAP server) as long as it is alive (see `dbms.security.auth_cache_ttl`). An important consequence of setting this to `true` is that Neo4j then needs to cache a hashed version of the credentials in order to perform credentials matching. This hashing is done using a cryptographic hash function together with a random salt. Preferably a conscious decision should be made if this method is considered acceptable by the security standards of the organization in which this Neo4j instance is deployed."

145

"dbms.security.ldap.authentication.mechanism""simple"false"simple""LDAP authentication mechanism. This is one of `simple` or a SASL mechanism supported by JNDI, for example `DIGEST-MD5`. `simple` is basic username and password authentication and SASL is used for more advanced mechanisms. See RFC 2251 LDAPv3 documentation for more details."

146

"dbms.security.ldap.authentication.search_for_attribute""false"false"false""Perform authentication by searching for an unique attribute of a user. Using this setting requires `dbms.security.ldap.authorization.system_username` and `dbms.security.ldap.authorization.system_password` to be configured."

147

"dbms.security.ldap.authentication.user_dn_template""uid={0},ou=users,dc=example,dc=com"true"uid={0},ou=users,dc=example,dc=com""LDAP user DN template. An LDAP object is referenced by its distinguished name (DN), and a user DN is an LDAP fully-qualified unique user identifier. This setting is used to generate an LDAP DN that conforms with the LDAP directory's schema from the user principal that is submitted with the authentication token when logging in. The special token {0} is a placeholder where the user principal will be substituted into the DN string."

148

"dbms.security.ldap.authorization.access_permitted_group"""true"""The LDAP group to which a user must belong to get any access to the system.Set this to restrict access to a subset of LDAP users belonging to a particular group. If this is not set, any user to successfully authenticate via LDAP will have access to the PUBLIC role and any other roles assigned to them via dbms.security.ldap.authorization.group_to_role_mapping."

149

"dbms.security.ldap.authorization.group_membership_attributes""memberOf"true"memberOf""A list of attribute names on a user object that contains groups to be used for mapping to roles when LDAP authorization is enabled. This setting is ignored when `dbms.ldap_authorization_nested_groups_enabled` is `true`."

150

"dbms.security.ldap.authorization.group_to_role_mapping"""true"""An authorization mapping from LDAP group names to Neo4j role names. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the LDAP group name and the value is a comma separated list of corresponding role names. For example: group1=role1;group2=role2;group3=role3,role4,role5 You could also use whitespaces and quotes around group names to make this mapping more readable, for example: ---- dbms.security.ldap.authorization.group_to_role_mapping=\ "cn=Neo4j Read Only,cn=users,dc=example,dc=com" = reader; \ "cn=Neo4j Read-Write,cn=users,dc=example,dc=com" = publisher; \ "cn=Neo4j Schema Manager,cn=users,dc=example,dc=com" = architect; \ "cn=Neo4j Administrator,cn=users,dc=example,dc=com" = admin ----"

151

"dbms.security.ldap.authorization.nested_groups_enabled""false"true"false""This setting determines whether multiple LDAP search results will be processed (as is required for the lookup of nested groups). If set to `true` then instead of using attributes on the user object to determine group membership (as specified by `dbms.security.ldap.authorization.group_membership_attributes`), the `user` object will only be used to determine the user's Distinguished Name, which will subsequently be used with `dbms.security.ldap.authorization.user_search_filter` in order to perform a nested group search. The Distinguished Names of the resultant group search results will be used to determine roles."

152

"dbms.security.ldap.authorization.nested_groups_search_filter""(&(objectclass=group)(member:1.2.840.113556.1.4.1941:={0}))"true"(&(objectclass=group)(member:1.2.840.113556.1.4.1941:={0}))""The search template which will be used to find the nested groups which the user is a member of. The filter should contain the placeholder token `{0}` which will be substituted with the user's Distinguished Name (which is found for the specified user principle using `dbms.security.ldap.authorization.user_search_filter`). The default value specifies Active Directory's LDAP_MATCHING_RULE_IN_CHAIN (aka 1.2.840.113556.1.4.1941) implementation which will walk the ancestry of group membership for the specified user."

153

"dbms.security.ldap.authorization.system_password"nullfalsenull"An LDAP system account password to use for authorization searches when `dbms.security.ldap.authorization.use_system_account` is `true`."

154

"dbms.security.ldap.authorization.system_username"nullfalsenull"An LDAP system account username to use for authorization searches when `dbms.security.ldap.authorization.use_system_account` is `true`. Note that the `dbms.security.ldap.authentication.user_dn_template` will not be applied to this username, so you may have to specify a full DN."

155

"dbms.security.ldap.authorization.use_system_account""false"false"false""Perform LDAP search for authorization info using a system account instead of the user's own account. If this is set to `false` (default), the search for group membership will be performed directly after authentication using the LDAP context bound with the user's own account. The mapped roles will be cached for the duration of `dbms.security.auth_cache_ttl`, and then expire, requiring re-authentication. To avoid frequently having to re-authenticate sessions you may want to set a relatively long auth cache expiration time together with this option. NOTE: This option will only work if the users are permitted to search for their own group membership attributes in the directory. If this is set to `true`, the search will be performed using a special system account user with read access to all the users in the directory. You need to specify the username and password using the settings `dbms.security.ldap.authorization.system_username` and `dbms.security.ldap.authorization.system_password` with this option. Note that this account only needs read access to the relevant parts of the LDAP directory and does not need to have access rights to Neo4j, or any other systems."

156

"dbms.security.ldap.authorization.user_search_base""ou=users,dc=example,dc=com"true"ou=users,dc=example,dc=com""The name of the base object or named context to search for user objects when LDAP authorization is enabled. A common case is that this matches the last part of `dbms.security.ldap.authentication.user_dn_template`."

157

"dbms.security.ldap.authorization.user_search_filter""(&(objectClass=*)(uid={0}))"true"(&(objectClass=*)(uid={0}))""The LDAP search filter to search for a user principal when LDAP authorization is enabled. The filter should contain the placeholder token {0} which will be substituted for the user principal."

158

"dbms.security.ldap.connection_timeout""30s"false"30s""The timeout for establishing an LDAP connection. If a connection with the LDAP server cannot be established within the given time the attempt is aborted. A value of 0 means to use the network protocol's (i.e., TCP's) timeout value."

159

"dbms.security.ldap.host""localhost"false"localhost""URL of LDAP server to use for authentication and authorization. The format of the setting is `<protocol>://<hostname>:<port>`, where hostname is the only required field. The supported values for protocol are `ldap` (default) and `ldaps`. The default port for `ldap` is 389 and for `ldaps` 636. For example: `ldaps://ldap.example.com:10389`. You may want to consider using STARTTLS (`dbms.security.ldap.use_starttls`) instead of LDAPS for secure connections, in which case the correct protocol is `ldap`."

160

"dbms.security.ldap.read_timeout""30s"false"30s""The timeout for an LDAP read request (i.e. search). If the LDAP server does not respond within the given time the request will be aborted. A value of 0 means wait for a response indefinitely."

161

"dbms.security.ldap.referral""follow"false"follow""The LDAP referral behavior when creating a connection. This is one of `follow`, `ignore` or `throw`. * `follow` automatically follows any referrals * `ignore` ignores any referrals * `throw` throws an exception, which will lead to authentication failure"

162

"dbms.security.ldap.use_starttls""false"false"false""Use secure communication with the LDAP server using opportunistic TLS. First an initial insecure connection will be made with the LDAP server, and a STARTTLS command will be issued to negotiate an upgrade of the connection to TLS before initiating authentication."

163

"dbms.security.log_successful_authentication""true"false"true""Set to log successful authentication events to the security log. If this is set to `false` only failed authentication events will be logged, which could be useful if you find that the successful events spam the logs too much, and you do not require full auditing capability."

164

"dbms.security.logs.ldap.groups_at_debug_level_enabled""false"false"false""When set to `true`, will log the groups retrieved from the ldap server. This will only take effect when the security log level is set to `DEBUG`.WARNING: It is strongly advised that this is set to `false` when running in a production environment in order to prevent logging of sensitive information."

165

"dbms.security.logs.oidc.jwt_claims_at_debug_level_enabled""false"false"false""When set to `true`, will log the claims from the JWT. This will only take effect when the security log level is set to `DEBUG`.WARNING: It is strongly advised that this is set to `false` when running in a production environment in order to preventlogging of sensitive information. Please also note that the contents of the JWT claims set can change over time because they are dependent entirely upon the ID provider."

166

"dbms.security.procedures.allowlist""*"false"*""A list of procedures (comma separated) that are to be loaded. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. If this setting is left empty no procedures will be loaded."

167

"dbms.security.procedures.unrestricted""jwt.security.*"false"""A list of procedures and user defined functions (comma separated) that are allowed full access to the database. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. Note that this enables these procedures to bypass security. Use with caution."

168

"dbms.usage_report.enabled""true"false"true""Anonymous Usage Data reporting."

169

"initial.dbms.automatically_enable_free_servers""false"false"false""Automatically enable free servers"

170

"initial.dbms.database_allocator""EQUAL_NUMBERS"false"EQUAL_NUMBERS""Name of the initial database allocator. After the creation of the dbms it can be set with the 'dbms.setDatabaseAllocator' procedure."

171

"initial.dbms.default_database""neo4j"false"neo4j""Name of the default database (aliases are not supported)."

172

"initial.dbms.default_primaries_count""1"false"1""Initial default number of primary instances of user databases. If the user does not specify the number of primaries in 'CREATE DATABASE', this value will be used, unless it is overwritten with the 'dbms.setDefaultAllocationNumbers' procedure."

173

"initial.dbms.default_secondaries_count""0"false"0""Initial default number of secondary instances of user databases. If the user does not specify the number of secondaries in 'CREATE DATABASE', this value will be used, unless it is overwritten with the 'dbms.setDefaultAllocationNumbers' procedure."

174

"initial.server.allowed_databases"""false"""The names of databases that are allowed on this server - all others are denied. Empty means all are allowed. Can be overridden when enabling the server, or altered at runtime, without changing this setting. Exclusive with 'initial.server.denied_databases'"

175

"initial.server.denied_databases"""false"""The names of databases that are not allowed on this server. Empty means nothing is denied. Can be overridden when enabling the server, or altered at runtime, without changing this setting. Exclusive with 'initial.server.allowed_databases'"

176

"initial.server.mode_constraint""NONE"false"NONE""An instance can restrict itself to allow databases to be hosted only as primaries or secondaries. This setting is the default input for the `ENABLE SERVER` command - the user can overwrite it when executing the command."

177

"initial.server.tags"""false"""A list of tag names for the server used when configuring load balancing and replication policies.This setting is the default input for the `ENABLE SERVER` command - the user can overwrite it when executing the command."

178

"server.backup.enabled""false"false"true""Enable support for running online backups."

179

"server.backup.exec_connector.command"""false"""Command to execute for ExecDataConnector list"

180

"server.backup.exec_connector.scheme"""false"""Schemes ExecDataConnector will match on"

181

"server.backup.listen_address""127.0.0.1:6362"false"127.0.0.1:6362""Network interface and port for the backup server to listen on."

182

"server.backup.store_copy_max_retry_time_per_request""20m"false"20m""Maximum retry time per request during store copy. Regular store files and indexes are downloaded in separate requests during store copy. This configures the maximum time failed requests are allowed to resend. "

183

"server.bolt.advertised_address""localhost:7687"false":7687""Advertised address for this connector"

184

"server.bolt.connection_keep_alive""1m"false"1m""The maximum time to wait before sending a NOOP on connections waiting for responses from active ongoing queries. The minimum value is 1 millisecond."

185

"server.bolt.connection_keep_alive_for_requests""ALL"false"ALL""The type of messages to enable keep-alive messages for (ALL, STREAMING or OFF)"

186

"server.bolt.connection_keep_alive_probes""2"false"2""The total amount of probes to be missed before a connection is considered stale.The minimum for this value is 1."

187

"server.bolt.connection_keep_alive_streaming_scheduling_interval""1m"false"1m""The interval between every scheduled keep-alive check on all connections with active queries. Zero duration turns off keep-alive service."

188

"server.bolt.enable_network_error_accounting""true"false"true""Enables accounting based reporting of benign errors within the Bolt stack (when enabled, benign errors are reported only when such events occur with unusual frequency. Otherwise, all benign network errors will be reported)"

189

"server.bolt.enabled""true"false"true""Enable the bolt connector"

190

"server.bolt.listen_address""localhost:7687"false":7687""Address the connector should bind to"

191

"server.bolt.network_abort_clear_window_duration""10m"false"10m""The duration for which network related connection aborts need to remain at a reasonable level before the error is cleared"

192

"server.bolt.network_abort_warn_threshold""2"false"2""The maximum amount of network related connection aborts permitted within a given window before emitting log messages (a value of zero reverts to legacy warning behavior)"

193

"server.bolt.network_abort_warn_window_duration""10m"false"10m""The duration of the window in which network related connection aborts are sampled"

194

"server.bolt.ocsp_stapling_enabled""false"false"false""Enable server OCSP stapling for bolt and http connectors."

195

"server.bolt.telemetry.enabled""false"false"false""Enable the collection of driver telemetry."

196

"server.bolt.thread_pool_keep_alive""5m"false"5m""The maximum time an idle thread in the thread pool bound to this connector will wait for new tasks."

197

"server.bolt.thread_pool_max_size""400"false"400""The maximum number of threads allowed in the thread pool bound to this connector."

198

"server.bolt.thread_pool_min_size""5"false"5""The number of threads to keep in the thread pool bound to this connector, even if they are idle."

199

"server.bolt.thread_starvation_clear_window_duration""10m"false"10m""The duration for which unscheduled requests need to remain at a reasonable level before the error is cleared"

200

"server.bolt.thread_starvation_warn_threshold""2"false"2""The maximum amount of unscheduled requests permitted during thread starvation events within a given window before emitting log messages"

201

"server.bolt.thread_starvation_warn_window_duration""10m"false"10m""The duration of the window in which unscheduled requests are sampled"

202

"server.bolt.tls_level""DISABLED"false"DISABLED""Encryption level to require this connector to use"

203

"server.bolt.traffic_accounting_check_period""5m"false"5m""Amount of time spent between samples of current traffic usage (lower values result in more accurate reporting while incurring a higher performance penalty; a value of zero disables traffic accounting)"

204

"server.bolt.traffic_accounting_clear_duration""10m"false"10m""Time required to be spent below the configured traffic threshold in order to clear traffic warnings"

205

"server.bolt.traffic_accounting_incoming_threshold_mbps""950"false"950""Maximum permitted incoming traffic within a configured accounting check window before emitting a warning (in Mbps)"

206

"server.bolt.traffic_accounting_outgoing_threshold_mbps""950"false"950""Maximum permitted outgoing traffic within a configured accounting check window before emitting a warning (in Mbps)"

207

"server.cluster.advertised_address""localhost:6000"false":6000""Advertised hostname/IP address and port for the transaction shipping server."

208

"server.cluster.catchup.connect_randomly_to_server_group"""true"""Comma separated list of groups to be used by the connect-randomly-to-server-group selection strategy. The connect-randomly-to-server-group strategy is used if the list of strategies (`server.cluster.catchup.upstream_strategy`) includes the value `connect-randomly-to-server-group`."

209

"server.cluster.catchup.connect_randomly_to_server_tags"""true"""Comma separated list of tags to be used by the connect-randomly-to-server-with-tag selection strategy. The connect-randomly-to-server-with-tag strategy is used if the list of strategies (`server.cluster.catchup.upstream_strategy`) includes the value `connect-randomly-to-server-with-tag`."

210

"server.cluster.catchup.upstream_strategy"""false"""An ordered list in descending preference of the strategy which secondaries use to choose the upstream server from which to pull transactional updates. If none are valid or the list is empty, there is a default strategy of `typically-connect-to-random-secondary`."

211

"server.cluster.catchup.user_defined_upstream_strategy"""false"""Configuration of a user-defined upstream selection strategy. The user-defined strategy is used if the list of strategies (`server.cluster.catchup.upstream_strategy`) includes the value `user_defined`."

212

"server.cluster.listen_address""localhost:6000"false":6000""Network interface and port for the transaction shipping server to listen on. Please note that it is also possible to run the backup client against this port so always limit access to it via the firewall and configure an ssl policy."

213

"server.cluster.network.native_transport_enabled""true"false"true""Use native transport if available. Epoll for Linux or Kqueue for MacOS/BSD. If this setting is set to false, or if native transport is not available, Nio transport will be used."

214

"server.cluster.raft.advertised_address""localhost:7000"false":7000""Advertised hostname/IP address and port for the RAFT server."

215

"server.cluster.raft.listen_address""localhost:7001"false":7000""Network interface and port for the RAFT server to listen on."

216

"server.cluster.system_database_mode""PRIMARY"false"PRIMARY""Users must manually specify the mode for the system database on each instance."

217

"server.config.strict_validation.enabled""true"false"true""A strict configuration validation will prevent the database from starting up if unknown configuration options are specified in the neo4j settings namespace (such as dbms., cypher., etc) or if settings are declared multiple times."

218

"server.cypher.parallel.worker_limit""0"false"0""Number of threads to allocate to Cypher worker threads for the parallel runtime. If set to a positive number, that number of workers will be started. If set to 0, one worker will be started for every logical processor available to the Java Virtual Machine. If set to a negative number, the total number of logical processors available on the server will be reduced by the absolute value of that number. For example, if the server has 16 available processors and you set `server.cypher.parallel.worker_limit` to `-1`, the parallel runtime will have 15 threads available."

219

"server.databases.default_to_read_only""false"true"false""Whether or not any database on this instance are read_only by default. If false, individual databases may be marked as read_only using server.database.read_only. If true, individual databases may be marked as writable using server.databases.writable."

220

"server.databases.read_only"""true"""List of databases for which to prevent write queries. Databases not included in this list maybe read_only anyway depending upon the value of server.databases.default_to_read_only."

221

"server.databases.writable"""true"""List of databases for which to allow write queries. Databases not included in this list will allow write queries anyway, unless server.databases.default_to_read_only is set to true."

222

"server.db.query_cache_size""1000"false"1000""The number of cached queries per database. Use `server.memory.query_cache.per_db_cache_num_entries` instead."

223

"server.default_advertised_address""localhost"false"localhost""Default hostname or IP address the server uses to advertise itself."

224

"server.default_listen_address""localhost"false"localhost""Default network interface to listen for incoming connections. To listen for connections on all interfaces, use "0.0.0.0". "

225

"server.directories.cluster_state""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/data/cluster-state"false"cluster-state""Directory to hold cluster state including Raft log"

226

"server.directories.data""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/data"false"data""Path of the data directory. You must not configure more than one Neo4j installation to use the same data directory."

227

"server.directories.dumps.root""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/data/dumps"false"dumps""Root location where Neo4j will store database dumps optionally produced when dropping said databases."

228

"server.directories.import""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/import"falsenull"Sets the root directory for file URLs used with the Cypher `LOAD CSV` clause. This should be set to a directory relative to the Neo4j installation path, restricting access to only those files within that directory and its subdirectories. For example the value "import" will only enable access to files within the 'import' folder. Removing this setting will disable the security feature, allowing all files in the local system to be imported. Setting this to an empty field will allow access to all files within the Neo4j installation folder."

229

"server.directories.lib""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/lib"false"lib""Path of the lib directory"

230

"server.directories.licenses""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/licenses"false"licenses""Path of the licenses directory."

231

"server.directories.logs""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/logs"false"logs""Path of the logs directory."

232

"server.directories.metrics""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/metrics"false"metrics""The target location of the CSV files: a path to a directory wherein a CSV file per reported field will be written."

233

"server.directories.neo4j_home""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b"false"/""Root relative to which directory settings are resolved. Calculated and set by the server on startup. Defaults to the current working directory."

234

"server.directories.plugins""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/plugins"false"plugins""Location of the database plugin directory. Compiled Java JAR files that contain database procedures will be loaded if they are placed in this directory."

235

"server.directories.run""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/run"false"run""Path of the run directory. This directory holds Neo4j's runtime state, such as a pidfile when it is running in the background. The pidfile is created when starting neo4j and removed when stopping it. It may be placed on an in-memory filesystem such as tmpfs."

236

"server.directories.script.root""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/data/scripts"false"scripts""Root location where Neo4j will store scripts for configured databases."

237

"server.directories.transaction.logs.root""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/data/transactions"false"transactions""Root location where Neo4j will store transaction logs for configured databases."

238

"server.discovery.advertised_address""localhost:5000"false":5000""Advertised cluster member discovery management communication."

239

"server.discovery.listen_address""localhost:5001"false":5000""Host and port to bind the cluster member discovery management communication."

240

"server.dynamic.setting.allowlist""*"false"*""A list of setting name patterns (comma separated) that are allowed to be dynamically changed. The list may contain both full setting names, and partial names with the wildcard '*'. If this setting is left empty all dynamic settings updates will be blocked."

241

"server.groups"""false"""A list of tag names for the server used when configuring load balancing and replication policies."

242

"server.http.advertised_address""localhost:7474"false":7474""Advertised address for this connector"

243

"server.http.enabled""true"false"true""Enable the http connector"

244

"server.http.listen_address""localhost:7474"false":7474""Address the connector should bind to"

245

"server.http.transaction_idle_timeout""30s"false"30s""Timeout for idle transactions in the HTTP Server. Note: this is different from 'db.transaction.timeout' which will timeout the underlying transaction."

246

"server.http_enabled_modules""TRANSACTIONAL_ENDPOINTS,UNMANAGED_EXTENSIONS,BROWSER,ENTERPRISE_MANAGEMENT_ENDPOINTS"false"TRANSACTIONAL_ENDPOINTS,UNMANAGED_EXTENSIONS,BROWSER,ENTERPRISE_MANAGEMENT_ENDPOINTS""Defines the set of modules loaded into the Neo4j web server. The enterprise management endpoints are only available in the enterprise edition."

247

"server.http_enabled_transports""HTTP1_1,HTTP2"false"HTTP1_1,HTTP2""Defines the set of transports available on the HTTP server"

248

"server.https.advertised_address""localhost:7473"false":7473""Advertised address for this connector"

249

"server.https.enabled""false"false"false""Enable the https connector"

250

"server.https.listen_address""localhost:7473"false":7473""Address the connector should bind to"

251

"server.jvm.additional""-XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch -XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC -XX:-RestrictContended -Djdk.nio.maxCachedBufferSize=1024 -Dio.netty.tryReflectionSetAccessible=true -Djdk.tls.ephemeralDHKeySize=2048 -Djdk.tls.rejectClientInitiatedRenegotiation=true -XX:FlightRecorderOptions=stackdepth=256 -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -Dlog4j2.disable.jmx=true"falsenull"Additional JVM arguments. Please note that argument order can be significant."

252

"server.logs.config""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/conf/server-logs.xml"false"conf/server-logs.xml""Path to the logging configuration for debug, query, http and security logs."

253

"server.logs.debug.enabled""true"false"true""Enable the debug log."

254

"server.logs.gc.enabled""false"false"false""Enable GC Logging"

255

"server.logs.gc.options""-Xlog:gc*,safepoint,age*=trace"false"-Xlog:gc*,safepoint,age*=trace""GC Logging Options"

256

"server.logs.gc.rotation.keep_number""5"false"5""Number of GC logs to keep."

257

"server.logs.gc.rotation.size""20.00MiB"false"20.00MiB""Size of each GC log that is kept."

258

"server.logs.user.config""/Users/chenlian/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-ae162aeb-f4ba-474b-b7e4-27e73a8eb24b/conf/user-logs.xml"false"conf/user-logs.xml""Path to the logging configuration of user logs."

259

"server.max_databases""100"false"100""The maximum number of databases. Use dbms.max_databases instead."

260

"server.memory.heap.initial_size""512.00MiB"falsenull"Initial heap size. By default it is calculated based on available system resources."

261

"server.memory.heap.max_size""1.00GiB"falsenull"Maximum heap size. By default it is calculated based on available system resources."

262

"server.memory.off_heap.block_cache_size""128"false"128""Defines the size of the off-heap memory blocks cache. The cache will contain this number of blocks for each block size that is power of two. Thus, maximum amount of memory used by blocks cache can be calculated as 2 * server.memory.off_heap.max_cacheable_block_size * server.memory.off_heap.block_cache_size"

263

"server.memory.off_heap.max_cacheable_block_size""512.00KiB"false"512.00KiB""Defines the maximum size of an off-heap memory block that can be cached to speed up allocations. The value must be a power of 2."

264

"server.memory.off_heap.transaction_max_size""2.00GiB"false"2.00GiB""The maximum amount of off-heap memory that can be used to store transaction state data; it's a total amount of memory shared across all active transactions. Zero means 'unlimited'. Used when db.tx_state.memory_allocation is set to 'OFF_HEAP'."

265

"server.memory.pagecache.directio""false"false"false""Use direct I/O for page cache. Setting is supported only on Linux and only for a subset of record formats that use platform aligned page size."

266

"server.memory.pagecache.flush.buffer.enabled""false"true"false""Page cache can be configured to use a temporal buffer for flushing purposes. It is used to combine, if possible, sequence of several cache pages into one bigger buffer to minimize the number of individual IOPS performed and better utilization of available I/O resources, especially when those are restricted."

267

"server.memory.pagecache.flush.buffer.size_in_pages""128"true"128""Page cache can be configured to use a temporal buffer for flushing purposes. It is used to combine, if possible, sequence of several cache pages into one bigger buffer to minimize the number of individual IOPS performed and better utilization of available I/O resources, especially when those are restricted. Use this setting to configure individual file flush buffer size in pages (8KiB). To be able to utilize this buffer during page cache flushing, buffered flush should be enabled."

268

"server.memory.pagecache.scan.prefetchers""4"false"4""The maximum number of worker threads to use for pre-fetching data when doing sequential scans. Set to '0' to disable pre-fetching for scans."

269

"server.memory.pagecache.size""512.00MiB"falsenull"The amount of memory to use for mapping the store files. If Neo4j is running on a dedicated server, then it is generally recommended to leave about 2-4 gigabytes for the operating system, give the JVM enough heap to hold all your transaction state and query context, and then leave the rest for the page cache. If no page cache memory is configured, then a heuristic setting is computed based on available system resources. By default, the size of page cache is 50% of available RAM minus the max heap size (but not larger than 70x the max heap size, due to some overhead of the page cache in the heap)."

270

"server.memory.query_cache.per_db_cache_num_entries""1000"true"1000""The number of cached queries per database. The max number of queries that can be kept in a cache is `number of databases` * `server.memory.query_cache.per_db_cache_num_entries`. With 10 databases and `server.memory.query_cache.per_db_cache_num_entries`=1000, the cache can keep 10000 plans in total. This setting is only deciding cache size when `server.memory.query_cache.sharing_enabled` is set to `false`."

271

"server.memory.query_cache.shared_cache_num_entries""1000"true"1000""The number of cached queries for all databases. The max number of queries that can be kept in a cache is exactly `server.memory.query_cache.shared_cache_num_entries`. This setting is only deciding cache size when `server.memory.query_cache.sharing_enabled` is set to `true`."

272

"server.memory.query_cache.sharing_enabled""false"false"false""Enable sharing cache space between different databases. With this option turned on, databases will share cache space, but not cache entries. This means that a database may store and retrieve entries from the shared cache, but it may not retrieve entries produced by another database. The database may, however, evict entries from other databases as necessary, according to the constrained cache size and cache eviction policy. In essence, databases may compete for cache space, but may not observe each others entries. When this option is turned on, the cache space available to all databases is configured with `server.memory.query_cache.shared_cache_num_entries`. With this option turned off, the cache space available to each individual database is configured with `server.memory.query_cache.per_db_cache_num_entries`."

273

"server.metrics.csv.enabled""true"false"true""Set to `true` to enable exporting metrics to CSV files"

274

"server.metrics.csv.interval""30s"false"30s""The reporting interval for the CSV files. That is, how often new rows with numbers are appended to the CSV files."

275

"server.metrics.csv.rotation.compression""ZIP"false"NONE""Decides what compression to use for the csv history files."

276

"server.metrics.csv.rotation.keep_number""7"false"7""Maximum number of history files for the csv files."

277

"server.metrics.csv.rotation.size""10.00MiB"false"10.00MiB""The file size in bytes at which the csv files will auto-rotate. If set to zero then no rotation will occur. Accepts a binary suffix `k`, `m` or `g`."

278

"server.metrics.enabled""true"false"true""Enable metrics. Setting this to `false` will to turn off all metrics."

279

"server.metrics.filter""*bolt.connections*,*bolt.messages_received*,*bolt.messages_started*,*dbms.pool.bolt.free,*dbms.pool.bolt.total_size,*dbms.pool.bolt.total_used,*dbms.pool.bolt.used_heap,*cluster.raft.is_leader,*cluster.raft.last_leader_message,*cluster.raft.replication_attempt,*cluster.raft.replication_fail,*cluster.raft.last_applied,*cluster.raft.last_appended,*cluster.raft.append_index,*cluster.raft.commit_index,*cluster.raft.applied_index,*check_point.*,*cypher.replan_events,*ids_in_use*,*pool.transaction.*.total_used,*pool.transaction.*.used_heap,*pool.transaction.*.used_native,*store.size*,*transaction.active_read,*transaction.active_write,*transaction.committed*,*transaction.last_committed_tx_id,*transaction.peak_concurrent,*transaction.rollbacks*,*page_cache.hit*,*page_cache.page_faults,*page_cache.usage_ratio,*vm.file.descriptors.count,*vm.gc.time.*,*vm.heap.used,*vm.memory.buffer.direct.used,*vm.memory.pool.g1_eden_space,*vm.memory.pool.g1_old_gen,*vm.pause_time,*vm.thread*,*db.query.execution*"false"*bolt.connections*,*bolt.messages_received*,*bolt.messages_started*,*dbms.pool.bolt.free,*dbms.pool.bolt.total_size,*dbms.pool.bolt.total_used,*dbms.pool.bolt.used_heap,*cluster.raft.is_leader,*cluster.raft.last_leader_message,*cluster.raft.replication_attempt,*cluster.raft.replication_fail,*cluster.raft.last_applied,*cluster.raft.last_appended,*cluster.raft.append_index,*cluster.raft.commit_index,*cluster.raft.applied_index,*check_point.*,*cypher.replan_events,*ids_in_use*,*pool.transaction.*.total_used,*pool.transaction.*.used_heap,*pool.transaction.*.used_native,*store.size*,*transaction.active_read,*transaction.active_write,*transaction.committed*,*transaction.last_committed_tx_id,*transaction.peak_concurrent,*transaction.rollbacks*,*page_cache.hit*,*page_cache.page_faults,*page_cache.usage_ratio,*vm.file.descriptors.count,*vm.gc.time.*,*vm.heap.used,*vm.memory.buffer.direct.used,*vm.memory.pool.g1_eden_space,*vm.memory.pool.g1_old_gen,*vm.pause_time
,*vm.thread*,*db.query.execution*""Specifies which metrics should be enabled by using a comma separated list of globbing patterns. Only the metrics matching the filter will be enabled. For example `\*check_point*,neo4j.page_cache.evictions` will enable any checkpoint metrics and the pagecache eviction metric."

280

"server.metrics.graphite.enabled""false"false"false""Set to `true` to enable exporting metrics to Graphite."

281

"server.metrics.graphite.interval""30s"false"30s""The reporting interval for Graphite. That is, how often to send updated metrics to Graphite."

282

"server.metrics.graphite.server""localhost:2003"false":2003""The hostname or IP address of the Graphite server"

283

"server.metrics.jmx.enabled""true"false"true""Set to `true` to enable the JMX metrics endpoint"

284

"server.metrics.prefix""neo4j"false"neo4j""A common prefix for the reported metrics field names."

285

"server.metrics.prometheus.enabled""false"false"false""Set to `true` to enable the Prometheus endpoint"

286

"server.metrics.prometheus.endpoint""localhost:2004"false"localhost:2004""The hostname and port to use as Prometheus endpoint"

287

"server.panic.shutdown_on_panic""false"false"false""If there is a server panic (an unrecoverable error) should the neo4j process shut down or continue running. Following a server panic it is likely that a significant amount of functionality will be lost. Recovering full functionality will require the Neo4j process to restart. This feature is available in Neo4j Enterprise Edition. Defaults to `false` except for Neo4j Enterprise Edition deployments running on Kubernetes where it is `true`."

288

"server.routing.advertised_address""localhost:7688"false":7688""The advertised address for the intra-cluster routing connector"

289

"server.routing.listen_address""localhost:7688"false":7688""The address the routing connector should bind to"

290

"server.threads.worker_count""12"false"12""Number of Neo4j worker threads. This setting is only valid for REST, and does not influence bolt-server. It sets the amount of worker threads for the Jetty server used by neo4j-server. This option can be tuned when you plan to execute multiple, concurrent REST requests, with the aim of getting more throughput from the database. By default, it is set to the number of available processors, or to 500 for machines with more than 500 processors. Your OS might enforce a lower limit than the maximum value specified here."

291

"server.unmanaged_extension_classes"""false"""Comma-separated list of <classname>=<mount point> for unmanaged extensions."

292

"server.windows_service_name""neo4j"