Spark to Cassandra Connection Configuration Guide

Translated, 2015-11-19 10:34:13

Cassandra Authentication Parameters

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
auth.conf.factory DefaultAuthConfFactory Name of a Scala module or class implementing AuthConfFactory providing custom authentication configuration
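
As a minimal sketch, authentication settings can be supplied through SparkConf before the SparkContext is created. The host address and credentials below are placeholders, and the spark.cassandra.auth.username / spark.cassandra.auth.password keys are an assumption based on the connector's default DefaultAuthConfFactory; verify them against your connector version.

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder host and credentials; replace with your own values.
val conf = new SparkConf()
  .setAppName("cassandra-auth-example")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  // The default DefaultAuthConfFactory is assumed to read these two keys;
  // a custom factory would instead be named via spark.cassandra.auth.conf.factory.
  .set("spark.cassandra.auth.username", "cassandra")
  .set("spark.cassandra.auth.password", "cassandra")

val sc = new SparkContext(conf)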

Cassandra Connection Parameters

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
connection.compression NONE Compression to use (LZ4, SNAPPY or NONE)
connection.factory DefaultConnectionFactory Name of a Scala module or class implementing CassandraConnectionFactory, providing connections to the Cassandra cluster
connection.host localhost Contact point to connect to the Cassandra cluster
connection.keep_alive_ms 250 Period of time to keep unused connections open
connection.local_dc None The local DC to connect to (other nodes will be ignored)
connection.port 9042 Cassandra native connection port
connection.reconnection_delay_ms.max 60000 Maximum period of time to wait before reconnecting to a dead node
connection.reconnection_delay_ms.min 1000 Minimum period of time to wait before reconnecting to a dead node
connection.timeout_ms 5000 Maximum period of time to attempt connecting to a node
query.retry.count 10 Number of times to retry a timed-out query
query.retry.delay 4 * 1.5 The delay between subsequent retries (can be constant, like 1000; linearly increasing, like 1000+100; or exponential, like 1000*2)
read.timeout_ms 120000 Maximum period of time to wait for a read to return
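
A quick illustration of setting the connection options above on the SparkConf. This is only a sketch; the contact point and the tuned values are arbitrary examples, not recommendations.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("cassandra-connection-example")
  .set("spark.cassandra.connection.host", "10.0.0.1")                  // contact point (example address)
  .set("spark.cassandra.connection.port", "9042")                       // native protocol port (default)
  .set("spark.cassandra.connection.timeout_ms", "10000")                // allow more time to connect
  .set("spark.cassandra.connection.reconnection_delay_ms.min", "1000")  // back off before retrying a dead node
  .set("spark.cassandra.connection.reconnection_delay_ms.max", "60000")
  .set("spark.cassandra.connection.compression", "LZ4")                 // optional transport compression

val sc = new SparkContext(conf)

The same keys can also be passed on the command line, for example spark-submit --conf spark.cassandra.connection.host=10.0.0.1, which keeps them out of the application code.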

Cassandra DataFrame Source Parameters

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
table.size.in.bytes None Used by DataFrames internally; will be updated in a future release to retrieve the size from C*. Can be set manually for now
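
A hedged sketch of reading a Cassandra table as a DataFrame through the connector's DataSource format (connector 1.4+); the keyspace test and table words are made-up names.

import org.apache.spark.sql.SQLContext

// Assumes `sc` is an existing SparkContext configured with spark.cassandra.connection.host.
// spark.cassandra.table.size.in.bytes can be set manually in the Spark configuration
// to help the planner estimate the table size until the connector retrieves it from C*.
val sqlContext = new SQLContext(sc)

val df = sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test", "table" -> "words"))
  .load()

df.show()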

Cassandra SQL Context Options

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
sql.cluster default Sets the default Cluster to inherit configuration from
sql.keyspace None Sets the default keyspace
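
A small sketch using CassandraSQLContext from the 1.x connector; the keyspace and table names are placeholders, and setKeyspace is assumed to be equivalent in effect to setting spark.cassandra.sql.keyspace.

import org.apache.spark.sql.cassandra.CassandraSQLContext

// Assumes `sc` is an existing SparkContext.
val cc = new CassandraSQLContext(sc)

// Assumed equivalent to setting spark.cassandra.sql.keyspace.
cc.setKeyspace("test")

// Unqualified table names now resolve against the "test" keyspace.
val rows = cc.sql("SELECT word, count FROM words")
rows.show()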

Cassandra SSL Connection Options

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
connection.ssl.enabled false Enable secure connection to Cassandra cluster
connection.ssl.enabledAlgorithms Set(TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA) SSL cipher suites
connection.ssl.protocol TLS SSL protocol
connection.ssl.trustStore.password None Trust store password
connection.ssl.trustStore.path None Path for the trust store being used
connection.ssl.trustStore.type JKS Trust store type
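
For completeness, a sketch of enabling SSL with the options above; the trust store path and password are placeholders.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("cassandra-ssl-example")
  .set("spark.cassandra.connection.host", "10.0.0.1")
  .set("spark.cassandra.connection.ssl.enabled", "true")
  .set("spark.cassandra.connection.ssl.trustStore.path", "/path/to/truststore.jks")  // placeholder path
  .set("spark.cassandra.connection.ssl.trustStore.password", "changeit")             // placeholder password
  .set("spark.cassandra.connection.ssl.trustStore.type", "JKS")
  .set("spark.cassandra.connection.ssl.protocol", "TLS")

val sc = new SparkContext(conf)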

Read Tuning Parameters

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
input.consistency.level LOCAL_ONE Consistency level to use when reading
input.fetch.size_in_rows 1000 Number of CQL rows fetched per driver request
input.metrics true Sets whether to record connector-specific metrics on read
input.split.size_in_mb 64 Approximate amount of data (in MB) to be fetched into a single Spark partition
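
A sketch of the read path with the tuning options above; the keyspace and table names are invented, and the values shown are examples rather than recommendations.

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds cassandraTable to SparkContext

val conf = new SparkConf()
  .setAppName("cassandra-read-tuning-example")
  .set("spark.cassandra.connection.host", "10.0.0.1")
  .set("spark.cassandra.input.split.size_in_mb", "32")       // smaller splits -> more Spark partitions
  .set("spark.cassandra.input.fetch.size_in_rows", "5000")   // rows fetched per driver request
  .set("spark.cassandra.input.consistency.level", "LOCAL_QUORUM")

val sc = new SparkContext(conf)

// Example table "test.words" with columns (word text, count int).
val rows = sc.cassandraTable("test", "words")
println(rows.count())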

Write Tuning Parameters

All parameters should be prefixed with spark.cassandra.

Property Name Default Description
output.batch.grouping.buffer.size 1000 How many batches per single Spark task can be stored in memory before sending to Cassandra
output.batch.grouping.key Partition Determines how insert statements are grouped into batches. Available values are:
  • none : a batch may contain any statements
  • replica_set : a batch may contain only statements to be written to the same replica set
  • partition : a batch may contain only statements for rows sharing the same partition key value
output.batch.size.bytes 1024 Maximum total size of the batch in bytes. Overridden by spark.cassandra.output.batch.size.rows
output.batch.size.rows None Number of rows per single batch. The default is 'auto', which means the connector will adjust the number of rows based on the amount of data in each row
output.concurrent.writes 5 Maximum number of batches executed in parallel by a single Spark task
output.consistency.level LOCAL_ONE Consistency level for writing
output.metrics true Sets whether to record connector specific metrics on write
output.throughput_mb_per_sec 2.147483647E9 Maximum write throughput allowed per single core in MB/s (floating-point values allowed). On long (8+ hour) runs, limit this to about 70% of the maximum throughput observed on a smaller job, for stability
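
And a matching sketch of the write path; again the table and the tuned numbers are illustrative only.

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds saveToCassandra to RDDs

val conf = new SparkConf()
  .setAppName("cassandra-write-tuning-example")
  .set("spark.cassandra.connection.host", "10.0.0.1")
  .set("spark.cassandra.output.batch.size.rows", "200")        // fixed rows per batch instead of 'auto'
  .set("spark.cassandra.output.concurrent.writes", "8")        // batches in flight per Spark task
  .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
  .set("spark.cassandra.output.throughput_mb_per_sec", "50")   // throttle long-running writes

val sc = new SparkContext(conf)

// Example table "test.words" with columns (word text, count int).
val data = sc.parallelize(Seq(("spark", 10), ("cassandra", 20)))
data.saveToCassandra("test", "words", SomeColumns("word", "count"))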
