datastax java: How to use the asynchronous/batch write features of the DataStax Java driver

I am planning to use the DataStax Java driver for writing to Cassandra. I am mainly interested in the batch write and asynchronous features of the driver, but I cannot find any tutorial that explains how to incorporate them into my code below, which already uses the DataStax Java driver.

/**
 * Performs an upsert of the specified attributes for the specified id.
 */
public void upsertAttributes(final String userId, final Map attributes, final String columnFamily) {
    try {
        // build the CQL statement from the input parameters above
        String sql = sqlPart1.toString() + sqlPart2.toString();

        DatastaxConnection.getInstance();
        PreparedStatement prepStatement = DatastaxConnection.getSession().prepare(sql);
        prepStatement.setConsistencyLevel(ConsistencyLevel.ONE);

        BoundStatement query = prepStatement.bind(userId, attributes.values().toArray(new Object[attributes.size()]));
        DatastaxConnection.getSession().execute(query);
    } catch (InvalidQueryException e) {
        LOG.error("Invalid Query Exception in DatastaxClient::upsertAttributes " + e);
    } catch (Exception e) {
        LOG.error("Exception in DatastaxClient::upsertAttributes " + e);
    }
}

In the code below, I create the connection to the Cassandra nodes using the DataStax Java driver.

/**
 * Creates the Cassandra connection using the DataStax Java driver.
 */
private DatastaxConnection() {
    try {
        builder = Cluster.builder();
        builder.addContactPoint("some_nodes");
        builder.poolingOptions().setCoreConnectionsPerHost(
                HostDistance.LOCAL,
                builder.poolingOptions().getMaxConnectionsPerHost(HostDistance.LOCAL));

        cluster = builder
                .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
                .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
                .build();

        StringBuilder s = new StringBuilder();
        Set<Host> allHosts = cluster.getMetadata().getAllHosts();
        for (Host h : allHosts) {
            s.append("[");
            s.append(h.getDatacenter());
            s.append(h.getRack());
            s.append(h.getAddress());
            s.append("]");
        }
        System.out.println("Cassandra Cluster: " + s.toString());

        session = cluster.connect("testdatastaxks");
    } catch (NoHostAvailableException e) {
        e.printStackTrace();
        throw new RuntimeException(e);
    } catch (Exception e) {
        // swallowed; at a minimum this should be logged
    }
}

Can anybody help me add batch writes or asynchronous execution to the code above? Thanks for the help.

I am running Cassandra 1.2.9

Solution

For async it's as simple as using the executeAsync method:

...
DatastaxConnection.getSession().executeAsync(query);
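
executeAsync returns a ResultSetFuture right away, so the calling thread is not blocked; you then either block on the future when you actually need the result, or register a callback. A minimal sketch of both options, assuming query is the BoundStatement from the question and that your driver version exposes ResultSetFuture as a Guava ListenableFuture (it does in 2.0; on 1.0 you can always fall back to getUninterruptibly()):

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

// Fire the write without blocking the calling thread.
ResultSetFuture future = DatastaxConnection.getSession().executeAsync(query);

// Option 1: block later, only when you need confirmation of the write.
// getUninterruptibly() rethrows any driver exception (e.g. a write timeout).
ResultSet rs = future.getUninterruptibly();

// Option 2: never block; attach a callback via the Guava bundled with the driver.
Futures.addCallback(future, new FutureCallback<ResultSet>() {
    public void onSuccess(ResultSet result) {
        // the write was acknowledged at the requested consistency level
    }
    public void onFailure(Throwable t) {
        LOG.error("Asynchronous write failed", t);
    }
});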

For the batch, you need to build the query (I use strings because the compiler knows how to optimize string concatenation really well):

String cql = "BEGIN BATCH ";
cql += "INSERT INTO test.prepared (id, col_1) VALUES (?,?); ";
cql += "INSERT INTO test.prepared (id, col_1) VALUES (?,?); ";
cql += "APPLY BATCH;";

DatastaxConnection.getInstance();
PreparedStatement prepStatement = DatastaxConnection.getSession().prepare(cql);
prepStatement.setConsistencyLevel(ConsistencyLevel.ONE);

// This is where you need to be careful:
// bind expects a comma-separated list of values for all the params (?) above,
// so for the above batch we need to supply 4 params:
BoundStatement query = prepStatement.bind(userId, "col1_val", userId_2, "col1_val_2");

DatastaxConnection.getSession().execute(query);

On a side note, I think your binding of the statement might look something like this, assuming you change attributes to a list of maps where each map represents an update/insert inside the batch:

BoundStatement query = prepStatement.bind(userId,
        attributesList.get(0).values().toArray(new Object[attributes.size()]),
        userId_2,
        attributesList.get(1).values().toArray(new Object[attributes.size()]));
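
Putting the two features together, the upsertAttributes method from the question could fold several upserts into one batch and send it asynchronously. The sketch below keeps the question's DatastaxConnection helper and the test.prepared table from the example above; the two-row method signature and column names are purely illustrative, and in a real application you would prepare the batch statement once and reuse it rather than re-preparing on every call:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;

/**
 * Batched, asynchronous upsert of two rows (illustrative shape only).
 */
public ResultSetFuture upsertPairAsync(String userId1, String value1,
                                       String userId2, String value2) {
    // Two INSERTs folded into a single logged batch, as in the answer above.
    String cql = "BEGIN BATCH "
               + "INSERT INTO test.prepared (id, col_1) VALUES (?,?); "
               + "INSERT INTO test.prepared (id, col_1) VALUES (?,?); "
               + "APPLY BATCH;";

    PreparedStatement prepStatement = DatastaxConnection.getSession().prepare(cql);
    prepStatement.setConsistencyLevel(ConsistencyLevel.ONE);

    // One flat argument list: two values per INSERT, in the order the ?s appear.
    BoundStatement bound = prepStatement.bind(userId1, value1, userId2, value2);

    // Returns immediately; the caller can block on the future or attach a
    // callback exactly as shown for the single-statement case above.
    return DatastaxConnection.getSession().executeAsync(bound);
}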
