I have a large table that keeps growing vertically. I want to read its rows in small batches, so that I can process each batch and save the results.
Table definition
CREATE TABLE foo (
uid timeuuid,
events blob,
PRIMARY KEY ((uid))
)
Code attempt 1 - using CassandraSQLContext
// Step 1. Get uuid of the last row in a batch
val max = 10
val rdd = sc.cassandraTable("foo", "bar")
var cassandraRows = rdd.take(max)
var lastUUID = cassandraRows.last.getUUID("uid");
// lastUUID = 131ea620-2e4e-11e4-a2fc-8d5aad979e84
// Step 2. Use last row as a pointer to the start of the next batch
val cc = new CassandraSQLContext(sc)
val cql = s"SELECT events from foo.bar where token(uid) > token($lastUUID) limit $max"
// which is at runtime
// SELECT events from foo.bar WHERE
// token(uid) > token(131ea620-2e4e-11e4-a2fc-8d5aad979e84) limit 10
cc.sql(cql).collect()
The last line throws
Exception in thread "main" java.lang.RuntimeException: [1.79] failure: ``)'' expected but identifier ea620 found

SELECT events from foo.bar where token(uid) > token(131ea620-2e4e-11e4-a2fc-8d5aad979e84) limit 10
                                                                              ^
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:33)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:79)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:79)
However, if I run the same CQL in cqlsh, it returns the correct 10 records.
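(The discrepancy is that `cc.sql(...)` is parsed by Spark SQL's Catalyst parser, which does not understand CQL-specific syntax such as the `token()` function or a bare timeuuid literal, while cqlsh speaks native CQL. A possible workaround, sketched here under the assumption that the DataStax connector's `CassandraConnector` is on the classpath, is to hand the raw CQL string directly to the Cassandra driver session instead of to Spark SQL:

```scala
import com.datastax.spark.connector.cql.CassandraConnector
import scala.collection.JavaConverters._

// Bypass Spark SQL entirely: execute the CQL through a driver session,
// which understands token() and timeuuid literals natively.
val connector = CassandraConnector(sc.getConf)
val eventBlobs = connector.withSessionDo { session =>
  session.execute(cql).all().asScala.map(_.getBytes("events"))
}
```

Note this runs on the driver as a single query, not as a distributed Spark job, so it only fits small batches like the 10-row pages used here.)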
Code attempt 2 - using the DataStax Cassandra connector
// Step 1. Get uuid of the last row in a batch
val max = 10
val rdd = sc.cassandraTable("foo", "bar")
var cassandraRows = rdd.take(max)
var lastUUID = cassandraRows.last.getUUID("uid");
// lastUUID = 131ea620-2e4e-11e4-a2fc-8d5aad979e84
// Step 2. Execute query
rdd.where(s"token(uid) > token($lastUUID)").take(max)
This throws
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.io.IOException: Exception during preparation of SELECT "uid", "events" FROM "foo"."bar" WHERE token("uid") > ? AND token("uid") <= ? AND uid > $lastUUID ALLOW FILTERING: line 1:118 no viable alternative at character '$'
How do I use `where token(...)` queries with Spark and Cassandra?
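(One avenue worth trying, sketched here as an assumption rather than a verified fix: the error shows the literal text `$lastUUID` reaching Cassandra, meaning the UUID was never substituted into the string. Rather than splicing the value in with string interpolation, the connector's `where` accepts bind markers with accompanying values, and the driver then knows how to encode a `java.util.UUID` as a timeuuid:

```scala
// Pass the uuid as a bound parameter (?) instead of interpolating it
// into the CQL string; the connector forwards the value to the driver,
// which serializes the UUID correctly for the timeuuid column.
val nextBatch = sc.cassandraTable("foo", "bar")
  .where("token(uid) > token(?)", lastUUID)
  .take(max)
```

If the installed connector version rejects a bind marker inside `token()`, the interpolated form `s"token(uid) > token($lastUUID)"` may also work, provided the `s` prefix is actually present so the substitution happens.)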