1. Add the spark-avro dependency
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-avro_2.10</artifactId>
    <version>2.0.1</version>
</dependency>
2. Read the Avro file
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SparkConf conf = new SparkConf();
conf.setAppName("SparkReadAvroTest");
JavaSparkContext sc = new JavaSparkContext(conf);
SQLContext sqlCon = new SQLContext(sc);
// Required when the input files do not end with the .avro extension
sqlCon.sparkContext().hadoopConfiguration().set("avro.mapred.ignore.inputs.without.extension", "false");
DataFrame df = sqlCon.read().format("com.databricks.spark.avro").load("hdfs://avro/test");
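Once loaded, the DataFrame behaves like any other Spark SQL data source. A small usage sketch follows; the temp table name avro_test is just an illustrative choice, not something from the original post:

df.printSchema();                   // schema is inferred from the Avro records
df.registerTempTable("avro_test");  // hypothetical temp table name
DataFrame result = sqlCon.sql("SELECT count(*) AS cnt FROM avro_test");
result.show();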
3. Note that Spark's HiveContext cannot read Hive tables whose data is stored in Avro format.
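One way around this is to skip HiveContext and point spark-avro directly at the table's data files on HDFS. A minimal sketch, assuming the table's files live under the hypothetical warehouse path /user/hive/warehouse/mydb.db/avro_table:

DataFrame hiveAvroDf = sqlCon.read()
        .format("com.databricks.spark.avro")
        .load("hdfs:///user/hive/warehouse/mydb.db/avro_table"); // hypothetical warehouse path
hiveAvroDf.registerTempTable("avro_table_direct");               // query via SQLContext instead of HiveContext
sqlCon.sql("SELECT count(*) FROM avro_table_direct").show();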