When I was reading through the GeoTrellis series of articles a while back, I could never quite follow the part about publishing a GeoTrellis service. Recently it turned out my own project probably needs this feature too, so I decided to sit down and work through it properly~
First, my references:
1. http://www.cnblogs.com/shoufengwei/p/5422844.html
2. https://blog.csdn.net/xtfge0915/article/details/52587041
3. https://github.com/geotrellis/geotrellis-chatta-demo
4. https://github.com/geotrellis/geotrellis-examples/tree/be8707499bdf0d481396049d42d44492db7ec982
5. https://blog.csdn.net/weixin_36926779/article/details/79183083
At first I followed the first reference, @魏守峰's blog post.
Hmm, it looked pretty short, so it didn't seem too hard, but... what kind of syntax is this?? I had never seen anything like it.
I did a quick Baidu search for the odd-looking "IO(Http) ! Http.Bind(service, host, port)" and got a rough idea of what Akka is:
Scala's Akka framework ships a minimalist HTTP service component, created by folding the old spray framework into Akka and reworking it.
The Actor model is a concurrency model that is the exact opposite of shared memory: Actors share nothing. All threads (or processes) cooperate by passing messages, and those threads (or processes) are called Actors. Shared memory suits single-machine, multi-core concurrency, but sharing causes plenty of problems and makes programming hard. With the arrival of the multi-core and distributed era, the shared-memory model is no longer a good fit for concurrent programming, so the Actor model, which appeared decades ago, has attracted renewed attention. MapReduce is a typical example of the Actor style; Erlang, which supports Actors at the language level, has become popular again; Scala also provides Actors, though not as a language-level feature; Java has third-party Actor libraries; and Go's channel mechanism is an Actor-like model as well.
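To make the message-passing idea a little more concrete, here is a minimal classic-Akka actor sketch (just my own toy example, not something from the GeoTrellis demos):
import akka.actor.{Actor, ActorSystem, Props}

// An actor keeps its own state and only reacts to the messages sent to it.
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"hello, $name")   // handle one kind of message
  }
}

object ActorDemo extends App {
  val system = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter], "greeter")
  greeter ! "geotrellis"   // "!" sends a message asynchronously; nothing is shared
  system.terminate()
}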
OK... the theory still didn't fully click for me. In any case, I found a demo to run first, and after reading two demos side by side (geotrellis-chatta-demo and geotrellis-landsat-tutorial) I started to get a feel for it:
Take the second demo as an example:
Open Serve.scala; this block appears to be the key part:
def root =
pathPrefix(Segment / IntNumber / IntNumber / IntNumber) { (render, zoom, x, y) =>
complete {
Future {
// Read in the tile at the given z/x/y coordinates.
val tileOpt: Option[MultibandTile] =
try {
Some(Serve.reader(LayerId("landsat", zoom)).read(x, y))
} catch {
case _: ValueNotFoundError =>
None
}
render match {
case "ndvi" =>
tileOpt.map { tile =>
// Compute the NDVI
val ndvi =
tile.convert(DoubleConstantNoDataCellType).combineDouble(R_BAND, NIR_BAND) { (r, ir) =>
Calculations.ndvi(r, ir);
}
// Render as a PNG
val png = ndvi.renderPng(Serve.ndviColorMap)
pngAsHttpResponse(png)
}
case "ndwi" =>
tileOpt.map { tile =>
// Compute the NDWI
val ndwi =
tile.convert(DoubleConstantNoDataCellType).combineDouble(G_BAND, NIR_BAND) { (g, ir) =>
Calculations.ndwi(g, ir)
}
// Render as a PNG
val png = ndwi.renderPng(Serve.ndwiColorMap)
pngAsHttpResponse(png)
}
}
}
}
} ~
pathPrefix(Segment / IntNumber / IntNumber / IntNumber) { (render, zoom, x, y) =>
declares the request parameters. An ordinary TMS endpoint only needs the three parameters zoom, x and y; because this demo computes both NDVI and NDWI and renders them differently, there is an extra render parameter in front.
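If, like me, you have never seen this routing DSL before: each matcher in the path peels one segment off the URL and binds it to a parameter, in order. A tiny standalone sketch of just that (my own example, assuming nothing but akka-http on the classpath):
import akka.http.scaladsl.server.Directives._

// GET /ndvi/6/16/25  ->  render = "ndvi", zoom = 6, x = 16, y = 25
val echoRoute =
  pathPrefix(Segment / IntNumber / IntNumber / IntNumber) { (render, zoom, x, y) =>
    complete(s"render=$render zoom=$zoom x=$x y=$y")
  }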
Once the parameters are in hand, the reader goes to the corresponding backend and fetches the tile for the requested zoom level and grid cell:
Some(Serve.reader(LayerId("landsat", zoom)).read(x, y))
After the tile is fetched, it computes the NDVI:
val ndvi =
  tile.convert(DoubleConstantNoDataCellType).combineDouble(R_BAND, NIR_BAND) { (r, ir) =>
    Calculations.ndvi(r, ir)
  }
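Calculations.ndvi and Calculations.ndwi are just the standard band-ratio formulas. The helpers in the tutorial's Calculations.scala are essentially the following (paraphrased from memory, check the repo for the exact code):
import geotrellis.raster._

// NDVI = (NIR - Red) / (NIR + Red); NDWI = (Green - NIR) / (Green + NIR)
object Calculations {
  def ndvi(r: Double, ir: Double): Double =
    if (isData(r) && isData(ir)) (ir - r) / (ir + r) else Double.NaN

  def ndwi(g: Double, ir: Double): Double =
    if (isData(g) && isData(ir)) (g - ir) / (g + ir) else Double.NaN
}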
Then it renders the result:
val png = ndvi.renderPng(Serve.ndviColorMap)
And returns it to the front end; what is sent back is the PNG's byte array:
def pngAsHttpResponse(png: Png): HttpResponse =
HttpResponse(entity = HttpEntity(ContentType(MediaTypes.`image/png`), png.bytes))
With that, the service is defined. Then, in the main function:
val catalogPath = new java.io.File("data/catalog").getAbsolutePath
// Create a reader that will read in the indexed tiles we produced in IngestImage.
val fileValueReader = FileValueReader(catalogPath)
def reader(layerId: LayerId) = fileValueReader.reader[SpatialKey, MultibandTile](layerId)
val ndviColorMap =
ColorMap.fromStringDouble(ConfigFactory.load().getString("tutorial.ndviColormap")).get
val ndwiColorMap =
ColorMap.fromStringDouble(ConfigFactory.load().getString("tutorial.ndwiColormap")).get
override implicit val system = ActorSystem("tutorial-system")
override implicit val executor = system.dispatcher
override implicit val materializer = ActorMaterializer()
override val logger = Logging(system, getClass)
Http().bindAndHandle(root, "0.0.0.0", 8080)
What this does is define the tile reader and the color maps loaded from configuration, and set the service's name, bind address and port, and so on.
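For reference, the strings that ColorMap.fromStringDouble parses are "break:RRGGBBAA" pairs separated by semicolons; the values below are invented, the real ones live in the tutorial's application.conf:
import geotrellis.raster.render.ColorMap

// everything up to 0 red, up to 0.5 yellow, up to 1 green (made-up breaks and colors)
val exampleColorMap: ColorMap =
  ColorMap.fromStringDouble("0:ff0000ff;0.5:ffff00ff;1:00ff00ff").get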
Following that template, I wrote a service that publishes my own local data:
First, add the dependencies:
val akkaActorVersion = "2.4.17"
val akkaHttpVersion = "10.0.3"
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % akkaActorVersion,
"com.typesafe.akka" %% "akka-http-core" % akkaHttpVersion,
"com.typesafe.akka" %% "akka-http" % akkaHttpVersion,
"com.typesafe.akka" %% "akka-http-spray-json" % akkaHttpVersion
)
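My project already had GeoTrellis and Spark on the classpath from the ETL work; if yours does not, you would also need something along these lines (the artifact names are real, the versions are only a guess to match the Akka versions above):
libraryDependencies ++= Seq(
  "org.locationtech.geotrellis" %% "geotrellis-spark" % "1.1.1",
  "org.apache.spark" %% "spark-core" % "2.1.0" % Provided
)
Then the service code itself: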
package service
import etl.ETLOutputRender.colorRamp
import geotrellis.raster._
import geotrellis.spark._
import geotrellis.spark.io._
import geotrellis.spark.io.file._
import akka.actor._
import akka.event.{Logging, LoggingAdapter}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.http.scaladsl.server.Directives._
import akka.stream.{ActorMaterializer, Materializer}
import org.apache.hadoop.fs.Path
import scala.concurrent._
import com.typesafe.config.ConfigFactory
import geotrellis.raster.render.{ColorMap, Png}
import geotrellis.spark.io.hadoop.{HadoopAttributeStore, HadoopLayerReader}
object Serve extends App with Service {
val catalogPath = new java.io.File("D:\\IdeaProjects\\GeotrellisETL\\data\\DL3857_output3857").getAbsolutePath
val fileValueReader = FileValueReader(catalogPath)
def reader(layerId: LayerId) = fileValueReader.reader[SpatialKey, BitArrayTile](layerId)
override implicit val system = ActorSystem("tutorial-system")
override implicit val executor = system.dispatcher
override implicit val materializer = ActorMaterializer()
override val logger = Logging(system, getClass)
Http().bindAndHandle(root,"localhost", 8080)
}
trait Service {
implicit val system: ActorSystem
implicit def executor: ExecutionContextExecutor
implicit val materializer: Materializer
val logger: LoggingAdapter
def pngAsHttpResponse(png: Png): HttpResponse =
HttpResponse(entity = HttpEntity(ContentType(MediaTypes.`image/png`), png.bytes))
def root =
// http://localhost:8080/6/16/25
pathPrefix(IntNumber / IntNumber / IntNumber) { (zoom, x, y) =>
complete {
Future {
// Read in the tile at the given z/x/y coordinates.
val tileOpt: Option[BitArrayTile] =
try {
Some(Serve.reader(LayerId("dl", zoom)).read(x, y))
} catch {
case _: ValueNotFoundError =>
None
}
tileOpt.map { tile =>
// Render as a PNG
val png = tile.renderPng(colorRamp)
pngAsHttpResponse(png)
}
}
}
} ~
pathEndOrSingleSlash {
getFromFile("static/index.html")
} ~
pathPrefix("") {
getFromDirectory("static")
}
}
Then visit http://localhost:8080/6/16/25 and you can see the rendered tile~
It can also be loaded directly in OpenLayers:
var tileUrl='http://202.121.180.55:8081/{z}/{x}/{y}.png'
Note that GeoTrellis uses the same tiling scheme as Google (XYZ), so the z/x/y values can be used without any conversion.
It works, hehe~~ so happy.
Next I swapped the backend for HDFS and gave it a try:
First, during the ETL step, change the output path to an HDFS path:
"backend": {
"type": "hadoop",
"path":"hdfs://localhost:9000/geotrellis/ETLoutput"
},
Then all that is needed is to swap the reader for the corresponding Hadoop reader:
import org.apache.spark.SparkConf
import geotrellis.spark.util.SparkUtils
import geotrellis.spark.io.hadoop.HadoopValueReader
implicit val sc = SparkUtils.createSparkContext("GeoTrellis ETL SinglebandIngest", new SparkConf(true).setMaster("local"))
val hdfsUri = new Path("hdfs://localhost:9000/geotrellis/ETLoutput")
val hadoopValueReader: HadoopValueReader = HadoopValueReader(hdfsUri)
def reader(layerId: LayerId) = hadoopValueReader.reader[SpatialKey, Tile](layerId)
Note that a SparkContext has to be initialized here, since HadoopValueReader takes it implicitly. Everything else is the same as reading from the local filesystem.
You can also render differently depending on a request parameter:
def root =
// http://localhost:8081/1/6/16/25
pathPrefix(IntNumber / IntNumber / IntNumber / IntNumber) { (render, zoom, x, y) =>
complete{
Future {
// Read in the tile at the given z/x/y coordinates.
val tileOpt: Option[Tile] =
try {
Some(ServeHDFSRender.reader(LayerId("developland", zoom)).read(x, y))
} catch {
case _: ValueNotFoundError =>
None
}
tileOpt.map { tile =>
// Render as a PNG
// Pick the rendering according to the render parameter; fall back to the
// default greyscale rendering instead of wrapping a null byte array in a Png.
val png =
  if (render == 1) tile.renderPng(colorRamp)
  else tile.renderPng()
pngAsHttpResponse(png)
}
}
}
} ~
The final result:
This way the tile can be rendered with the colors the user specifies.
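If more render options are wanted, the render parameter could also be mapped onto GeoTrellis's built-in color ramps; a rough sketch of that idea (the mapping itself is my own invention, only colorRamp comes from my ETL code):
import geotrellis.raster.render.ColorRamps

// Hypothetical mapping from the render path parameter to built-in ramps.
def rampFor(render: Int) = render match {
  case 2 => ColorRamps.BlueToRed
  case 3 => ColorRamps.HeatmapBlueToYellowToRedSpectrum
  case _ => ColorRamps.greyscale(256)
}

val png =
  if (render == 1) tile.renderPng(colorRamp)    // keep the project's own ramp for render == 1
  else tile.renderPng(rampFor(render))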