How to include files in the Play framework's production mode

An overview of my environment: Mac OS Yosemite, Play framework 2.3.7, sbt 0.13.7, IntelliJ IDEA 14, Java 1.8.0_25.

I am trying to run a simple Spark program inside the Play framework, so I created a Play 2 project in IntelliJ and changed a few files as follows:

app/controllers/Application.scala:

package controllers

import play.api._
import play.api.libs.iteratee.Enumerator
import play.api.mvc._

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }

  def trySpark = Action {
    Ok.chunked(Enumerator(utils.TrySpark.runSpark))
  }

}

app/utils/TrySpark.scala:

package utils

import org.apache.spark.{SparkContext, SparkConf}

object TrySpark {

  def runSpark: String = {
    val conf = new SparkConf().setAppName("trySpark").setMaster("local[4]")
    val sc = new SparkContext(conf)
    val data = sc.textFile("public/data/array.txt")
    val array = data.map(line => line.split(' ').map(_.toDouble))
    val sum = array.first().reduce((a, b) => a + b)
    sum.toString
  }

}

public/data/array.txt:

1 2 3 4 5 6 7

conf/routes:

GET     /              controllers.Application.index
GET     /spark         controllers.Application.trySpark
GET     /assets/*file  controllers.Assets.at(path="/public", file)

build.sbt:

name := "trySpark"

version := "1.0"

lazy val `tryspark` = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(jdbc, anorm, cache, ws,
  "org.apache.spark" % "spark-core_2.10" % "1.2.0")

unmanagedResourceDirectories in Test <+= baseDirectory ( _ / "target/web/public/test" )

I type activator run to start the application in development mode, then open localhost:9000/spark in the browser, and it shows the result 28 (the sum 1 + 2 + … + 7) as expected. However, when I run the application in production mode with activator start, it shows the following error message:

[info] play - Application started (Prod)
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] application -

! @6kik15fee - Internal server error, for (GET) [/spark] ->

play.api.Application$$anon$1: Execution exception[[InvalidInputException: Input path does not exist: file:/Path/to/my/project/target/universal/stage/public/data/array.txt]]
        at play.api.Application$class.handleError(Application.scala:296) ~[com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
        at play.api.DefaultApplication.handleError(Application.scala:402) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
        at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
        at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [org.scala-lang.scala-library-2.10.4.jar:na]
Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/Path/to/my/project/target/universal/stage/public/data/array.txt
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.2.0.jar:na]
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.2.0.jar:na]
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]

It seems that my array.txt file is not loaded in production mode. How can this be fixed?
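In dev mode the relative path public/data/array.txt happens to resolve against the project directory, but activator start packages the public folder into the assets jar, so nothing exists at target/universal/stage/public/data/array.txt on disk. One workaround I am considering is to move the file somewhere on the runtime classpath (for example conf/data/array.txt, since the distribution's start script puts conf on the classpath) and copy it to a temporary file before handing it to Spark, because sc.textFile needs a real filesystem path. This is only a sketch; ResourceLoader is a helper I made up, not part of Play's API:

package utils

import java.io.File
import java.nio.file.{Files, StandardCopyOption}

object ResourceLoader {

  // Copies a classpath resource to a temporary file and returns its absolute
  // path, so that SparkContext.textFile can read it from the local filesystem.
  // Sketch only: minimal error handling; the location conf/data/array.txt
  // (visible on the classpath as "data/array.txt") is an assumption.
  def asTempFile(resource: String): String = {
    val in = getClass.getClassLoader.getResourceAsStream(resource)
    require(in != null, s"Resource not found on classpath: $resource")
    val tmp = File.createTempFile("spark-input-", ".txt")
    tmp.deleteOnExit()
    Files.copy(in, tmp.toPath, StandardCopyOption.REPLACE_EXISTING)
    in.close()
    tmp.getAbsolutePath
  }

}

TrySpark.runSpark would then read the data with:

val data = sc.textFile(utils.ResourceLoader.asTempFile("data/array.txt"))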

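Alternatively, it might be possible to leave the file where it is and tell sbt-native-packager (which activator stage/start uses to build the distribution) to copy it into the staged directory, so that the relative path exists on disk in production. Something along these lines in build.sbt, although the exact import and key names depend on the plugin version bundled with Play 2.3:

import com.typesafe.sbt.SbtNativePackager._

// Assumption: this makes the packager copy public/data/array.txt into the
// distribution, so target/universal/stage/public/data/array.txt exists
// after staging.
mappings in Universal += {
  (baseDirectory.value / "public" / "data" / "array.txt") -> "public/data/array.txt"
}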