Java JavaSparkContext.newAPIHadoopRDD Method Code Examples

This article collects typical usage examples of the Java method org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD. If you are wondering what JavaSparkContext.newAPIHadoopRDD is for, how to call it, or want to see it used in real code, the curated examples below should help. You can also explore further usage examples for the enclosing class, org.apache.spark.api.java.JavaSparkContext.

Below are 20 code examples of JavaSparkContext.newAPIHadoopRDD, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
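All of the examples below follow the same pattern: they read a MongoDB collection through the mongo-hadoop connector by passing a Hadoop Configuration (referenced as the mongodbConfig field, which the snippets do not show) to JavaSparkContext.newAPIHadoopRDD together with MongoInputFormat and the Object/BSONObject key and value classes. For reference, here is a minimal, self-contained sketch of that shared setup; the connection URI, database, and collection names are placeholders and are not taken from the athena project.

import com.mongodb.hadoop.MongoInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.bson.BSONObject;

public class NewAPIHadoopRDDExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("newAPIHadoopRDD-demo").setMaster("local[*]"));

        // Hadoop Configuration consumed by newAPIHadoopRDD; the examples below
        // refer to an equivalent object through their mongodbConfig field.
        Configuration mongodbConfig = new Configuration();
        // mongo.input.uri tells MongoInputFormat which database/collection to read
        // (placeholder URI - adjust host, database, and collection for your setup).
        mongodbConfig.set("mongo.input.uri", "mongodb://localhost:27017/athena.features");

        // Read the collection as (ObjectId, BSONObject) pairs using the new Hadoop API.
        JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
                mongodbConfig,          // Configuration
                MongoInputFormat.class, // InputFormat: read from a live cluster.
                Object.class,           // Key class
                BSONObject.class        // Value class
        );

        System.out.println("Documents read: " + mongoRDD.count());
        sc.stop();
    }
}

The same four-argument call appears verbatim in every example that follows; only what is done with the resulting JavaPairRDD differs.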

Example 1: validateLassoAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public LassoValidationSummary validateLassoAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        LassoDetectionModel lassoDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    LassoDetectionAlgorithm lassoDetectionAlgorithm =
            (LassoDetectionAlgorithm) lassoDetectionModel.getDetectionAlgorithm();

    LassoValidationSummary lassoValidationSummary = new LassoValidationSummary();
    lassoValidationSummary.setLassoDetectionAlgorithm(lassoDetectionAlgorithm);

    LassoDistJob lassoDistJob = new LassoDistJob();
    lassoDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            lassoDetectionModel,
            lassoValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    lassoValidationSummary.setValidationTime(time);

    return lassoValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 35
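The LassoDistJob.validate call above consumes mongoRDD internally, so the snippet never touches the pair RDD directly. For orientation only, here is a hedged sketch of how such a job might inspect the (ObjectId, BSONObject) pairs; the "label" field name is hypothetical and the filter is purely illustrative, not the athena project's actual preprocessing.

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.bson.BSONObject;

public class MongoRDDUsageSketch {
    // Counts documents that carry a hypothetical "label" field.
    // The RDD is assumed to come from sc.newAPIHadoopRDD(mongodbConfig,
    // MongoInputFormat.class, Object.class, BSONObject.class) as in the examples.
    public static long countLabeled(JavaPairRDD<Object, BSONObject> mongoRDD) {
        JavaRDD<BSONObject> docs = mongoRDD.values();                // drop the ObjectId keys
        return docs.filter(doc -> doc.get("label") != null).count(); // keep labeled documents
    }
}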

Example 2: validateLinearRegressionAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public LinearRegressionValidationSummary validateLinearRegressionAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        LinearRegressionDetectionModel linearRegressionDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    LinearRegressionDetectionAlgorithm linearRegressionDetectionAlgorithm =
            (LinearRegressionDetectionAlgorithm) linearRegressionDetectionModel.getDetectionAlgorithm();

    LinearRegressionValidationSummary linearRegressionValidationSummary =
            new LinearRegressionValidationSummary();
    linearRegressionValidationSummary.setLinearRegressionDetectionAlgorithm(linearRegressionDetectionAlgorithm);

    LinearRegressionDistJob linearRegressionDistJob = new LinearRegressionDistJob();
    linearRegressionDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            linearRegressionDetectionModel,
            linearRegressionValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    linearRegressionValidationSummary.setValidationTime(time);

    return linearRegressionValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 38

Example 3: validateSVMAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public SVMValidationSummary validateSVMAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        SVMDetectionModel svmDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    SVMDetectionAlgorithm svmDetectionAlgorithm =
            (SVMDetectionAlgorithm) svmDetectionModel.getDetectionAlgorithm();

    SVMValidationSummary svmValidationSummary =
            new SVMValidationSummary(sc.sc(),
                    2, indexing, marking);

    SVMDistJob svmDistJob = new SVMDistJob();
    svmDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            svmDetectionModel,
            svmValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    svmValidationSummary.setTotalValidationTime(time);

    return svmValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 37

Example 4: validateGradientBoostedTreesAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public GradientBoostedTreesValidationSummary validateGradientBoostedTreesAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        GradientBoostedTreesDetectionModel gradientBoostedTreesDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    GradientBoostedTreesDetectionAlgorithm gradientBoostedTreesDetectionAlgorithm =
            (GradientBoostedTreesDetectionAlgorithm) gradientBoostedTreesDetectionModel.getDetectionAlgorithm();

    GradientBoostedTreesValidationSummary gradientBoostedTreesValidationSummary =
            new GradientBoostedTreesValidationSummary(sc.sc(),
                    gradientBoostedTreesDetectionAlgorithm.getNumClasses(), indexing, marking);

    GradientBoostedTreesDistJob gradientBoostedTreesDistJob = new GradientBoostedTreesDistJob();
    gradientBoostedTreesDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            gradientBoostedTreesDetectionModel,
            gradientBoostedTreesValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    gradientBoostedTreesValidationSummary.setTotalValidationTime(time);

    return gradientBoostedTreesValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 37

Example 5: validateRandomForestAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public RandomForestValidationSummary validateRandomForestAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        RandomForestDetectionModel randomForestDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    RandomForestDetectionAlgorithm randomForestDetectionAlgorithm =
            (RandomForestDetectionAlgorithm) randomForestDetectionModel.getDetectionAlgorithm();

    RandomForestValidationSummary randomForestValidationSummary =
            new RandomForestValidationSummary(sc.sc(), randomForestDetectionAlgorithm.getNumClasses(), indexing, marking);

    RandomForestDistJob randomForestDistJob = new RandomForestDistJob();
    randomForestDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            randomForestDetectionModel,
            randomForestValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    randomForestValidationSummary.setTotalValidationTime(time);

    return randomForestValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 35

Example 6: validateNaiveBayesAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public NaiveBayesValidationSummary validateNaiveBayesAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        NaiveBayesDetectionModel naiveBayesDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    NaiveBayesDetectionAlgorithm naiveBayesDetectionAlgorithm =
            (NaiveBayesDetectionAlgorithm) naiveBayesDetectionModel.getDetectionAlgorithm();

    NaiveBayesValidationSummary naiveBayesValidationSummary =
            new NaiveBayesValidationSummary(sc.sc(), naiveBayesDetectionAlgorithm.getNumClasses(), indexing, marking);

    NaiveBayesDistJob naiveBayesDistJob = new NaiveBayesDistJob();
    naiveBayesDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            naiveBayesDetectionModel,
            naiveBayesValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    naiveBayesValidationSummary.setTotalValidationTime(time);

    return naiveBayesValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 36

Example 7: validateDecisionTreeAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public DecisionTreeValidationSummary validateDecisionTreeAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DecisionTreeDetectionModel decisionTreeDetectionModel,
        Indexing indexing, Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    DecisionTreeDetectionAlgorithm decisionTreeDetectionAlgorithm =
            (DecisionTreeDetectionAlgorithm) decisionTreeDetectionModel.getDetectionAlgorithm();

    DecisionTreeValidationSummary decisionTreeValidationSummary =
            new DecisionTreeValidationSummary(sc.sc(), decisionTreeDetectionAlgorithm.getNumClasses(), indexing, marking);

    DecisionTreeDistJob decisionTreeDistJob = new DecisionTreeDistJob();
    decisionTreeDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            decisionTreeDetectionModel,
            decisionTreeValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    decisionTreeValidationSummary.setTotalValidationTime(time);

    return decisionTreeValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 35

Example 8: validateGaussianMixtureAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public GaussianMixtureValidationSummary validateGaussianMixtureAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        GaussianMixtureDetectionModel gaussianMixtureDetectionModel,
        Indexing indexing,
        Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    GaussianMixtureDetectionAlgorithm gaussianMixtureDetectionAlgorithm =
            (GaussianMixtureDetectionAlgorithm) gaussianMixtureDetectionModel.getDetectionAlgorithm();

    GaussianMixtureValidationSummary gaussianMixtureValidationSummary =
            new GaussianMixtureValidationSummary(sc.sc(), gaussianMixtureDetectionAlgorithm.getK(), indexing, marking);

    GaussianMixtureDistJob gaussianMixtureDistJob = new GaussianMixtureDistJob();
    gaussianMixtureDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            gaussianMixtureDetectionModel,
            gaussianMixtureValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    gaussianMixtureValidationSummary.setTotalValidationTime(time);

    return gaussianMixtureValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 36

Example 9: validateKMeansAthenaFeatures

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public KmeansValidationSummary validateKMeansAthenaFeatures(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        KMeansDetectionModel kMeansDetectionModel,
        Indexing indexing,
        Marking marking) {
    long start = System.nanoTime(); // start timing

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    KMeansDetectionAlgorithm kMeansDetectionAlgorithm =
            (KMeansDetectionAlgorithm) kMeansDetectionModel.getDetectionAlgorithm();

    KmeansValidationSummary kmeansValidationSummary =
            new KmeansValidationSummary(sc.sc(), kMeansDetectionAlgorithm.getK(), indexing, marking);

    KMeansDistJob kMeansDistJob = new KMeansDistJob();
    kMeansDistJob.validate(mongoRDD,
            athenaMLFeatureConfiguration,
            kMeansDetectionModel,
            kmeansValidationSummary);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    kmeansValidationSummary.setTotalValidationTime(time);

    return kmeansValidationSummary;
}

Developer ID: shlee89, Project: athena, Lines of code: 33

Example 10: generateGaussianMixtureAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public GaussianMixtureDetectionModel generateGaussianMixtureAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    GaussianMixtureModelSummary gaussianMixtureModelSummary = new GaussianMixtureModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    GaussianMixtureDetectionAlgorithm gaussianMixtureDetectionAlgorithm = (GaussianMixtureDetectionAlgorithm) detectionAlgorithm;

    GaussianMixtureDetectionModel gaussianMixtureDetectionModel = new GaussianMixtureDetectionModel();
    gaussianMixtureDetectionModel.setGaussianMixtureDetectionAlgorithm(gaussianMixtureDetectionAlgorithm);
    gaussianMixtureModelSummary.setGaussianMixtureDetectionAlgorithm(gaussianMixtureDetectionAlgorithm);

    gaussianMixtureDetectionModel.setFeatureConstraint(featureConstraint);
    gaussianMixtureDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    gaussianMixtureDetectionModel.setIndexing(indexing);
    gaussianMixtureDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    GaussianMixtureDistJob gaussianMixtureDistJob = new GaussianMixtureDistJob();
    GaussianMixtureModel gaussianMixtureModel = gaussianMixtureDistJob.generateGaussianMixtureWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, gaussianMixtureDetectionAlgorithm, gaussianMixtureModelSummary);
    gaussianMixtureDetectionModel.setkGaussianMixtureModel(gaussianMixtureModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    gaussianMixtureModelSummary.setTotalLearningTime(time);
    gaussianMixtureDetectionModel.setClusterModelSummary(gaussianMixtureModelSummary);
    gaussianMixtureModelSummary.setGaussianMixtureModel(gaussianMixtureModel);

    return gaussianMixtureDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 44

Example 11: generateKMeansAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public KMeansDetectionModel generateKMeansAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    KmeansModelSummary kmeansModelSummary = new KmeansModelSummary(sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    KMeansDetectionAlgorithm kMeansDetectionAlgorithm = (KMeansDetectionAlgorithm) detectionAlgorithm;

    KMeansDetectionModel kMeansDetectionModel = new KMeansDetectionModel();
    kMeansDetectionModel.setkMeansDetectionAlgorithm(kMeansDetectionAlgorithm);
    kmeansModelSummary.setkMeansDetectionAlgorithm(kMeansDetectionAlgorithm);

    kMeansDetectionModel.setFeatureConstraint(featureConstraint);
    kMeansDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    kMeansDetectionModel.setIndexing(indexing);
    kMeansDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    KMeansDistJob kMeansDistJob = new KMeansDistJob();
    KMeansModel kMeansModel = kMeansDistJob.generateKmeansWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, kMeansDetectionAlgorithm, kmeansModelSummary);
    kMeansDetectionModel.setkMeansModel(kMeansModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    kmeansModelSummary.setTotalLearningTime(time);
    kMeansDetectionModel.setClusterModelSummary(kmeansModelSummary);

    return kMeansDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 42

Example 12: generateLassoAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public LassoDetectionModel generateLassoAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    LassoModelSummary lassoModelSummary = new LassoModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    LassoDetectionAlgorithm lassoDetectionAlgorithm = (LassoDetectionAlgorithm) detectionAlgorithm;

    LassoDetectionModel lassoDetectionModel = new LassoDetectionModel();
    lassoDetectionModel.setLassoDetectionAlgorithm(lassoDetectionAlgorithm);
    lassoModelSummary.setLassoDetectionAlgorithm(lassoDetectionAlgorithm);

    lassoDetectionModel.setFeatureConstraint(featureConstraint);
    lassoDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    lassoDetectionModel.setIndexing(indexing);
    lassoDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    LassoDistJob lassoDistJob = new LassoDistJob();
    LassoModel lassoModel = lassoDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, lassoDetectionAlgorithm, marking, lassoModelSummary);
    lassoDetectionModel.setModel(lassoModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    lassoModelSummary.setTotalLearningTime(time);
    lassoDetectionModel.setClassificationModelSummary(lassoModelSummary);

    return lassoDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 13: generateRidgeRegressionAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public RidgeRegressionDetectionModel generateRidgeRegressionAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    RidgeRegressionModelSummary ridgeRegressionModelSummary = new RidgeRegressionModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    RidgeRegressionDetectionAlgorithm ridgeRegressionDetectionAlgorithm = (RidgeRegressionDetectionAlgorithm) detectionAlgorithm;

    RidgeRegressionDetectionModel ridgeRegressionDetectionModel = new RidgeRegressionDetectionModel();
    ridgeRegressionDetectionModel.setRidgeRegressionDetectionAlgorithm(ridgeRegressionDetectionAlgorithm);
    ridgeRegressionModelSummary.setRidgeRegressionDetectionAlgorithm(ridgeRegressionDetectionAlgorithm);

    ridgeRegressionDetectionModel.setFeatureConstraint(featureConstraint);
    ridgeRegressionDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    ridgeRegressionDetectionModel.setIndexing(indexing);
    ridgeRegressionDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    RidgeRegressionDistJob ridgeRegressionDistJob = new RidgeRegressionDistJob();
    RidgeRegressionModel ridgeRegressionModel = ridgeRegressionDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, ridgeRegressionDetectionAlgorithm, marking, ridgeRegressionModelSummary);
    ridgeRegressionDetectionModel.setModel(ridgeRegressionModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    ridgeRegressionModelSummary.setTotalLearningTime(time);
    ridgeRegressionDetectionModel.setClassificationModelSummary(ridgeRegressionModelSummary);

    return ridgeRegressionDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 14: generateLinearRegressionAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public LinearRegressionDetectionModel generateLinearRegressionAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    LinearRegressionModelSummary linearRegressionModelSummary = new LinearRegressionModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    LinearRegressionDetectionAlgorithm linearRegressionDetectionAlgorithm = (LinearRegressionDetectionAlgorithm) detectionAlgorithm;

    LinearRegressionDetectionModel linearRegressionDetectionModel = new LinearRegressionDetectionModel();
    linearRegressionDetectionModel.setLinearRegressionDetectionAlgorithm(linearRegressionDetectionAlgorithm);
    linearRegressionModelSummary.setLinearRegressionDetectionAlgorithm(linearRegressionDetectionAlgorithm);

    linearRegressionDetectionModel.setFeatureConstraint(featureConstraint);
    linearRegressionDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    linearRegressionDetectionModel.setIndexing(indexing);
    linearRegressionDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    LinearRegressionDistJob linearRegressionDistJob = new LinearRegressionDistJob();
    LinearRegressionModel linearRegressionModel = linearRegressionDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, linearRegressionDetectionAlgorithm, marking, linearRegressionModelSummary);
    linearRegressionDetectionModel.setModel(linearRegressionModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    linearRegressionModelSummary.setTotalLearningTime(time);
    linearRegressionDetectionModel.setClassificationModelSummary(linearRegressionModelSummary);

    return linearRegressionDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 15: generateLogisticRegressionAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public LogisticRegressionDetectionModel generateLogisticRegressionAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    LogisticRegressionModelSummary logisticRegressionModelSummary = new LogisticRegressionModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    LogisticRegressionDetectionAlgorithm logisticRegressionDetectionAlgorithm = (LogisticRegressionDetectionAlgorithm) detectionAlgorithm;

    LogisticRegressionDetectionModel logisticRegressionDetectionModel = new LogisticRegressionDetectionModel();
    logisticRegressionDetectionModel.setLogisticRegressionDetectionAlgorithm(logisticRegressionDetectionAlgorithm);
    logisticRegressionModelSummary.setLogisticRegressionDetectionAlgorithm(logisticRegressionDetectionAlgorithm);

    logisticRegressionDetectionModel.setFeatureConstraint(featureConstraint);
    logisticRegressionDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    logisticRegressionDetectionModel.setIndexing(indexing);
    logisticRegressionDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    LogisticRegressionDistJob logisticRegressionDistJob = new LogisticRegressionDistJob();
    LogisticRegressionModel logisticRegressionModel = logisticRegressionDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, logisticRegressionDetectionAlgorithm, marking, logisticRegressionModelSummary);
    logisticRegressionDetectionModel.setModel(logisticRegressionModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    logisticRegressionModelSummary.setTotalLearningTime(time);
    logisticRegressionDetectionModel.setClassificationModelSummary(logisticRegressionModelSummary);

    return logisticRegressionDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 16: generateSVMAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public SVMDetectionModel generateSVMAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    SVMModelSummary svmModelSummary = new SVMModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    SVMDetectionAlgorithm svmDetectionAlgorithm = (SVMDetectionAlgorithm) detectionAlgorithm;

    SVMDetectionModel svmDetectionModel = new SVMDetectionModel();
    svmDetectionModel.setSVMDetectionAlgorithm(svmDetectionAlgorithm);
    svmModelSummary.setSVMDetectionAlgorithm(svmDetectionAlgorithm);

    svmDetectionModel.setFeatureConstraint(featureConstraint);
    svmDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    svmDetectionModel.setIndexing(indexing);
    svmDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    SVMDistJob svmDistJob = new SVMDistJob();
    SVMModel svmModel = svmDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, svmDetectionAlgorithm, marking, svmModelSummary);
    svmDetectionModel.setSVMModel(svmModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    svmModelSummary.setTotalLearningTime(time);
    svmDetectionModel.setClassificationModelSummary(svmModelSummary);

    return svmDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 17: generateGradientBoostedTreesAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public GradientBoostedTreesDetectionModel generateGradientBoostedTreesAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    GradientBoostedTreesModelSummary gradientBoostedTreesModelSummary = new GradientBoostedTreesModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    GradientBoostedTreesDetectionAlgorithm gradientBoostedTreesDetectionAlgorithm = (GradientBoostedTreesDetectionAlgorithm) detectionAlgorithm;

    GradientBoostedTreesDetectionModel gradientBoostedTreesDetectionModel = new GradientBoostedTreesDetectionModel();
    gradientBoostedTreesDetectionModel.setGradientBoostedTreesDetectionAlgorithm(gradientBoostedTreesDetectionAlgorithm);
    gradientBoostedTreesModelSummary.setGradientBoostedTreesDetectionAlgorithm(gradientBoostedTreesDetectionAlgorithm);

    gradientBoostedTreesDetectionModel.setFeatureConstraint(featureConstraint);
    gradientBoostedTreesDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    gradientBoostedTreesDetectionModel.setIndexing(indexing);
    gradientBoostedTreesDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    GradientBoostedTreesDistJob gradientBoostedTreesDistJob = new GradientBoostedTreesDistJob();
    GradientBoostedTreesModel decisionTreeModel = gradientBoostedTreesDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, gradientBoostedTreesDetectionAlgorithm, marking, gradientBoostedTreesModelSummary);
    gradientBoostedTreesDetectionModel.setGradientBoostedTreestModel(decisionTreeModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    gradientBoostedTreesModelSummary.setTotalLearningTime(time);
    gradientBoostedTreesDetectionModel.setClassificationModelSummary(gradientBoostedTreesModelSummary);

    return gradientBoostedTreesDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 18: generateRandomForestAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public RandomForestDetectionModel generateRandomForestAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    RandomForestModelSummary randomForestModelSummary = new RandomForestModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    RandomForestDetectionAlgorithm randomForestDetectionAlgorithm = (RandomForestDetectionAlgorithm) detectionAlgorithm;

    RandomForestDetectionModel randomForestDetectionModel = new RandomForestDetectionModel();
    randomForestDetectionModel.setRandomForestDetectionAlgorithm(randomForestDetectionAlgorithm);
    randomForestModelSummary.setRandomForestDetectionAlgorithm(randomForestDetectionAlgorithm);

    randomForestDetectionModel.setFeatureConstraint(featureConstraint);
    randomForestDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    randomForestDetectionModel.setIndexing(indexing);
    randomForestDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    RandomForestDistJob randomForestDistJob = new RandomForestDistJob();
    RandomForestModel decisionTreeModel = randomForestDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, randomForestDetectionAlgorithm, marking, randomForestModelSummary);
    randomForestDetectionModel.setRandomForestModel(decisionTreeModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    randomForestModelSummary.setTotalLearningTime(time);
    randomForestDetectionModel.setClassificationModelSummary(randomForestModelSummary);

    return randomForestDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 19: generateNaiveBayesAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public NaiveBayesDetectionModel generateNaiveBayesAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    NaiveBayesModelSummary naiveBayesModelSummary = new NaiveBayesModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    NaiveBayesDetectionAlgorithm naiveBayesDetectionAlgorithm = (NaiveBayesDetectionAlgorithm) detectionAlgorithm;

    NaiveBayesDetectionModel naiveBayesDetectionModel = new NaiveBayesDetectionModel();
    naiveBayesDetectionModel.setNaiveBayesDetectionAlgorithm(naiveBayesDetectionAlgorithm);
    naiveBayesModelSummary.setNaiveBayesDetectionAlgorithm(naiveBayesDetectionAlgorithm);

    naiveBayesDetectionModel.setFeatureConstraint(featureConstraint);
    naiveBayesDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    naiveBayesDetectionModel.setIndexing(indexing);
    naiveBayesDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    NaiveBayesDistJob naiveBayesDistJob = new NaiveBayesDistJob();
    NaiveBayesModel naiveBayesModel = naiveBayesDistJob.generateModelWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, naiveBayesDetectionAlgorithm, marking, naiveBayesModelSummary);
    naiveBayesDetectionModel.setNaiveBayesModel(naiveBayesModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    naiveBayesModelSummary.setTotalLearningTime(time);
    naiveBayesDetectionModel.setClassificationModelSummary(naiveBayesModelSummary);

    return naiveBayesDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Example 20: generateDecisionTreeAthenaDetectionModel

Upvotes: 2

import org.apache.spark.api.java.JavaSparkContext; // import the package/class the method depends on

public DecisionTreeDetectionModel generateDecisionTreeAthenaDetectionModel(JavaSparkContext sc,
        FeatureConstraint featureConstraint,
        AthenaMLFeatureConfiguration athenaMLFeatureConfiguration,
        DetectionAlgorithm detectionAlgorithm,
        Indexing indexing,
        Marking marking) {
    DecisionTreeModelSummary decisionTreeModelSummary = new DecisionTreeModelSummary(
            sc.sc(), indexing, marking);
    long start = System.nanoTime(); // start timing

    DecisionTreeDetectionAlgorithm decisionTreeDetectionAlgorithm = (DecisionTreeDetectionAlgorithm) detectionAlgorithm;

    DecisionTreeDetectionModel decisionTreeDetectionModel = new DecisionTreeDetectionModel();
    decisionTreeDetectionModel.setDecisionTreeDetectionAlgorithm(decisionTreeDetectionAlgorithm);
    decisionTreeModelSummary.setDecisionTreeDetectionAlgorithm(decisionTreeDetectionAlgorithm);

    decisionTreeDetectionModel.setFeatureConstraint(featureConstraint);
    decisionTreeDetectionModel.setAthenaMLFeatureConfiguration(athenaMLFeatureConfiguration);
    decisionTreeDetectionModel.setIndexing(indexing);
    decisionTreeDetectionModel.setMarking(marking);

    JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
            mongodbConfig,          // Configuration
            MongoInputFormat.class, // InputFormat: read from a live cluster.
            Object.class,           // Key class
            BSONObject.class        // Value class
    );

    DecisionTreeDistJob decisionTreeDistJob = new DecisionTreeDistJob();
    DecisionTreeModel decisionTreeModel = decisionTreeDistJob.generateDecisionTreeWithPreprocessing(mongoRDD,
            athenaMLFeatureConfiguration, decisionTreeDetectionAlgorithm, marking, decisionTreeModelSummary);
    decisionTreeDetectionModel.setDecisionTreeModel(decisionTreeModel);

    long end = System.nanoTime(); // stop timing
    long time = end - start;
    decisionTreeModelSummary.setTotalLearningTime(time);
    decisionTreeDetectionModel.setClassificationModelSummary(decisionTreeModelSummary);

    return decisionTreeDetectionModel;
}

Developer ID: shlee89, Project: athena, Lines of code: 45

Note: The org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please follow the corresponding project's license when distributing or using this code, and do not reproduce this article without permission.
