Start PySpark:
[root@node1 ~]# pyspark
Python 2.7.5 (default, Nov  6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Python version 2.7.5 (default, Nov 6 2016 00:28:07)
SparkContext available as sc, HiveContext available as sqlContext.
The shell already provides both contexts, sc and sqlContext:
SparkContext available as sc, HiveContext available as sqlContext.
Run the script interactively:
>>> from __future__ import print_function
>>> import os
>>> import sys
>>> from pyspark import SparkContext
>>> from pyspark.sql import SQLContext
>>> from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType
# RDD is created from a list of rows
>>> some_rdd = sc.parallelize([Row(name="John", age=19), Row(name="Smith", age=23), Row(name="Sarah", age=18)])
# Infer schema from the first row, create a DataFrame and print the schema
>>> some_df = sqlContext.createDataFrame(some_rdd)
>>> some_df.printSchema()
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)
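The inference step above can be mimicked in plain Python to see what is happening (a sketch that needs no Spark cluster; the type-mapping table here is a simplification of what createDataFrame actually does, which also handles nested types and nulls):

```python
# Plain-Python analogue of schema inference from the first record.
# Illustration only; not PySpark's real inference code.
def infer_schema(rows):
    first = rows[0]
    type_names = {int: "long", str: "string", float: "double"}
    # Fields come out in sorted name order, matching printSchema() above
    return {name: type_names[type(value)]
            for name, value in sorted(first.items())}

people = [{"name": "John", "age": 19},
          {"name": "Smith", "age": 23},
          {"name": "Sarah", "age": 18}]
schema = infer_schema(people)
# → {'age': 'long', 'name': 'string'}
```

Only the first record is inspected, which is why a list whose later records have different field types would not be caught by this sketch.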
# Another RDD is created from a list of tuples
>>> another_rdd = sc.parallelize([("John", 19), ("Smith", 23), ("Sarah", 18)])
# Schema with two fields - person_name and person_age
>>> schema = StructType([StructField("person_name", StringType(), False), StructField("person_age", IntegerType(), False)])
# Create a DataFrame by applying the schema to the RDD and print the schema
>>> another_df = sqlContext.createDataFrame(another_rdd, schema)
>>> another_df.printSchema()
root
 |-- person_name: string (nullable = false)
 |-- person_age: integer (nullable = false)
Download the people.json file from GitHub:
and upload it to HDFS:
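people.json is a JSON-lines file: one JSON object per line rather than a single JSON array, and records may omit fields. A quick local check of the format (the three records shown here match the stock file in Spark's examples/src/main/resources):

```python
import json

# Each line is an independent JSON document (JSON-lines format);
# a missing field such as Michael's "age" becomes null in Spark.
raw = '''{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}'''

records = [json.loads(line) for line in raw.splitlines()]
# → [{'name': 'Michael'}, {'name': 'Andy', 'age': 30}, {'name': 'Justin', 'age': 19}]
```

A file containing one top-level JSON array would not parse this way; each line must stand alone.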
Continue with the script:
# A JSON dataset is pointed to by path.
# The path can be either a single text file or a directory storing text files.
>>> if len(sys.argv) < 2:
...     path = "/user/cf/people.json"
... else:
...     path = sys.argv[1]
...
# Create a DataFrame from the file(s) pointed to by path
>>> people = sqlContext.jsonFile(path)
[Stage 5:> (0 + 1) / 2]19/07/04 10:34:33 WARN spark.ExecutorAllocationManager: No stages are running, but numRunningTasks != 0
# The inferred schema can be visualized using the printSchema() method.
>>> people.printSchema()
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)
# Register this DataFrame as a table.
>>> people.registerAsTable("people")
/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark/python/pyspark/sql/dataframe.py:142: UserWarning: Use registerTempTable instead of registerAsTable.
  warnings.warn("Use registerTempTable instead of registerAsTable.")
# SQL statements can be run by using the sql methods provided by sqlContext
>>> teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
>>> for each in teenagers.collect():
...     print(each[0])
...
Justin
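The query keeps rows whose age is between 13 and 19. The same predicate in plain Python over the parsed records shows why only Justin is printed (a sketch using the stock people.json contents, not a Spark API):

```python
import json

lines = ['{"name":"Michael"}',
         '{"name":"Andy", "age":30}',
         '{"name":"Justin", "age":19}']
people = [json.loads(l) for l in lines]

# Mirror of: SELECT name FROM people WHERE age >= 13 AND age <= 19
# Records with no "age" field are excluded, just as a NULL comparison
# in SQL evaluates to unknown and drops the row.
teenagers = [p["name"] for p in people
             if p.get("age") is not None and 13 <= p["age"] <= 19]
# → ['Justin']
```

Michael is dropped for having no age at all, and Andy (30) falls outside the range.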
Finish the session:
>>> sc.stop()
>>>
Reference program (the full sql.py example shipped with Spark):
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from __future__ import print_function

import os
import sys

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType

if __name__ == "__main__":
    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    # RDD is created from a list of rows
    some_rdd = sc.parallelize([Row(name="John", age=19),
                               Row(name="Smith", age=23),
                               Row(name="Sarah", age=18)])

    # Infer schema from the first row, create a DataFrame and print the schema
    some_df = sqlContext.createDataFrame(some_rdd)
    some_df.printSchema()

    # Another RDD is created from a list of tuples
    another_rdd = sc.parallelize([("John", 19), ("Smith", 23), ("Sarah", 18)])

    # Schema with two fields - person_name and person_age
    schema = StructType([StructField("person_name", StringType(), False),
                         StructField("person_age", IntegerType(), False)])

    # Create a DataFrame by applying the schema to the RDD and print the schema
    another_df = sqlContext.createDataFrame(another_rdd, schema)
    another_df.printSchema()
    # root
    #  |-- age: integer (nullable = true)
    #  |-- name: string (nullable = true)

    # A JSON dataset is pointed to by path.
    # The path can be either a single text file or a directory storing text files.
    if len(sys.argv) < 2:
        path = "file://" + \
            os.path.join(os.environ['SPARK_HOME'], "examples/src/main/resources/people.json")
    else:
        path = sys.argv[1]

    # Create a DataFrame from the file(s) pointed to by path
    people = sqlContext.jsonFile(path)
    # root
    #  |-- person_name: string (nullable = false)
    #  |-- person_age: integer (nullable = false)

    # The inferred schema can be visualized using the printSchema() method.
    people.printSchema()
    # root
    #  |-- age: IntegerType
    #  |-- name: StringType

    # Register this DataFrame as a table.
    people.registerAsTable("people")

    # SQL statements can be run by using the sql methods provided by sqlContext
    teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

    for each in teenagers.collect():
        print(each[0])

    sc.stop()