1. Unzip the Sqoop distribution.
2. Edit /sqoop_home/server/conf/catalina.properties: extend common.loader with the Hadoop lib jar directories.
3. Add tomcat-juli.jar to the classpath in server/startup.sh.
Sqoop server cannot be started
Possible reasons:
1. Port conflict: unset the CATALINA environment variables so Tomcat falls back to its default settings.
2. The jar paths set in catalina.properties may be overridden; check that the HADOOP_HOME environment variable is not interfering.
3. common.loader in catalina.properties is missing guava.jar.
4. Check server/webapps; a sqoop directory should have been created under it.
5. sqoop.properties still contains the @LOG_DIR@ placeholder ...
6. Make sure Hadoop is running.
7. To transfer data between Oracle and HDFS, you must add ojdbc6.jar to a Hadoop lib directory such as common/lib, and add that directory to common.loader in the Sqoop server's conf/catalina.properties.
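Points 2, 3, and 7 above all come down to the same common.loader edit in server/conf/catalina.properties. A sketch of the resulting line, assuming Hadoop lives under /opt/hadoop-2.2.0 (the install root is an assumption; adjust the paths, and note ojdbc6.jar is expected to have been copied into common/lib):

```properties
# Keep the stock Tomcat entries, then append the Hadoop jar directories.
# /opt/hadoop-2.2.0 is an assumed install root.
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,\
${catalina.home}/lib,${catalina.home}/lib/*.jar,\
/opt/hadoop-2.2.0/share/hadoop/common/*.jar,\
/opt/hadoop-2.2.0/share/hadoop/common/lib/*.jar,\
/opt/hadoop-2.2.0/share/hadoop/hdfs/*.jar,\
/opt/hadoop-2.2.0/share/hadoop/mapreduce/*.jar,\
/opt/hadoop-2.2.0/share/hadoop/yarn/*.jar
```

If the server still fails on a missing class, verify that guava.jar actually sits in one of the listed directories.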
Run Sqoop from the command line:
1. set server --host vm-9ac7-806d.apac.nsroot.net --port 12000 --webapp sqoop
2. create connection --cid 1
3. create job --xid 1 --type import (or simply: create job --xid 1)
4. start job --jid 1
more:
5. update job --jid 1
6. delete job --jid 1
vi tip: Shift+G jumps to the last line.
Change directory ownership in HDFS:
bin/hdfs dfs -chown -R zg67978 /SqoopTable/
Kill a Hadoop job:
bin/hadoop job -kill job_1394627419025_0073
java.lang.VerifyError: class org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
solution:
This error is usually caused by a protobuf.jar version conflict: Hadoop uses protobuf 2.5.0 while Hive uses 2.4.1.
question: hive
java.net.SocketTimeoutException: Read timed out
solution:
Edit hive-site.xml and set hive.metastore.client.socket.timeout to a larger value.
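For reference, a minimal hive-site.xml fragment; the 600-second value is only an example, pick whatever suits your metastore load:

```xml
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <!-- seconds; raise this if metastore calls keep timing out -->
  <value>600</value>
</property>
```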
org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0012:The type is not supported - java.math.BigDecimal
solution:
Remove the throttling setting by commenting it out, as below:
//frameworkForm.getIntegerInput("throttling.loaders").setValue(1);
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
solution:
Add the following jars to the classpath: hadoop-mapreduce-client-jobclient-2.2.0.jar, hadoop-yarn-api-2.2.0.jar, hadoop-yarn-client-2.2.0.jar, hadoop-yarn-common-2.2.0.jar, and hadoop-mapreduce-client-common-2.2.0.jar.
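One way to get those jars picked up is to extend HADOOP_CLASSPATH before launching the job; the /opt/hadoop-2.2.0 root below is an assumption, substitute your own layout:

```shell
# Hypothetical Hadoop install root; adjust to your environment.
HADOOP_JARS=/opt/hadoop-2.2.0/share/hadoop
# Append the MapReduce/YARN client jars the error complains about.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}\
$HADOOP_JARS/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:\
$HADOOP_JARS/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:\
$HADOOP_JARS/yarn/hadoop-yarn-api-2.2.0.jar:\
$HADOOP_JARS/yarn/hadoop-yarn-client-2.2.0.jar:\
$HADOOP_JARS/yarn/hadoop-yarn-common-2.2.0.jar"
echo "$HADOOP_CLASSPATH"
```

The hadoop launcher script reads HADOOP_CLASSPATH and appends it to the JVM classpath, so no wrapper-script edits are needed.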
ERROR org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException
solution:
The Linux user running the job is not in the required group; add the user to the appropriate group.
Using DistributedCache fails with a file-not-found error
solution:
You have to use DistributedCache.getLocalCacheFiles, not getCacheFiles.