2011-8-9 10:43:08

VLC media player is an open-source, cross-platform multimedia player that runs on many operating systems, including Linux and Windows. It began as a student project at École Centrale Paris and was released under the GPL in 2001. VLC can play audio and video in a wide range of formats, and it can even play the available portion of video files that have not finished downloading.
The VideoLAN project develops multimedia playback software. It originally consisted of two streaming programs, VideoLAN Client (VLC) and VideoLAN Server (VLS), but most of the VLS functionality was merged into VLC, so the program was renamed VLC media player. The project began as a student project at École Centrale Paris, and since its release under the GPL on February 1, 2001, contributors have spread around the world. The player's cone icon comes from a traffic cone [1]; the current icon is a high-resolution CGI rendering from 2006 [2], which replaced the earlier hand-drawn low-resolution icon [3]. VLC media player is cross-platform and runs on Linux, Microsoft Windows, Mac OS X, BeOS, BSD, Pocket PC, and Solaris. On Windows and Linux its interface is based on Qt4. On Windows, Linux, and some other platforms, VLC provides a Mozilla plugin, so QuickTime and Windows Media files embedded in web pages can display correctly in Mozilla-based browsers on non-Microsoft, non-Apple operating systems. Since version 0.8.2, VLC has also provided an ActiveX control, so users of Internet Explorer can likewise play embedded QuickTime and Windows Media files. VLC can also play the available portion of video files that have not finished downloading.

Class Overview

The MediaPlayer class can be used to control playback of audio/video files and streams. An example of how to use the methods in this class can be found in VideoView. Please see Audio and Video for additional help using MediaPlayer. Topics covered here are: State Diagram, Valid and Invalid States, and Permissions.

State Diagram

Playback control of audio/video files and streams is managed as a state machine. The following diagram shows the life cycle and the states of a MediaPlayer object driven by the supported playback control operations. The ovals represent the states a MediaPlayer object may reside in. The arcs represent the playback control operations that drive the object's state transitions. There are two types of arcs: arcs with a single arrowhead represent synchronous method calls, while those with a double arrowhead represent asynchronous method calls. From this state diagram, one can see that a MediaPlayer object has the following states: When a MediaPlayer object is just created using new, or after reset() is called, it is in the Idle state; after release() is called, it is in the End state. Between these two states lies the life cycle of the MediaPlayer object.
There is a subtle but important difference between a newly constructed MediaPlayer object and a MediaPlayer object after reset() has been called. In both cases it is a programming error to invoke methods such as getCurrentPosition(), getDuration(), getVideoHeight(), getVideoWidth(), setAudioStreamType(int), setLooping(boolean), setVolume(float, float), pause(), start(), stop(), seekTo(int), prepare(), or prepareAsync() in the Idle state. If any of these methods is called right after a MediaPlayer object is constructed, the user-supplied callback method OnErrorListener.onError() won't be called by the internal player engine and the object state remains unchanged; but if these methods are called right after reset(), the user-supplied callback method OnErrorListener.onError() will be invoked by the internal player engine and the object will be transferred to the Error state. It is also recommended that once a MediaPlayer object is no longer being used, release() be called immediately so that resources used by the internal player engine associated with the MediaPlayer object can be released immediately. These resources may include singleton resources such as hardware acceleration components, and failure to call release() may cause subsequent MediaPlayer instances to fall back to software implementations or fail altogether. Once the MediaPlayer object is in the End state, it can no longer be used and there is no way to bring it back to any other state. Furthermore, MediaPlayer objects created using new are in the Idle state, while those created with one of the overloaded convenience create methods are NOT in the Idle state. In fact, those objects are in the Prepared state if creation using a create method is successful. In general, some playback control operations may fail for various reasons, such as an unsupported audio/video format, poorly interleaved audio/video, a resolution that is too high, a streaming timeout, and the like.
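The two flavors of the Idle state can be illustrated with a small state-machine sketch. PlayerModel below is a hypothetical toy class, not android.media.MediaPlayer: it only models the rule that a premature playback call is silently ignored on a freshly constructed object but triggers the error path after reset().

```java
// Toy model of the Idle-state rule described above. PlayerModel is a
// hypothetical class, NOT the Android API: it only illustrates that a newly
// constructed player and one that has been reset() are both Idle but react
// differently to a premature playback call.
import java.util.function.BiConsumer;

public class PlayerModel {
    public enum State { IDLE, ERROR, END }

    private State state = State.IDLE;              // new PlayerModel() starts Idle
    private boolean wasReset = false;              // distinguishes the two Idle cases
    private BiConsumer<Integer, Integer> onError;  // stand-in for OnErrorListener

    public State getState() { return state; }
    public void setOnErrorListener(BiConsumer<Integer, Integer> l) { onError = l; }

    public void reset()   { state = State.IDLE; wasReset = true; }
    public void release() { state = State.END; }   // End state: object is done

    // Calling a playback method such as start() while Idle is a programming
    // error; after reset() the engine reports it and moves to the Error state.
    public void start() {
        if (state == State.IDLE) {
            if (wasReset) {
                if (onError != null) onError.accept(1, 0);  // engine invokes onError
                state = State.ERROR;
            }
            // freshly constructed: no callback, state unchanged
        }
    }

    public static void main(String[] args) {
        PlayerModel p = new PlayerModel();
        p.start();                        // silently ignored, still Idle
        System.out.println(p.getState());
        p.reset();
        p.start();                        // now triggers the error path
        System.out.println(p.getState());
    }
}
```

The model deliberately omits every other state; it exists only to make the "two kinds of Idle" distinction concrete.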
Thus, error reporting and recovery is an important concern under these circumstances. Sometimes, due to programming errors, a playback control operation may also be invoked in an invalid state. Under all these error conditions, the internal player engine invokes a user-supplied OnErrorListener.onError() method if an OnErrorListener has been registered beforehand via setOnErrorListener(android.media.MediaPlayer.OnErrorListener). It is important to note that once an error occurs, the MediaPlayer object enters the Error state (except as noted above), even if an error listener has not been registered by the application. In order to reuse a MediaPlayer object that is in the Error state and recover from the error, reset() can be called to restore the object to its Idle state. It is good programming practice to have your application register an OnErrorListener to look out for error notifications from the internal player engine. An IllegalStateException is thrown to prevent programming errors such as calling prepare(), prepareAsync(), or one of the overloaded setDataSource methods in an invalid state. Calling setDataSource(FileDescriptor), setDataSource(String), setDataSource(Context, Uri), or setDataSource(FileDescriptor, long, long) transfers a MediaPlayer object in the Idle state to the Initialized state. An IllegalStateException is thrown if setDataSource() is called in any other state. It is good programming practice to always look out for the IllegalArgumentException and IOException that may be thrown from the overloaded setDataSource methods. A MediaPlayer object must first enter the Prepared state before playback can be started. There are two ways (synchronous vs.
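The recovery path can likewise be sketched as a toy model. ErrorRecoveryModel is a hypothetical class used only to illustrate that reset() is the way out of the Error state, and that setDataSource() is legal only while Idle.

```java
// Toy sketch of error recovery: once an error occurs the player sits in the
// Error state, and reset() brings it back to Idle so the object can be
// reused. ErrorRecoveryModel is a hypothetical class, not the Android API.
public class ErrorRecoveryModel {
    public enum State { IDLE, INITIALIZED, ERROR }

    private State state = State.IDLE;
    public State getState() { return state; }

    public void setDataSource(String path) {
        // setDataSource() transfers Idle -> Initialized; any other state is invalid
        if (state != State.IDLE) throw new IllegalStateException("not in the Idle state");
        state = State.INITIALIZED;
    }

    public void failDuringPlayback() { state = State.ERROR; }  // simulate an engine error
    public void reset()              { state = State.IDLE;  }  // recover from Error

    public static void main(String[] args) {
        ErrorRecoveryModel m = new ErrorRecoveryModel();
        m.setDataSource("clip.mp4");     // hypothetical path, for illustration only
        m.failDuringPlayback();
        m.reset();                       // back to Idle ...
        m.setDataSource("clip.mp4");     // ... so the object can be reused
        System.out.println(m.getState());
    }
}
```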
asynchronous) that the Prepared state can be reached: either a call to prepare() (synchronous), which transfers the object to the Prepared state once the method call returns, or a call to prepareAsync() (asynchronous), which first transfers the object to the Preparing state after the call returns (which occurs almost right away) while the internal player engine continues working on the rest of the preparation work until the preparation work completes. When the preparation completes or when the prepare() call returns, the internal player engine then calls a user-supplied callback method, onPrepared() of the OnPreparedListener interface, if an OnPreparedListener is registered beforehand via setOnPreparedListener(android.media.MediaPlayer.OnPreparedListener). It is important to note that the Preparing state is a transient state, and the behavior of calling any method with side effects while a MediaPlayer object is in the Preparing state is undefined. An IllegalStateException is thrown if prepare() or prepareAsync() is called in any other state. While in the Prepared state, properties such as audio/sound volume, screenOnWhilePlaying, and looping can be adjusted by invoking the corresponding set methods. To start the playback, start() must be called. After start() returns successfully, the MediaPlayer object is in the Started state. isPlaying() can be called to test whether the MediaPlayer object is in the Started state. While in the Started state, the internal player engine calls a user-supplied OnBufferingUpdateListener.onBufferingUpdate() callback method if an OnBufferingUpdateListener has been registered beforehand via setOnBufferingUpdateListener(OnBufferingUpdateListener). This callback allows applications to keep track of the buffering status while streaming audio/video. Calling start() has no effect on a MediaPlayer object that is already in the Started state. Playback can be paused and stopped, and the current playback position can be adjusted.
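The two preparation paths can be modeled with a plain thread standing in for the internal player engine. PrepareModel is a hypothetical class, not the Android API: prepare() blocks and returns already Prepared, while prepareAsync() returns immediately in the transient Preparing state and fires an onPrepared-style callback from a worker thread.

```java
// Toy model of the synchronous vs. asynchronous preparation paths described
// above. PrepareModel is a hypothetical class; the worker thread stands in
// for the internal player engine finishing the preparation work.
import java.util.concurrent.CountDownLatch;

public class PrepareModel {
    public enum State { INITIALIZED, PREPARING, PREPARED }

    private volatile State state = State.INITIALIZED;
    public State getState() { return state; }

    public void prepare() {                           // synchronous path:
        state = State.PREPARED;                       // Prepared once the call returns
    }

    public void prepareAsync(Runnable onPrepared) {   // asynchronous path:
        state = State.PREPARING;                      // transient Preparing state
        new Thread(() -> {
            state = State.PREPARED;                   // preparation work completes
            onPrepared.run();                         // engine invokes the callback
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        PrepareModel m = new PrepareModel();
        CountDownLatch done = new CountDownLatch(1);
        m.prepareAsync(done::countDown);   // returns almost right away
        done.await();                      // wait for the onPrepared-style callback
        System.out.println(m.getState());
    }
}
```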
Playback can be paused via pause(). When the call to pause() returns, the MediaPlayer object enters the Paused state. Note that the transition from the Started state to the Paused state and vice versa happens asynchronously in the player engine. It may take some time before the state is updated in calls to isPlaying(), and it can be a number of seconds in the case of streamed content. Call start() to resume playback for a paused MediaPlayer object; the resumed playback position is the same as where it was paused. When the call to start() returns, the paused MediaPlayer object goes back to the Started state. Calling pause() has no effect on a MediaPlayer object that is already in the Paused state. Calling stop() stops playback and causes a MediaPlayer in the Started, Paused, Prepared, or PlaybackCompleted state to enter the Stopped state. Once in the Stopped state, playback cannot be started until prepare() or prepareAsync() is called to set the MediaPlayer object to the Prepared state again. Calling stop() has no effect on a MediaPlayer object that is already in the Stopped state. The playback position can be adjusted with a call to seekTo(int). Although the asynchronous seekTo(int) call returns right away, the actual seek operation may take a while to finish, especially for audio/video being streamed. When the actual seek operation completes, the internal player engine calls a user-supplied OnSeekComplete.onSeekComplete() if an OnSeekCompleteListener has been registered beforehand via setOnSeekCompleteListener(OnSeekCompleteListener). Please note that seekTo(int) can also be called in other states, such as the Prepared, Paused, and PlaybackCompleted states. Furthermore, the actual current playback position can be retrieved with a call to getCurrentPosition(), which is helpful for applications such as a music player that need to keep track of the playback progress. When the playback reaches the end of stream, the playback completes.
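The pause/stop rules above can be condensed into another toy state machine. PlaybackModel is a hypothetical class: stop() moves any active state to Stopped, and from Stopped a start() call is simply ignored here (the real player reports an error) until prepare() brings the object back to Prepared.

```java
// Toy model of the pause/stop transitions described above. PlaybackModel is
// a hypothetical class, not android.media.MediaPlayer: it only encodes that
// Stopped requires prepare() before playback can start again.
public class PlaybackModel {
    public enum State { PREPARED, STARTED, PAUSED, STOPPED }

    private State state = State.PREPARED;
    public State getState() { return state; }

    public void start() {
        // start() begins or resumes playback; it is a no-op while already
        // Started, and invalid from Stopped until prepare() is called again
        if (state == State.PREPARED || state == State.PAUSED) state = State.STARTED;
    }
    public void pause()   { if (state == State.STARTED) state = State.PAUSED; }
    public void stop()    { if (state != State.STOPPED) state = State.STOPPED; }
    public void prepare() { if (state == State.STOPPED) state = State.PREPARED; }

    public static void main(String[] args) {
        PlaybackModel m = new PlaybackModel();
        m.start(); m.pause(); m.start();   // pause, then resume where we left off
        m.stop();
        m.start();                         // ignored: Stopped needs prepare() first
        System.out.println(m.getState());
        m.prepare(); m.start();            // re-prepared, playback can start again
        System.out.println(m.getState());
    }
}
```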
If the looping mode was set to true with setLooping(boolean), the MediaPlayer object remains in the Started state. If the looping mode was set to false, the player engine calls a user-supplied callback method, OnCompletion.onCompletion(), if an OnCompletionListener was registered beforehand via setOnCompletionListener(OnCompletionListener). The invocation of the callback signals that the object is now in the PlaybackCompleted state. While in the PlaybackCompleted state, calling start() can restart the playback from the beginning of the audio/video source.

Valid and Invalid States
insert overwrite table case_data_sample select * from case_data_sample_tmp;

2025-06-18 16:37:06,500 INFO [main] conf.HiveConf: Using the default value passed in for log id: 531f6207-2ea7-471a-9eac-9ce1e6a79910
2025-06-18 16:37:06,500 INFO [main] session.SessionState: Updating thread name to 531f6207-2ea7-471a-9eac-9ce1e6a79910 main
2025-06-18 16:37:06,503 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Compiling command(queryId=root_20250618163706_29623b7c-a221-4e37-9b54-e05709d7f990): insert overwrite table case_data_sample select * from case_data_sample_tmp
2025-06-18 16:37:06,546 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:37:06,547 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Starting Semantic Analysis
2025-06-18 16:37:06,559 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
2025-06-18 16:37:06,560 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for source tables
2025-06-18 16:37:06,588 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for subqueries
2025-06-18 16:37:06,588 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for destination tables
2025-06-18 16:37:06,627 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
2025-06-18 16:37:08,746 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for source tables
2025-06-18 16:37:08,784 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for subqueries
2025-06-18 16:37:08,784 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for destination tables
2025-06-18 16:37:08,884 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] common.FileUtils: Creating directory if it doesn't exist: hdfs://master:8020/user/hive/warehouse/ad_traffic.db/case_data_sample/.hive-staging_hive_2025-06-18_16-37-06_538_2993870488593298816-1
2025-06-18 16:37:09,012 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Generate an operator pipeline to autogather column stats for table ad_traffic.case_data_sample in query insert overwrite table case_data_sample select * from case_data_sample_tmp
2025-06-18 16:37:09,069 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for source tables
2025-06-18 16:37:09,098 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for subqueries
2025-06-18 16:37:09,098 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Get metadata for destination tables
2025-06-18 16:37:09,155 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Context: New scratch dir is hdfs://master:8020/user/hive/tmp/root/531f6207-2ea7-471a-9eac-9ce1e6a79910/hive_2025-06-18_16-37-09_012_5684077500801740374-1
2025-06-18 16:37:09,221 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] common.FileUtils: Creating directory if it doesn't exist: hdfs://master:8020/user/hive/tmp/root/531f6207-2ea7-471a-9eac-9ce1e6a79910/hive_2025-06-18_16-37-09_012_5684077500801740374-1/-mr-10000/.hive-staging_hive_2025-06-18_16-37-09_012_5684077500801740374-1
2025-06-18 16:37:09,234 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: CBO Succeeded; optimized logical plan.
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for FS(2)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for FS(9)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for SEL(8)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for GBY(7)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for RS(6)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for GBY(5)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for SEL(4)
2025-06-18 16:37:09,329 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for SEL(1)
2025-06-18 16:37:09,330 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ppd.OpProcFactory: Processing for TS(0)
2025-06-18 16:37:09,385 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] optimizer.ColumnPrunerProcFactory: RS 6 oldColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2025-06-18 16:37:09,386 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] optimizer.ColumnPrunerProcFactory: RS 6 newColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2025-06-18 16:37:09,500 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SetSparkReducerParallelism: Number of reducers for sink RS[6] was already determined to be: 1
2025-06-18 16:37:09,646 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Examining input format to see if vectorization is enabled.
2025-06-18 16:37:09,655 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Vectorization is enabled for input format(s) [org.apache.hadoop.mapred.TextInputFormat]
2025-06-18 16:37:09,655 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Validating and vectorizing MapWork... (vectorizedVertexNum 0)
2025-06-18 16:37:09,706 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map vectorization enabled: true
2025-06-18 16:37:09,706 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map vectorized: false
2025-06-18 16:37:09,707 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2025-06-18 16:37:09,707 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map vectorizedVertexNum: 0
2025-06-18 16:37:09,707 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map enabledConditionsMet: [hive.vectorized.use.vector.serde.deserialize IS true]
2025-06-18 16:37:09,707 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Map inputFileFormatClassNameSet: [org.apache.hadoop.mapred.TextInputFormat]
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Validating and vectorizing ReduceWork... (vectorizedVertexNum 1)
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reduce vectorization enabled: true
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reduce vectorized: false
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reduce notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reduce vectorizedVertexNum: 1
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reducer hive.vectorized.execution.reduce.enabled: true
2025-06-18 16:37:09,753 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] physical.Vectorizer: Reducer engine: spark
2025-06-18 16:37:09,784 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] parse.CalcitePlanner: Completed plan generation
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Semantic Analysis Completed (retrial = false)
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: , FieldSchema(name:case_data_sample_tmp.timestamps, type:int, comment:null), FieldSchema(name:case_data_sample_tmp.camp, type:int, comment:null), FieldSchema(name:case_data_sample_tmp.creativeid, type:int, comment:null), FieldSchema(name:case_data_sample_tmp.mobile_os, type:int, comment:null), FieldSchema(name:case_data_sample_tmp.mobile_type, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.app_key_md5, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.app_name_md5, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.placementid, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.useragent, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.mediaid, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.os_type, type:string, comment:null), FieldSchema(name:case_data_sample_tmp.born_time, type:int, comment:null), FieldSchema(name:case_data_sample_tmp.label, type:int, comment:null)], properties:null)
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Completed compiling command(queryId=root_20250618163706_29623b7c-a221-4e37-9b54-e05709d7f990); Time taken: 3.282 seconds
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] reexec.ReExecDriver: Execution #1 of query
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:37:09,785 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Executing command(queryId=root_20250618163706_29623b7c-a221-4e37-9b54-e05709d7f990): insert overwrite table case_data_sample select * from case_data_sample_tmp
2025-06-18 16:37:09,786 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Query ID = root_20250618163706_29623b7c-a221-4e37-9b54-e05709d7f990
2025-06-18 16:37:09,786 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Total jobs = 1
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Launching Job 1 out of 1
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: In order to change the average load for a reducer (in bytes):
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: set hive.exec.reducers.bytes.per.reducer=<number>
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: In order to limit the maximum number of reducers:
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: set hive.exec.reducers.max=<number>
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: In order to set a constant number of reducers:
2025-06-18 16:37:09,813 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: set mapreduce.job.reduces=<number>
2025-06-18 16:37:09,834 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] session.SparkSessionManagerImpl: Setting up the session manager.
2025-06-18 16:37:10,327 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] session.SparkSession: Trying to open Spark session 4201eecc-977e-4e69-89ec-50403379b3d2
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 4201eecc-977e-4e69-89ec-50403379b3d2)'
2025-06-18 16:37:10,372 ERROR [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: .lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263)
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98)
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87)
    ... 24 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:359)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 28 more
2025-06-18 16:37:10,378 ERROR [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 4201eecc-977e-4e69-89ec-50403379b3d2)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session 4201eecc-977e-4e69-89ec-50403379b3d2
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
    at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
    at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263)
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98)
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87)
    ... 24 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:359)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 28 more
2025-06-18 16:37:10,391 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 4201eecc-977e-4e69-89ec-50403379b3d2
2025-06-18 16:37:10,391 ERROR [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 4201eecc-977e-4e69-89ec-50403379b3d2
2025-06-18 16:37:10,392 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Completed executing command(queryId=root_20250618163706_29623b7c-a221-4e37-9b54-e05709d7f990); Time taken: 0.607 seconds
2025-06-18 16:37:10,392 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-18 16:37:10,433 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] conf.HiveConf: Using the default value passed in for log id: 531f6207-2ea7-471a-9eac-9ce1e6a79910
2025-06-18 16:37:10,433 INFO [531f6207-2ea7-471a-9eac-9ce1e6a79910 main] session.SessionState: Resetting thread name to main