SAP BI: Extracting SAP Data with SSIS

Posted on 2012-11-22 by Vincent Viger

Microsoft Connector 1.0 for SAP BI is delivered in the Microsoft SQL Server 2008 Feature Pack. It enables data to be extracted from, and loaded into, SAP NetWeaver BI in both Full and Delta modes via standard interfaces, from within the Microsoft SQL Server Integration Services environment. The SAP datasets supported by the connector include SAP BI InfoProviders such as InfoCubes, DataStore Objects (DSOs), and InfoObjects.

The Microsoft Connector 1.0 for SAP BI has three main components:

  • SAP BI Source, to extract data from SAP BI
  • SAP BI Destination, to load data into SAP BI
  • SAP BI Connection Manager, to manage the RFC connection between the Integration Services package and SAP BI

Microsoft Connector 1.0 for SAP BI is an add-in for SQL Server Integration Services. It provides an efficient and streamlined solution for integrating non-SAP data sources with SAP BI. It also enables the construction of data warehouse solutions for SAP data in SQL Server 2008, where SAP BI is exposed as a data source to SQL Server.

 

Microsoft Connector 1.0 for SAP BI has the following requirements:

  • Windows Server 2003 or later, Windows Vista, or Microsoft Windows XP Professional with Service Pack 2.
  • SQL Server 2008 Integration Services. Microsoft Connector 1.0 for SAP BI needs to be installed on the same computer where Integration Services is installed.
  • Windows Installer 4.5 or later.
  • Extracting data from an SAP BI system by using Microsoft Connector 1.0 for SAP BI requires the SAP Open Hub license. For more information about SAP licensing, consult your SAP representative.
  • On the SAP BI system, SAP_BW component support package level 16 (as part of SAP NetWeaver Support Pack Stack 14) is required. SAP_BW component support package level 17 or higher is strongly recommended.
  • To use Microsoft Connector 1.0 for SAP BI in 32-bit (64-bit) mode on a 32-bit (64-bit) operating system, the 32-bit (64-bit) version of librfc32.dll needs to be copied to the following location: %windir%\system32.
  • To use Microsoft Connector 1.0 for SAP BI in 32-bit mode on a 64-bit operating system, the 32-bit librfc32.dll needs to be copied to the following location: %windir%\SysWow64. (A helper sketch for these placement rules follows this list.)
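
The bitness rules above are easy to get wrong. Below is a minimal helper sketch in Python that encodes them; the source path of librfc32.dll is a placeholder, and the DLL itself must be obtained from SAP.

```python
import os
import platform
import shutil

# Hypothetical location of SAP's librfc32.dll (obtain the DLL from SAP).
src = r"C:\SAP\rfcsdk\lib\librfc32.dll"
dll_is_32bit = True  # assumption: deploying the 32-bit DLL

windir = os.environ["WINDIR"]
os_is_64bit = platform.machine().endswith("64")

# A 32-bit DLL on a 64-bit OS belongs in SysWOW64; in every other supported
# combination the DLL bitness matches the OS bitness and goes to System32.
if dll_is_32bit and os_is_64bit:
    dest = os.path.join(windir, "SysWOW64")
else:
    dest = os.path.join(windir, "System32")

# Run from an elevated prompt; writing under %windir% requires admin rights.
shutil.copy(src, dest)
print(f"Copied {src} -> {dest}")
```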

Notes

  • Microsoft Connector 1.0 for SAP BI can only be used with SQL Server 2008 Integration Services. However, you can load data from, or extract data to, SQL Server 2008, SQL Server 2005, or SQL Server 2000 databases.
  • Librfc32.dll is a component owned by SAP. Microsoft does not support this SAP component and assumes no liability for its use.
  • Microsoft Connector 1.0 for SAP BI does not support SAP BW 3.5 and earlier versions.
  • Extracting data from an SAP BI system by using Microsoft Connector 1.0 for SAP BI only supports Open Hub Destinations. It does not support InfoSpokes, because InfoSpokes are obsolete in SAP NetWeaver BI.

With Microsoft Connector 1.0 for SAP BI, it is now possible to use components of the SQL Server platform to move data in and out of SAP BI.

 

Figure 1: Overview of the solution architecture

 

This scenario uses an Integration Services package that leverages the “SAP BI Source” component. It treats SAP BI as a data source for a SQL Server database. Behind the scenes, SAP’s Open Hub Services interface is used to fetch data from SAP BI InfoProviders.

 

To configure SAP BI to extract data into a non-SAP destination such as SQL Server, you need to follow these steps:

  1. Set up the RFC Destination.
  2. Configure and create the Open Hub Destination.
  3. Create the Data Transfer Process (DTP) and transformation.
  4. Define parallel processing.
  5. Define the size of the data package.
  6. Configure the process chain.
 

In transaction code SM59 on SAP BI, create a new RFC destination with connection type T (TCP/IP Connection), as shown in Figure 2. Under Activation Type, select “Registered Server Program”. Then fill in an appropriate Program ID, which can be any descriptive short text. The RFC Destination and Program ID will be used later to set up the connection manager in Integration Services.

Figure 2: Configuring the RFC Destination in SAP BI

 

There are two Open Hub implementation options in SAP BI: the legacy InfoSpoke, and the new Open Hub Destination via Data Transfer Process (DTP). The InfoSpoke is marked as obsolete in SAP NetWeaver BI. Therefore the Microsoft Connector 1.0 for SAP BI officially supports only the Open Hub Destination.

 

In Admin Workbench on SAP BI (transaction code RSA1), create a new Open Hub Destination with Destination Type “Third-Party Tool”, and specify the previously created RFC Destination name (Figure 3). Save and activate the new destination.

Figure 3: Creating the Open Hub Destination in SAP BI

 

Create a Data Transfer Process under the Open Hub Destination. Specify Full or Delta for Extraction Mode. Activate the DTP. Check and activate the Transformation.

Figure 4: Creating the Data Transfer Process in SAP BI

 

By default, SAP BI sets the number of parallel DTP processes to a value greater than 1 for performance reasons. This is configurable through SAP transaction code RSBATCH (SAP BI Background Management).

Figure 5: Configuring Parallel Processing in SAP BI

Keeping the number of parallel processes at a reasonable value makes sense for the overall DTP process type DTP_LOAD, but this parallelism can lead to a timeout error during the Open Hub DTP extraction through the Microsoft Connector for SAP BI. To work around this issue, set the number of processes for the specific Open Hub DTP to 1 by following the steps below:

1. In the Open Hub DTP screen, select “Goto” from the menu, then “Settings for Batch Manager”:

Figure 6: Opening Batch Manager in SAP BI to configure the number of parallel processes

2. Change the Number of Processes to “1”.

Figure 7: Configuring the number of parallel processes in SAP BI

3. Save the changed settings.

 

The default DTP data package size is 50,000 rows. Depending on the actual hardware infrastructure, adjusting the package size may improve the extract, transform, and load (ETL) performance. Note that the Microsoft SAP BI source reads the DTP package size to determine the actual data packet size in the Integration Services package. It is highly recommended to agree on a size that balances the concerns of the SAP Basis team and the SQL Server DBA; in practice, a value between 50,000 and 200,000 rows satisfies most needs (see the sketch after Figure 8).

Figure 8: Defining the data package size in SAP BI
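
To make the tradeoff concrete, here is a quick back-of-the-envelope sketch; the 1.2 million-row load is an assumed figure for illustration only.

```python
import math

rows_to_extract = 1_200_000  # assumed size of one load

for package_size in (50_000, 100_000, 200_000):
    packets = math.ceil(rows_to_extract / package_size)
    print(f"package size {package_size:>7,} -> {packets:>2} packets per request")
```

Fewer, larger packets mean fewer round trips over the RFC connection, at the cost of more memory held per packet on both the SAP and SQL Server sides.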

 

A process chain is required to work with the Microsoft Connector (Figure 9).

Figure 9: The two nodes that are the minimum requirement for a process chain in SAP BI

The process chain must contain at least these two nodes:

  • Start node with the scheduling option “Start Using Meta Chain or API” (Figure 10)
  • Data Transfer Process node

Figure 10: Configuring scheduling options for a process chain in SAP BI

After you activate the process chain, it is ready to be called from the Integration Services package.
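
In mode “P” (described later), the connector triggers this chain itself. For testing outside Integration Services, the same activated chain can be started through SAP's process-chain API. A hedged sketch, assuming the open-source PyRFC library, placeholder logon parameters, and a hypothetical chain named ZOH_CHAIN:

```python
from pyrfc import Connection

# Placeholder logon parameters for the SAP BI system.
conn = Connection(ashost="sapbi.example.com", sysnr="00",
                  client="100", user="RFC_USER", passwd="secret")

# Start the (hypothetical) Open Hub process chain and print its log ID.
result = conn.call("RSPC_API_CHAIN_START", I_CHAIN="ZOH_CHAIN")
print("Chain started, log id:", result.get("E_LOGID"))
```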

 

Configuring the Integration Services package in Business Intelligence Development Studio involves three main steps:

  1. Add the “SAP BI Source” as a source in the data flow.
  2. Set up the connection manager for SAP BI.
  3. Define the workflow of the package.
 

After you install the Microsoft Connector from the SQL Server 2008 Feature Pack, go to Business Intelligence Development Studio and create a new Integration Services project. The Microsoft Connector components are not available in the Toolbox until you add them manually. To add them, right-click Data Flow Sources in the Toolbox, click Choose Items, and then on the SSIS Data Flow Items tab, select the SAP BI Source check box, as shown in Figure 11.

Figure 11: Adding the SAP BI source to the Toolbox in Business Intelligence Development Studio

Now SAP BI Source is available in Data Flow Sources (Figure 12).

Figure 12: The list of Data Flow Sources in the Toolbox in Business Intelligence Development Studio after adding the SAP BI source

Setting Up the Connection Manager for SAP BI

In the Integration Services package, add a new connection and choose SAPBI (Figure 13).

Figure 13: Adding a new SAP BI connection to an Integration Services package

After the connection is created, in the SAP BI connection manager dialog box, edit the connection and fill out the system and logon information. Click Test Connection to verify successful configuration (Figure 14).

Figure 14: Configuring the SAP BI connection manager

 

Adding and Configuring the SAP BI Source

In Business Intelligence Development Studio, drag the SAP BI source to the data flow of the package (Figure 15).

Figure 15: The representation of the SAP BI source in the data flow of a package

Edit the source by choosing the appropriate SAP BI connection manager, specifying the RFC destination, and choosing the previously-created process chain (Figure 16).

Figure 16: Configuring the SAP BI source on the Connection Manager page of the SAP BI Source Editor

Note the different execution modes that are available:

  • P – Trigger Process Chain: The specified process chain is started; when the extraction finishes, the data is retrieved in packets.
  • W – Wait for Notify: No process chain is started; instead, the source waits until it is notified that the extraction is complete. Something else is responsible for starting the extraction (for example, SAP’s own scheduler).
  • E – Extract Only: No process chain is started, and the source does not wait for notification. Instead, the value entered in the Request ID field is used to retrieve the data behind that request.

If the Integration Services package will initiate the ETL process from SAP BI, then the mode “P” should be chosen to trigger the SAP BI process chain for data movement through Open Hub. This is the most suitable option for a “pull” pattern.

Mode “W” is best suited to a “push” pattern. In this mode, SAP BI schedules its own internal ETL, and then starts the Open Hub DTP to push data to SQL Server.

The mode “E” is used when there is an error during the ETL and a particular request needs to be reprocessed. This is mostly useful during testing, or in production during a data recovery process.

Note that the Extract-Only mode will fail if there are multiple packages within one request. This failure occurs because the SAP BI system does not provide the number of packets correctly when the Read function of the Open Hub API is called. To work around this limitation and support Extract-Only mode, increase the package size in the DTP of the Open Hub Destination to a value greater than the number of rows that will be extracted. As a result, only one package is created.
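
A sketch of the workaround arithmetic follows; the expected row count and safety margin are assumptions for illustration.

```python
# Pick a DTP package size that guarantees a single package in Extract-Only
# ("E") mode: larger than the rows to be extracted, rounded up to a
# multiple of 50,000 with some headroom for data growth.
expected_rows = 750_000   # assumed maximum rows in one request
headroom = 1.2            # 20% safety margin (assumption)

needed = int(expected_rows * headroom)
package_size = ((needed + 49_999) // 50_000) * 50_000
print(f"Set the DTP package size to {package_size:,} rows or more")
```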

Configuring the Advanced Settings

There are three main options available on the Advanced page of the SAP BI Source Editor:

  • String conversion options
  • Timeout setting
  • Request ID reset

Figure 17: Configuring advanced options for the SAP BI source on the Advanced page of the SAP BI Source Editor

Of these, the Timeout and Request ID settings deserve particular attention.

Timeout specifies how long the package waits for the SAP BI source before it fails with a timeout error. If an Open Hub DTP is expected to run for a long time, as in a full initial extraction, increase the timeout enough to avoid the error. For routine delta loads, where the duration is much shorter, enter a realistic value; anything between 300 and 3,600 seconds should be acceptable under normal delta circumstances.

Request ID can be used to reset a DTP that encountered a problem. If a DTP load is stuck in Yellow status in SAP BI, the request can be reset to Green. After a request is successfully reset, it can be deleted in SAP BI in the Admin Workbench Monitor. For more information about DTP request status, check the SAP system table RSBKREQUEST on SAP BI, and look at the columns USTATE (User-Defined Processing Status for a DTP Request) and TSTATE (Technical Processing Status for a DTP Request). The overall DTP status is successful when both USTATE and TSTATE of a DTP request indicate success (value “2”). Figure 18 shows all available values of USTATE and TSTATE, and a scripted status check follows the figure.

Figure 18: The available values for the status of a DTP request in SAP BI
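
For scripted monitoring, the same status columns can be read over RFC. A minimal sketch, assuming the open-source PyRFC library, the generic RFC_READ_TABLE function, placeholder logon parameters, and a hypothetical request ID:

```python
from pyrfc import Connection

# Placeholder logon parameters for the SAP BI system.
conn = Connection(ashost="sapbi.example.com", sysnr="00",
                  client="100", user="RFC_USER", passwd="secret")

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="RSBKREQUEST",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "REQUID"},
            {"FIELDNAME": "USTATE"},
            {"FIELDNAME": "TSTATE"}],
    OPTIONS=[{"TEXT": "REQUID = '123456'"}],  # hypothetical request ID
)

# Both USTATE and TSTATE must be "2" for the DTP request to be successful.
for row in result["DATA"]:
    requid, ustate, tstate = row["WA"].split("|")
    ok = ustate.strip() == "2" and tstate.strip() == "2"
    print(requid.strip(),
          "successful" if ok else f"USTATE={ustate} TSTATE={tstate}")
```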

Adding and Configuring the Destination

After you set up the SAP BI source, define the destination in the package. An OLE DB destination is commonly used for this purpose. Based upon the metadata from the SAP BI source, the system may propose a table creation script if the target table is not available in the database. After the column mapping is done, the Integration Services package is ready to run (Figure 19).

Figure 19: A data flow for extracting from an SAP BI source to a non-SAP destination

 

Figure 20: Overview of the solution architecture

 

Sometimes non-SAP data needs to be moved into SAP BI, but some data sources can be challenging to load. This challenge can be solved by using the SAP BI Destination component in Integration Services. Because Integration Services is versatile in supporting various types of data sources, such as XML and flat files, it is now possible to have a unified ETL platform for moving data into SAP BI. This versatility can be particularly useful in a heterogeneous environment for ad-hoc reporting or for data analysis and processing purposes. The SAP BI destination component greatly expands SAP BI’s ability to acquire data from non-SAP environments.

 

To configure SAP BI to load non-SAP data, you set up the data source and the ETL.

 

A new “External System” source system needs to be set up in SAP BI so that it can communicate with the SAP BI Destination component in Integration Services. This is done in the Admin Workbench (transaction code RSA1) by selecting “Source Systems” in the left panel, which leads to the RFC Destination setup screen.

Figure 21: Configuring a source system on the RFC Destination screen in SAP BI

 

The InfoSource and InfoPackage can either be set up within SAP BI’s Admin Workbench, or in Integration Services from within the SAP BI Destination Editor dialog box.

Figure 22: Creating SAP BI objects directly from the SAP BI Destination Editor dialog box

Note that objects created from the SAP BI Destination Editor dialog box are placed under the “Unassigned Nodes” application area in the SAP BI Admin Workbench. If you prefer a dedicated application area, consider creating the objects in the Admin Workbench instead.

 

Configuring the Integration Services package in Business Intelligence Development Studio involves three main steps:

  1. Add the “SAP BI Destination” as a destination in the data flow.
  2. Set up the connection manager for SAP BI.
  3. Define the workflow of the package.
 

Figure 23: Adding the SAP BI destination to the Toolbox in Business Intelligence Development Studio

 

Create a new connection manager for SAP BI first, following the same steps used in the extraction scenario. For more information, see “Setting Up the Connection Manager for SAP BI” earlier in this paper.

 

After the InfoPackage and InfoSource are available, add the SAP BI destination to the data flow of the package. Then configure the destination in the SAP BI Destination Editor dialog box.

Figure 24: Configuring the SAP BI destination on the Connection Manager page of the SAP BI Destination Editor dialog box

The data flow of the package now looks like this.

Figure 25: A data flow for loading from a non-SAP source to an SAP BI destination

 

A compelling use case is to leverage Microsoft Connector 1.0 for SAP BI to move the multidimensional data in SAP BI’s InfoCubes to SQL Server Analysis Services cubes, with all the dimensional structures and content intact. The main objective is to migrate SAP BI InfoCubes to SQL Server cubes efficiently, in order to construct an Analysis Services based enterprise data warehouse. This use case demonstrates that this objective can be achieved with stability, quality, and performance, and with a relatively small amount of effort.

 

When SAP BI Open Hub processes InfoCube data, it flattens the multidimensional structure into a relational structure. So the design idea is to mirror the same flat structure first in a staging table, then reconstruct the dimensions in the Analysis Services cube.

Figure 26: Overview of the solution architecture

 

The standard SAP InfoCube 0FIAP_C03 is used. Its dimensions and fact table metadata are shown in Figure 27:


Figure 27: Metadata for the dimensions and fact tables in standard SAP BI InfoCube 0FIAP_C03

The flattened Open Hub structure is shown in Figure 28.


Figure 28: The flattened Open Hub structure in SAP BI
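
The staging table simply mirrors that flattened structure. A minimal sketch, assuming pyodbc, a local SQL Server database named SAPStaging, and illustrative column names only (the real columns are those of the Open Hub structure in Figure 28):

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SAPStaging;Trusted_Connection=yes;"
)

# Create the staging table if it does not already exist.
ddl = """
IF OBJECT_ID(N'dbo.Stg_0FIAP_C03', N'U') IS NULL
CREATE TABLE dbo.Stg_0FIAP_C03 (
    COMP_CODE NVARCHAR(4),     -- company code (assumed column)
    VENDOR    NVARCHAR(10),    -- vendor number (assumed column)
    FISCPER   NVARCHAR(7),     -- fiscal year/period (assumed column)
    CURRENCY  NVARCHAR(5),     -- currency key (assumed column)
    DEBIT     DECIMAL(17, 2),  -- key figure (assumed column)
    CREDIT    DECIMAL(17, 2)   -- key figure (assumed column)
);
"""
conn.execute(ddl)
conn.commit()
```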

The SAP BI process chain and Integration Services package are shown in Figure 29.


Figure 29: The configuration of the process chain in SAP BI, and of the data flow in the SQL Server Integration Services package

The column mappings in the Integration Services package are shown in Figure 30.

Figure 30: The column mappings between the SAP BI source and the destination on the Mappings page of the OLE DB Destination Editor dialog box

The matching structure of the data in SQL Server Analysis Services is shown in Figure 31.


Figure 31: The structure of the SQL Server Analysis Services cube based on the data extracted from SAP BI to SQL Server

After the Analysis Services cube is set up, it needs to be deployed. Then each dimension, and finally the cube itself, can be processed to move the data from the staging table into the corresponding dimensions and measures.

 

An easy way to validate the data quality after the cube migration is to run and compare reports on SAP BI and Analysis Services.

Here is the result of an SAP BI BEx query against the SAP BI InfoCube.

Figure 32: Viewing the data in the InfoCube in SAP BI

Here is a Microsoft Excel® PivotTable® report against the Analysis Services cube.

Figure 33: Viewing the data from the SQL Server Analysis Services cube in an Excel PivotTable report

Here is a SQL Server Reporting Services report against the Analysis Services cube.

Figure 34: Viewing the data from the Analysis Services cube in a Reporting Services report

The query results on SAP BI and in the Analysis Services cube match precisely.
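
The report-level comparison can be complemented with a scripted spot check. A hedged sketch, assuming pyodbc, the staging table from the earlier sketch, and a reference total read manually off the BEx report:

```python
import pyodbc

BEX_TOTAL = 1_234_567.89  # placeholder: total taken from the BEx report

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SAPStaging;Trusted_Connection=yes;"
)
row = conn.execute(
    "SELECT COUNT(*), COALESCE(SUM(DEBIT), 0) FROM dbo.Stg_0FIAP_C03"
).fetchone()

print(f"rows={row[0]:,}  debit total={row[1]:,.2f}")
print("match" if abs(float(row[1]) - BEX_TOTAL) < 0.01 else "MISMATCH")
```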

 

This paper has described the functionality of the Microsoft Connector 1.0 for SAP BI and provided detailed step-by-step instructions on how to use the connector in SQL Server Integration Services. A realistic use case was presented, along with its design highlights and rationale. Overall, the connector bridges the gap to support building an enterprise data warehouse solution centered on Microsoft SQL Server 2008 in a heterogeneous environment with a heavy SAP BI presence. It offers great flexibility and efficiency for extracting non-SAP data into SAP BI, and for extracting SAP BI data into a SQL Server data warehouse.

By utilizing the Microsoft Connector 1.0 for SAP BI effectively, it is now possible to construct a streamlined end-to-end data warehouse and business intelligence solution based upon Microsoft technologies for enterprises running SAP, with lower TCO, better design, and more flexibility.
