How to optimize Web Intelligence query speed?

I have never written up a BO optimization plan, and there is very little related material on Baidu or Google; today I happened upon this piece and am saving it here.
Why is my report so slow?

Submitted by kmahler, with additional comments by Andreas (July 2009).

What makes up the total time a report takes, from hitting the Refresh button to being displayed:
1. Query execution time (database side)
2. Transferring results across the network to the client/web browser (network bandwidth & latency)
3. Computing local report variables, formulas, local report filters, alerters, etc. within the report (BOE server/ DesktopIntelligence workstation)
4. Rendering the page (page layout, page margins, page numbers, etc.)

---------------------------------------------
1. Optimizing Query Performance (tuning universes and databases):

There are a number of things you can do to speed up your query. The first thing I would do is review your universe: minimize unnecessary joins, use aggregate awareness, and use shortcut joins where you can. Creating appropriate indexes will also definitely speed up query performance. Make sure all of your measure objects use SQL aggregate functions.
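For instance, an index on a frequently filtered/joined column and a measure whose SELECT uses a SQL aggregate might look like the sketch below; the table, column, and object names are made up for illustration:

    -- Index supporting a column that is often joined/filtered on (hypothetical names)
    CREATE INDEX sales_fact_cust_idx ON sales_fact (customer_id);

    -- Universe measure "Revenue": its SELECT uses a SQL aggregate, so the database
    -- sums the rows instead of shipping every detail row to the report engine
    sum(sales_fact.amount)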

Your Oracle database may be using either cost-based or rule-based optimization, and you need to find out which your system is using. If it is using cost-based optimization (the more recent offering), you must make sure the statistics are up to date: under cost-based optimization Oracle uses statistics such as row counts to determine the fastest path to resolve queries. Once you are happy that the statistics are up to date, you can use hints to alter the approach the database takes.
If your system is using rule-based optimization (old technology!), the order of the tables in the FROM clause is important; you influence this order by manually assigning row counts to the tables in Designer.
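As an illustration, under cost-based optimization you could refresh the statistics and then test a hint in SQL*Plus; this is only a sketch, and the schema, table, and index names are placeholders:

    -- Refresh optimizer statistics for the fact table (cost-based optimizer)
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'DWH', tabname => 'SALES_FACT');

    -- Nudge the optimizer towards a specific (hypothetical) index with a hint
    SELECT /*+ INDEX(f sales_fact_date_idx) */
           f.sale_date, SUM(f.amount)
    FROM   sales_fact f
    WHERE  f.sale_date >= TO_DATE('2009-01-01', 'YYYY-MM-DD')
    GROUP BY f.sale_date;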

In particular for Oracle:
If you are using a star schema for your data mart, turn on the STAR TRANSFORMATION option in Oracle and prefer bitmap indexes for your dimension tables. Also check your block size and whether your (fact) tables are fragmented (tablespace).
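A minimal sketch of that setup on the Oracle side; the session parameter is real, while the table and column names are placeholders, and the bitmap indexes shown are on the fact table's foreign-key columns (the usual prerequisite for star transformation):

    -- Enable star transformation for the session (can also be set instance-wide)
    ALTER SESSION SET star_transformation_enabled = TRUE;

    -- Bitmap indexes on the fact table's foreign keys to the dimension tables
    CREATE BITMAP INDEX sales_fact_time_bix    ON sales_fact (time_id);
    CREATE BITMAP INDEX sales_fact_product_bix ON sales_fact (product_id);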

For large data volumes, consider partitioning your fact table, for example by the time dimension (year, month), country dimension, etc.
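A sketch of a monthly range-partitioned fact table in Oracle, with illustrative names and partition boundaries only:

    CREATE TABLE sales_fact (
        sale_date   DATE,
        customer_id NUMBER,
        amount      NUMBER
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p_2009_01 VALUES LESS THAN (TO_DATE('2009-02-01', 'YYYY-MM-DD')),
        PARTITION p_2009_02 VALUES LESS THAN (TO_DATE('2009-03-01', 'YYYY-MM-DD')),
        PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );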

If you are able to look at the execution plan, this may give you insight into how you can make improvements. Most DBMSs (Oracle, IBM UDB, etc.) have some sort of utility to find out which indexes the SQL statements are using and what execution path they follow. You might need this kind of tool to find out what is wrong with the SQL first. Investigate your SQL statement in SQL*Plus (Oracle) or a similar tool. In a past job, I was given the task of performance-tuning SQL statements; minor changes to the SQL can have tremendous impacts on retrieval times. Just because a statement is using an index does not mean that it is using a good index. Especially investigate selected objects that are also used in table joins: which table you pull the object from can make a big difference yet still yield the same results.
Have someone do an Explain Plan on your query. The result will provide a starting point for your performance tuning. If you have the knowledge to do this yourself, I recommend tools such as TOAD, SQL Station, or DBArtisan. These tools will also let you monitor server session statistics to see what your server is doing while your slow query is running.
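In Oracle, for example, you can paste the SQL generated by the report into SQL*Plus and inspect its plan; the query itself below is only an illustration with made-up table names:

    EXPLAIN PLAN FOR
    SELECT c.country, SUM(f.amount)
    FROM   sales_fact f
    JOIN   customer_dim c ON c.customer_id = f.customer_id
    GROUP BY c.country;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);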

Consider playing with the value of the array fetch size on your universe/database connection (see also 2.). The optimum value is different for everyone.

Consider building summary/aggregate tables (in the database) to speed up queries, for example sales transactions precalculated at the granularity of month instead of daily sales transactions. As a rule of thumb, a summary table should contain 10% or fewer rows than the original fact table to produce measurable performance gains.
To utilize these summary tables, use the Query Rewrite option (Oracle, etc.), which is totally transparent to BusinessObjects, or if needed use @Aggregate_Aware in the universe (Designer).
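In Oracle such a summary can be built as a materialized view; ENABLE QUERY REWRITE lets the optimizer transparently answer matching queries from the summary (the names below are made up, and query rewrite must be enabled for the session/instance):

    CREATE MATERIALIZED VIEW sales_month_mv
    ENABLE QUERY REWRITE
    AS
    SELECT TRUNC(sale_date, 'MM') AS sale_month,
           customer_id,
           SUM(amount)            AS amount
    FROM   sales_fact
    GROUP BY TRUNC(sale_date, 'MM'), customer_id;

Alternatively, in Designer the measure's SELECT can reference both the summary and the detail table with @Aggregate_Aware, listing the most aggregated table first (table names are again placeholders, and Aggregate Navigation must be set up so Designer knows which objects are incompatible with the summary table):

    @Aggregate_Aware(sum(sales_month_agg.amount), sum(sales_fact.amount))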

Universe Design
- Make sure all your universe measures are using a SQL aggregate function (as a rule of thumb) to reduce the size of your result set by aggregating the data at the database level (instead of projecting the data in your report). Note: SQL aggregation and projection for measures (set via Designer) are two very different things.
- Evaluate shortcut joins or Index Awareness.
- Consider using the universe parameter JOIN_BY_SQL (available since XI R2; see B) below) if a data provider generates multiple SQL statements whose microcubes are then joined.
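If you decide to try it, JOIN_BY_SQL is a simple name/value setting in the universe parameters dialog in Designer; with it enabled, a single SQL statement is generated instead of several statements whose results are joined in the microcube:

    JOIN_BY_SQL = Yes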

2. Network performance
Make sure you are using a fast network connection with appropriate bandwidth between the database/DBMS hosting your source data/data mart and the BOE server/DesktopIntelligence workstation. Check for latency, required hops, and packet loss (if any) from the database to the BOE server, using the PING or NETSTAT commands, for example.
Again, adjust the Array Fetch Size Parameter (universe connection via Designer, see also 1.). Rule of thumb: if your network is reliable (no packet loss) choose a higher Array Fetch Size.

3. Optimizing Computation Time (Reporter):

Suggestions:
- Reduce formulas/local report variables and complex filters (reporter side), etc.
- Use variables instead of formulas
- Remove Auto-sizing (auto-height/width) if possible

- Look at drilling and applying drill filters to your data provider
- Parameterize your documents using Prompts
- Limit the amount of data, if you can limit rows further that will help

- Crosstabs and complex formulas will slow it down
- Complex filters, breaks, everything will slow it down a little
- Charts can slow the report tabs down

Here are some other ideas:

A) Is the result set of data in the report very large? Perhaps you should add more conditions/prompts to your query to reduce the number of records returned from the database.
Do not expect a report containing hundreds of thousands of rows to be fast, as it also has to be rendered.
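For example, a universe condition or query filter built on @Prompt pushes the restriction into the generated SQL instead of pulling everything back; the column and prompt text below are made up:

    sales_fact.sale_year = @Prompt('Enter year:','N',,mono,free)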

B) Is the report very complicated, e.g. does it contain several data providers/queries or many calculations? See if you can reduce the complexity of the report. Data providers are executed sequentially, and the merging of their microcubes costs performance.
Consider using the universe parameter JOIN_BY_SQL (available since XI R2), if a data provider generates multiple SQL statements, whose microcubes are "joined".

C) Is the query on the database side taking a long time to execute? Then you should work with your DBA to see what can be done on the database side to optimize query performance (show the generated SQL to your DBA).

D) Consider optimizing your BOE cluster architecture (XI R2): for example, configure additional Web Intelligence Report Servers, add cache, etc.




Reposted from: http://www.forumtopics.com/busobj/viewtopic.php?p=147179#147179

From the “ITPUB Blog”, link: http://blog.itpub.net/15720542/viewspace-677017/. Please cite the source when reposting.

