A Brief Introduction to Heuristic Evaluation

Originally posted 2006-05-18 08:57:00

 

Heuristic evaluation
Summary
Heuristic evaluation is a form of usability inspection where usability specialists judge whether each element of a user interface follows a list of established usability heuristics. Expert evaluation is similar, but does not use specific heuristics.

Usually two to three analysts evaluate the system with reference to established guidelines or principles, noting down their observations and often ranking them in order of severity. The analysts are usually experts in human factors or HCI, but less experienced evaluators have also been shown to report valid problems.

A heuristic or expert evaluation can be conducted at various stages of the development lifecycle, although it is preferable to have already performed some form of context analysis to help the experts focus on the circumstances of actual or intended product usage.

Benefits
The method provides quick and relatively cheap feedback to designers. The results generate good ideas for improving the user interface. The development team will also receive a good estimate of how much the user interface can be improved.
There is a general acceptance that the design feedback provided by the method is valid and useful. It can also be obtained early on in the design process, whilst checking conformity to established guidelines helps to promote compatibility with similar systems.
It is beneficial to carry out a heuristic evaluation on early prototypes before actual users are brought in to help with further testing.
Usability problems found are normally restricted to aspects of the interface that are reasonably easy to demonstrate: use of colours, layout and information structuring, consistency of terminology, and consistency of interaction mechanisms. It is generally agreed that problems found by inspection methods and by performance measures overlap to some degree, although each approach will find problems not found by the other.
The method can seem overly critical: because it is normally used to identify problems rather than the 'good' aspects of the interface, designers may only receive feedback on what is wrong.
Method
This method identifies usability problems based on established human factors principles and provides recommendations for design improvements. However, because the method relies on experts, the output will naturally emphasise interface functionality and design rather than the properties of the interaction between an actual user and the product.

Planning
The panel of experts must be established in good time for the evaluation. The material and the equipment for the demonstration should also be in place. All analysts need to have sufficient time to become familiar with the product in question along with intended task scenarios. They should operate by an agreed set of evaluative criteria.

Running
The experts should be aware of any relevant contextual information relating to the intended user group, tasks and usage of the product. A heuristics briefing can be held to ensure agreement on a relevant set of criteria for the evaluation although this might be omitted if the experts are familiar with the method and operate by a known set of criteria.

The experts then work with the system preferably using mock tasks and record their observations as a list of problems. If two or more experts are assessing the system, they should not communicate with one another until the assessment is complete. After the assessment period, the analysts can collate the problem lists and the individual items can be rated for severity and/or safety criticality.
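The collation step described above can be sketched in code: each expert independently submits a problem list with severity ratings, and the lists are then merged and ranked. This is an illustrative sketch only; the function and field names are hypothetical, not part of any standard tool, and the 0-4 severity scale is one common convention (0 = not a problem, 4 = usability catastrophe).

```python
from collections import defaultdict
from statistics import median

def collate(expert_reports):
    """Merge independent per-expert problem lists into one ranked list.

    Each report is a list of (problem_description, severity) pairs,
    with severity on a 0-4 scale. Returns (problem, median_severity,
    num_experts_reporting) tuples, most severe first; ties are broken
    by how many experts reported the problem.
    """
    ratings = defaultdict(list)
    for report in expert_reports:
        for problem, severity in report:
            ratings[problem].append(severity)
    merged = [(problem, median(sevs), len(sevs))
              for problem, sevs in ratings.items()]
    merged.sort(key=lambda item: (-item[1], -item[2]))
    return merged

if __name__ == "__main__":
    reports = [
        [("unlabeled toolbar icons", 3), ("slow search feedback", 2)],
        [("unlabeled toolbar icons", 4)],
    ]
    for problem, severity, count in collate(reports):
        print(f"severity {severity}: {problem} (reported by {count})")
```

Note that the experts' lists are only combined after the independent assessment period, preserving the no-communication rule above.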

Reporting
A list of identified problems is produced, which may be prioritised by severity and/or safety criticality.

In terms of summative output, the method can also provide the number of problems found, the estimated proportion of found problems relative to the theoretical total, and the estimated number of new problems expected to be found by adding a specified number of further experts to the evaluation.
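These estimates are commonly based on the model of Nielsen and Landauer (cited under Background Reading), in which the proportion of all problems found by n evaluators is 1 - (1 - λ)^n, where λ is the probability that a single evaluator finds any given problem. A minimal sketch, assuming the typical λ ≈ 0.31 reported by Nielsen and Landauer:

```python
def proportion_found(n_evaluators: int, lambda_: float = 0.31) -> float:
    """Expected proportion of all usability problems found by n evaluators,
    per the Nielsen-Landauer model: 1 - (1 - lambda)^n."""
    return 1.0 - (1.0 - lambda_) ** n_evaluators

def expected_new_problems(total_problems: int, current: int, extra: int,
                          lambda_: float = 0.31) -> float:
    """Estimated number of *new* problems found by adding `extra` evaluators
    to a panel of `current` evaluators, given an estimated problem total."""
    return total_problems * (proportion_found(current + extra, lambda_)
                             - proportion_found(current, lambda_))

if __name__ == "__main__":
    for n in (1, 3, 5):
        print(f"{n} evaluators: {proportion_found(n):.0%} of problems found")
```

With λ = 0.31 this model yields the familiar result that three to five evaluators find the bulk of problems, with sharply diminishing returns thereafter.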

A report detailing the identified problems is written and fed back to the development team. The report should clearly define the ranking scheme used if the problem lists have been prioritised.

More Information
Nielsen, Jakob. How to Conduct a Heuristic Evaluation

Variations
Three to five experts are recommended for a thorough evaluation. A quick review by one expert (often without reference to specific heuristics) is usual before a user-based evaluation to identify potential problems.

If usability experts are not available, other project members can be trained to use the method, which is useful in sensitising project members to usability issues.

Background Reading
Bias, R.G. and Mayhew, D.J. (Eds.). Cost justifying usability. Academic Press, 1994, pp.251-254.

Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proc. ACM CHI'92 (Monterey, CA, 3-7 May), pp. 373-380.

Nielsen, J. & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proc. INTERCHI '93 (Amsterdam, NL, 24-29 April).
