Auto-Icon: An Automated Code Generation Tool | A Must-Have for Development Engineers

In this season of spring blossoms, imgcook.com, the core technology product of our front-end intelligence effort, has been included in Volume 24 of the ThoughtWorks Technology Radar, and our paper on icon recognition has been officially published: a double cause for celebration.

(https://dl.acm.org/doi/10.1145/3397481.3450671)

Volume 24 of the ThoughtWorks Technology Radar (https://insights.thoughtworks.cn/thoughtworks-techradar-vol-24/) notes:

imgcook is a software-as-a-service product from Alibaba. Using intelligent techniques, it can generate front-end code from various kinds of visual drafts (Sketch/PSD/static images) with one click. imgcook can generate static code, and if you define a domain-specific language, it can also generate data-binding module code. The technology is not yet perfect: designers need to follow certain specifications to improve the accuracy of the generated code, which still requires adjustment by developers afterwards. We have always been cautious about magic code generation, because in the long run generated code is usually hard to maintain, and imgcook is no exception. But if you restrict it to a specific context, such as a one-off campaign advertising page, the technique is worth a try.

Clearly, while the ThoughtWorks team and the broader front-end community affirm the value of imgcook.com, concerns remain about the applicable scenarios and the maintainability of intelligently generated code. The front-end intelligence direction and imgcook.com's D2C (Design to Code) technology therefore still need to mature further, so that excellent product quality, advanced technology, and solid service can truly empower front-line front-end engineers and make the technology broadly accessible. For that reason, I would like to briefly address the questions of application scenarios and code maintainability, in the hope of dispelling some users' doubts.

imgcook: Application Scenarios

As ThoughtWorks notes, imgcook is first of all a SaaS (software-as-a-service) product that generates front-end code from visual drafts with one click; all of its features are available at imgcook.com. Second, by defining a DSL (domain-specific language), imgcook can be adapted to different targets such as the web front end, native clients, and mini programs. Finally, through a CLI (command-line interface), teams can integrate imgcook's capabilities deeply into their own front-end platforms and extend their own business-data-object recognition with imgcook's field model, covering scenarios such as consumer products, tools, marketing and shopping-guide products, and campaign landing pages. imgcook has already been adopted by more than 17 business units within Alibaba Group, and more than 20 teams call its APIs to build their own platforms, supporting a variety of consumer-facing and mobile scenarios. In addition, imgcook can generate code for CRUD-centered back-office pages, which has been applied at scale in businesses such as Alibaba Cloud.

imgcook: Code Maintainability

Code maintainability has always been a focus for us. From the earliest code-generation rules covering variable naming, formatting, and usage conventions, to metrics for the usable-code ratio, we assess the usability and maintainability of imgcook-generated code along many dimensions:

(Figure: metrics and monitoring for code maintainability)

In addition, we analyze bad cases in detail every week to identify problems and defects in engineering, rule-based algorithms, and machine-learning model capability, and fix them; on average, several hundred bugs and issues are resolved this way each month. Continuous improvement and iteration, combined with continuous user adoption and feedback, give imgcook a growth flywheel driven by user experience.

Thoughts on the Future

First, we will keep focusing on two core metrics: generalization across application scenarios and code-generation accuracy. Guided by user feedback, we will continue to optimize and iterate with the help of machine learning, and serve front-line engineers with an excellent experience and rock-solid product quality, building technology that benefits everyone.

Second, with the front-end machine-learning framework provided by https://github.com/alibaba/pipcook, we will help front-line developers customize their own algorithm models with machine learning, further improving what imgcook can deliver for their businesses.

Finally, around problems such as animation, motion effects, and rich interaction, we will use imgcook to reduce the practical engineering difficulties front-end teams face in complex, cross-platform, cross-scenario development, further strengthening the business impact of front-end technology and freeing the businesses it supports from productivity constraints.

With the CCF paper on IconFont recognition as a starting point, the intelligence direction of the Alibaba front-end committee will produce more valuable research and share it at top conferences and in academic journals for peer review, so that we can evaluate the technical merit of our research and engineering results from a more objective and comprehensive perspective. This also brings new ideas and theoretical tools to front-end technology. Together with the Alibaba front-end committee, the intelligence direction, Taobao front-end, the F(x) Team, and partners in the open-source community and technology ecosystem, I will keep working toward our ideal of inclusive technology that empowers front-line engineers, and let time prove everything.

The Paper

Auto-Icon: An Automated Code Generation Tool for Icon Designs Assisting in UI Development

ABSTRACT

Approximately 50% of development resources are devoted to UI development tasks [8]. Within this large share of development resources, developing icons can be time-consuming, because developers need to consider not only effective implementation methods but also easy-to-understand descriptions. In this study, we define 100 icon classes through iterative open coding of an existing icon design sharing website. Based on a deep learning model and computer vision methods, we propose an approach to automatically convert icon images to fonts with descriptive labels, thereby reducing the laborious manual effort for developers and facilitating UI development. We quantitatively evaluate the quality of our method in a real-world UI development environment and demonstrate that our method offers developers accurate, efficient, readable, and usable code for icon images, saving 65.2% of development time.

CCS CONCEPTS

• Human-centered computing → Empirical studies in accessibility.

KEYWORDS

code accessibility, icon design, neural networks


1  INTRODUCTION

A user interface (UI) consists of a series of elements, such as text, colors, images, and widgets. Designers constantly focus on icons because they are highly functional in a user interface [9,39,41,51]. One of the biggest benefits of icons is that they can be universal. For instance, by adding a red "X" icon to your user interface design, users are informed that clicking this icon closes a component. Furthermore, icons can make UIs look more engaging. For example, instead of using basic bullets or drop-downs filled with words, a themed group of icons can capture instant attention from users. Consequently, icons have become an elegant yet efficient way to communicate with users and guide them through the experience.

Despite all these benefits, icons have two fundamental limitations in the day-to-day development environment, in terms of rendering speed and code accessibility. First, to ensure smooth user interaction, a UI should be rendered in under 16 ms [22,23,30], while an icon implemented as an image faces slow rendering due to image download speed, image loading efficiency, and so on. These issues directly affect product quality and user experience, requiring more effort from developers to devise an advanced method to overcome the problem. Second, in the process of UI implementation, many developers directly import icon image resources from UI draft files without considering the meaning of the content, resulting in poor descriptions/comments in the code. Different implementations can render the same visual effect to users, yet differ greatly in how easy they are for developers to build and maintain. Non-descriptive code increases the complexity and effort required of developers, as they need to look at the associated location in the UI to understand what the code means.

This challenge motivates us to develop a proactive tool to address these limitations and improve the efficiency and accessibility of code. Our tool, Auto-Icon, involves three main features. First, to meet the requirement of efficient rendering, we develop an automated technique to convert an icon image to an icon font, i.e., a typeface font. Once the font is loaded, the icon is rendered immediately without downloading image resources, thereby reducing HTTP requests and improving rendering speed. An icon font can further optimize rendering performance by adopting HTML5 offline storage. Besides, icon fonts have other attributes that facilitate UI development: they are easy to use (loaded via the CSS @font-face rule) and flexible (colors can be changed and glyphs scale losslessly). Second, understanding the meaning of icons is a challenging problem. There are numerous types of icons in UIs. Icons representing the same meaning can have different styles and be presented at different scales, as shown in Table 1. Also, icons are often not co-located with text explaining their meaning, making it difficult to understand them from context. To give developers easy access to the meaning of an icon, we collect 100k icons from the existing icon sharing website Alibaba Iconfont [2], each associated with a label written by a designer. By analyzing the icon images and labels, we construct 100 categories, such as "left", "pay", "calendar", and "house". We then train a deep learning classification model to predict the category of an icon as its description. The experiments demonstrate that our model, with an average accuracy of 0.87 and an efficient classification speed of 17.48 ms, outperforms other deep learning based models and computer vision based methods. Third, to give developers more accessible descriptions of icon images, we also detect the primary color of icons in the HSV color space [66]. We integrate these mechanisms into Auto-Icon to provide intelligent support for developers in the real context of UI development, helping them write standardized and efficient code.
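As a rough illustration of the second and third features, the sketch below pairs a generic CNN classifier (a torchvision ResNet-18 stand-in; the paper's actual architecture is not specified in this excerpt) with a simple hue-histogram heuristic for primary-color detection in HSV space. The class list, saturation threshold, and file names are hypothetical and only for illustration.

```python
# Minimal sketch (assumptions noted above): predict a descriptive label for an
# icon image and estimate its primary color in HSV space.
import cv2
import numpy as np
import torch
from torchvision import models, transforms

# Hypothetical subset of the 100 descriptive categories mentioned in the paper.
ICON_CLASSES = ["left", "pay", "calendar", "house"]

def classify_icon(image_path: str, model: torch.nn.Module) -> str:
    """Run a CNN classifier over the icon and return the predicted label."""
    preprocess = transforms.Compose([
        transforms.ToTensor(),                      # HxWxC uint8 -> CxHxW float in [0, 1]
        transforms.Resize((224, 224)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    bgr = cv2.imread(image_path, cv2.IMREAD_COLOR)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)            # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return ICON_CLASSES[int(logits.argmax(dim=1))]

def primary_color_hsv(image_path: str) -> tuple:
    """Estimate the dominant color as an (H, S, V) triple by taking the most
    frequent hue among sufficiently saturated pixels."""
    bgr = cv2.imread(image_path, cv2.IMREAD_COLOR)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = s > 40                                   # ignore near-grey background pixels
    if not mask.any():                              # monochrome icon: fall back to all pixels
        mask = np.ones_like(s, dtype=bool)
    hist = np.bincount(h[mask].ravel(), minlength=180)  # OpenCV hue range is 0..179
    return int(hist.argmax()), int(s[mask].mean()), int(v[mask].mean())

if __name__ == "__main__":
    # Classifier head sized to the (hypothetical) label set; in practice the model
    # would be fine-tuned on the labeled icon dataset described in the paper.
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, len(ICON_CLASSES))
    net.eval()
    print(classify_icon("icon.png", net))
    print(primary_color_hsv("icon.png"))
```

The predicted label and dominant color would then be attached to the generated code (for example as a class name and comment), which is what makes the output readable without looking back at the design draft.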

To demonstrate the usefulness of Auto-Icon, we carry out a user study to examine whether automatically converting an icon image to an icon font with a descriptive label improves code accessibility and accelerates UI development. We analyze feedback from ten professional developers, all of it positive, and find that the icon code generated by our tool achieves better readability than code manually written by professional developers. Besides, Auto-Icon has been implemented and deployed in the Alibaba Imgcook platform. The results demonstrate that our tool provides 84% usable code for icon images in realistic development situations. Our contributions can be summarized as follows:

  • We identify fundamental limitations in how icon images are handled in existing UI development. Informal interviews with professional developers confirm these issues qualitatively.

  • Based on the emerging label categories, we develop deep-learning and computer-vision based techniques, packaged as Auto-Icon, that convert an icon image to an icon font together with a label describing its meaning and color, helping developers understand the code.

  • We conduct large-scale experiments to evaluate the performance of Auto-Icon and show that it achieves good accuracy compared with baselines. An evaluation with developers and a deployment test on the Imgcook platform demonstrate the usefulness of our tool.

  • We contribute to the IUI community by offering intelligent support that lets developers implement icon images efficiently and in compliance with code standards.

2  RELATED WORKS

2.1  UI Rendering

Ensuring fast rendering is an essential part of UI development, since slow rendering creates a poor user experience. Many studies focus on improving rendering speed by reducing bugs [11,36,45,47,49,57,60,70]. In contrast, we focus on analyzing image-display performance in UI rendering. There are a few related works in this domain. For example, Systrace [4] is a tool that allows developers to collect precise timing information about UI rendering on devices, but it does not provide any suggestions for improvement.

To address this problem, many studies introduce approaches to improve rendering efficiency, such as image resizing based on pattern analysis [48] and manual image resource management based on resource-leakage analysis [72]. Gao et al. [29] implement a system called DRAW, which reveals UI performance problems in an application such as excessive overdraw and slow image components. Guided by DRAW's image-display performance analysis, developers can manually improve the rendering of slow image displays. While these works offer image-management suggestions that help developers achieve better rendering performance, the improvements still have to be made manually. In contrast, we propose an image conversion technique based on computer vision and graphics algorithms that converts icon images into font glyphs in order to automatically improve UI rendering performance.
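The paper's actual conversion pipeline is not described in this excerpt. As a rough sketch of the general idea (tracing a bitmap icon into the vector outline a font glyph is built from), the snippet below uses OpenCV contour extraction and emits an SVG path; the input file name, Otsu thresholding, and SVG output are illustrative assumptions, and a real icon-font build would further convert such outlines into glyphs with a font tool.

```python
# Minimal sketch (assumptions noted above): binarize an icon image, trace its
# outer contours, and serialize them as an SVG outline.
import cv2

def icon_to_svg_path(image_path: str) -> str:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize so that foreground strokes become white (255) on a black background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    paths = []
    for contour in contours:
        points = contour.reshape(-1, 2)             # (N, 1, 2) -> (N, 2) pixel coordinates
        d = "M " + " L ".join(f"{x} {y}" for x, y in points) + " Z"
        paths.append(d)
    h, w = gray.shape
    body = "".join(f'<path d="{d}"/>' for d in paths)
    return f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {w} {h}">{body}</svg>'

if __name__ == "__main__":
    print(icon_to_svg_path("icon.png"))             # hypothetical input file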


2.2 Code Accessibility

Digital devices such as computers, mobile phones, and tablets are widely used. To ensure software quality, much research has been conducted [10,28,34]. Most of these works focus on the functionality and usability of apps, such as GUI design [13,15,16,74], GUI animation linting [78], localization [68], privacy and security [17,18,20,25,76], performance [47,79], and energy efficiency [6,7].

Few research works are related to accessibility issues. Some works in the human-computer interaction area have explored the accessibility issues of mobile apps [14,40,54,67,75]. In these works, the lack of descriptions for image-based components in UIs is commonly regarded as an important accessibility issue. For instance, Harrison et al. [32] establish an initial "kineticon vocabulary" containing a set of 39 kinetic behaviors for icon images, such as spin, bounce, and running. Ross et al. [61] identify some common labeling issues in Android apps by analyzing icon image labels. Using a crowdsourcing method, Zhang et al. [77] annotate GUI elements that lack content descriptions. However, these works rely mainly on support from developers. With the rapid development of convolutional neural networks (CNNs), dramatic advances have appeared in image classification, which can be applied to automatically annotate tags for images. Chen et al. [12] analyze the tags associated with whole GUI artworks collected from Dribbble and derive a vocabulary that summarizes the relationships between the tags. Based on this vocabulary, they adopt a classification model to recommend general tags for a GUI, such as "sport" and "food". Different from their work, we predict more fine-grained categories, such as "football" and "burger". In addition, they predict categories for the whole UI, which is subjective to human perception, whereas the categories of small icon images are usually more intuitive. A work similar to ours is the icon-sensitive classification by Xiao et al. [73]. They use traditional computer vision techniques such as SIFT and FAST to extract icon features and classify icons into 8 categories by computing their similarity. After systematically investigating icon images, we discover the fundamental limitations of icon images discussed in Section 4.1: high cross-class similarity, and icons that are small, transparent, and low-contrast. These findings conflict with methods applied in their paper, such as applying rotation to augment the dataset. Moreover, we show in Section 5.1.3 that a deep learning model is more effective for icon classification than traditional computer vision techniques. In our work, according to the characteristics of icon images, we propose a deep learning model that automatically classifies icons into more fine-grained (100) categories, and we also adopt a computer vision technique to detect their primary color.


As the paper is too long to reproduce in full here, please use the link below to download the complete paper.

Full paper: https://dl.acm.org/doi/epdf/10.1145/3397481.3450671


Author | 甄子

Editor | 橙子君

Produced by | Alibaba New Retail, Taobao Technology
